TensorFlow/TFLearn: ValueError: Cannot feed value of shape (256, 400, 400) for Tensor u'TargetsData/Y:0', which has shape '(?, 64)'

Time: 2023-02-06 13:48:48

I'd like to build a ConvNet whose output is the same size as its input, so I implemented it using the TFLearn library. Since I just wanted a simple example for this purpose, I used only one convolution layer with zero-padding to keep the output the same size as the input. Here is the code:


import tflearn
import tflearn.data_utils as du

X = X.reshape([-1, 400, 400, 1])
Y = Y.reshape([-1, 400, 400, 1])
testX = testX.reshape([-1, 400, 400, 1])
testY = testY.reshape([-1, 400, 400, 1])
X, mean = du.featurewise_zero_center(X)
testX = du.featurewise_zero_center(testX, mean)


# Building a Network
net = tflearn.input_data(shape=[None, 400, 400, 1])
net = tflearn.conv_2d(net, 64, 3, padding='same', activation='relu', bias=False)
sgd = tflearn.SGD(learning_rate=0.1, lr_decay=0.96, decay_step=300)
net = tflearn.regression(net, optimizer='sgd',
                     loss='categorical_crossentropy',
                     learning_rate=0.1)
# Training
model = tflearn.DNN(net, checkpoint_path='model_network',
                max_checkpoints=10, tensorboard_verbose=3)
model.fit(X, Y, n_epoch=100, validation_set=(testX, testY),
      show_metric=True, batch_size=256, run_id='network_test')

However, this code raises an error:


ValueError: Cannot feed value of shape (256, 400, 400) for Tensor u'TargetsData/Y:0', which has shape '(?, 64)'

I've searched and checked some documentation, but I can't seem to get this to work.


1 Answer

#1


The problem is that your convnet output has a shape of (None, 64), but you are giving it target data (labels) with a shape of (None, 400, 400). I am not sure what you want to do with your code: are you trying to do some auto-encoding, or is it a classification task?

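If what you actually want is pixel-wise regression (an output image the same size as the input, trained against Y), one way to resolve the shape mismatch is to end the network with a 1-filter 'same'-padded convolution so the output is (None, 400, 400, 1), and use a pixel-wise loss such as mean_square instead of categorical_crossentropy. A minimal, untested sketch along those lines (assuming Y really is a 400x400 target image):

# Sketch: keep the output at (None, 400, 400, 1) so it matches the Y targets.
net = tflearn.input_data(shape=[None, 400, 400, 1])
net = tflearn.conv_2d(net, 64, 3, padding='same', activation='relu', bias=False)
net = tflearn.conv_2d(net, 1, 3, padding='same', activation='linear')  # collapse back to 1 channel
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1,
                         loss='mean_square')  # pixel-wise loss, not categorical_crossentropy

model = tflearn.DNN(net, checkpoint_path='model_network',
                    max_checkpoints=10, tensorboard_verbose=3)
model.fit(X, Y, n_epoch=100, validation_set=(testX, testY),
          show_metric=True, batch_size=256, run_id='network_test')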

For auto-encoding, the following is a convolutional auto-encoder for MNIST; you can adapt it to your own data by changing the input_data shape (a rough sketch of that adaptation follows the example):


from __future__ import division, print_function, absolute_import

import numpy as np
import matplotlib.pyplot as plt
import tflearn
import tflearn.data_utils as du

# Data loading and preprocessing
import tflearn.datasets.mnist as mnist
X, Y, testX, testY = mnist.load_data(one_hot=True)

X = X.reshape([-1, 28, 28, 1])
testX = testX.reshape([-1, 28, 28, 1])
X, mean = du.featurewise_zero_center(X)
testX = du.featurewise_zero_center(testX, mean)

# Building the encoder
encoder = tflearn.input_data(shape=[None, 28, 28, 1])
encoder = tflearn.conv_2d(encoder, 16, 3, activation='relu')
encoder = tflearn.max_pool_2d(encoder, 2)   # 28 -> 14
encoder = tflearn.conv_2d(encoder, 8, 3, activation='relu')
decoder = tflearn.upsample_2d(encoder, 2)   # 14 -> 28
decoder = tflearn.conv_2d(decoder, 1, 3, activation='relu')  # take the upsampled tensor, not the encoder

# Regression, with mean square error
net = tflearn.regression(decoder, optimizer='adam', learning_rate=0.001,
                         loss='mean_square', metric=None)

# Training the auto encoder
model = tflearn.DNN(net, tensorboard_verbose=0)
model.fit(X, X, n_epoch=10, validation_set=(testX, testX),
          run_id="auto_encoder", batch_size=256)

# Encoding X[0] for test
print("\nTest encoding of X[0]:")
# New model, re-using the same session, for weights sharing
encoding_model = tflearn.DNN(encoder, session=model.session)
print(encoding_model.predict([X[0]]))

# Testing the image reconstruction on new data (test set)
print("\nVisualizing results after being encoded and decoded:")
testX = tflearn.data_utils.shuffle(testX)[0]
# Applying encode and decode over test set
encode_decode = model.predict(testX)
# Compare original images with their reconstructions
f, a = plt.subplots(2, 10, figsize=(10, 2))
for i in range(10):
    a[0][i].imshow(np.reshape(testX[i], (28, 28)))
    a[1][i].imshow(np.reshape(encode_decode[i], (28, 28)))
f.show()
plt.draw()
plt.waitforbuttonpress()
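
If you go the auto-encoder route with your own data, the structural changes are the input_data shape and, if Y is a separate target image rather than a reconstruction of X, fitting on (X, Y) instead of (X, X). A rough, untested adaptation to 400x400 grayscale images (you may also need a smaller batch_size at that resolution):

# Sketch: the same auto-encoder, with only the input shape changed to 400x400.
encoder = tflearn.input_data(shape=[None, 400, 400, 1])
encoder = tflearn.conv_2d(encoder, 16, 3, activation='relu')
encoder = tflearn.max_pool_2d(encoder, 2)        # 400 -> 200
encoder = tflearn.conv_2d(encoder, 8, 3, activation='relu')
decoder = tflearn.upsample_2d(encoder, 2)        # 200 -> 400
decoder = tflearn.conv_2d(decoder, 1, 3, activation='relu')

net = tflearn.regression(decoder, optimizer='adam', learning_rate=0.001,
                         loss='mean_square', metric=None)
model = tflearn.DNN(net, tensorboard_verbose=0)

# Plain auto-encoding reconstructs X from X; swap in (X, Y) and (testX, testY)
# if Y is a separate target image.
model.fit(X, X, n_epoch=10, validation_set=(testX, testX),
          run_id="auto_encoder_400", batch_size=64)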
