A problem encountered with conv2d_transpose (transposed convolution, often called "deconvolution"), which is the inverse process of convolution.
import tensorflow as tf

sess = tf.Session()
batch_size = 3
output_shape = [batch_size, 8, 8, 128]
strides = [1, 2, 2, 1]
l = tf.constant(0.1, shape=[batch_size, 32, 32, 4])
w = tf.constant(0.1, shape=[7, 7, 128, 4])
h1 = tf.nn.conv2d_transpose(l, w, output_shape=output_shape, strides=strides, padding='SAME')
print(sess.run(h1))
Running this program raises the following error:
InvalidArgumentError: Conv2DCustomBackpropInput: Size of out_backprop doesn't match computed: actual = 32, computed = 4
[[Node: conv2d_transpose_6 = Conv2DBackpropInput[T=DT_FLOAT, data_format="NHWC", padding="SAME", strides=[1, 2, 2, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/cpu:0"](conv2d_transpose_6/output_shape, Const_25, Const_24)]]
The root cause is that a transposed convolution is the inverse of a convolution. Since
tf.nn.conv2d_transpose(l, w, output_shape=output_shape, strides=strides, padding='SAME')
is given an output_shape of (3, 8, 8, 128), what input shape would the corresponding forward convolution, applied to that output, produce? Let's check:
output = tf.constant(0.1, shape=output_shape)
expected_l = tf.nn.conv2d(output, w, strides=strides, padding='SAME')
print(expected_l.get_shape())
The printed shape is (3, 4, 4, 4).
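The (3, 4, 4, 4) result follows from TensorFlow's SAME-padding rule, out = ceil(in / stride), which is independent of the filter size. A minimal sketch of that shape arithmetic (plain Python, no TensorFlow required; the helper name is ours):

```python
import math

def conv2d_same_output_size(in_size, stride):
    # With padding='SAME', TensorFlow computes out = ceil(in / stride),
    # regardless of the filter size.
    return math.ceil(in_size / stride)

# Forward conv over the (3, 8, 8, 128) tensor with stride 2:
print(conv2d_same_output_size(8, 2))  # → 4
```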
But the earlier program set
l = tf.constant(0.1, shape=[batch_size, 32, 32, 4])
i.e. an input of shape (3, 32, 32, 4), which is why the error is raised. If we change the input to
l = tf.constant(0.1, shape=[batch_size, 4, 4, 4])
the error disappears.
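To make the rule explicit: conv2d_transpose runs the forward convolution's shape inference on output_shape and requires the result to match the actual input. A sketch of that consistency check for padding='SAME' (plain Python; check_conv2d_transpose_shapes is a hypothetical helper, not a TensorFlow API):

```python
import math

def check_conv2d_transpose_shapes(input_shape, output_shape, strides):
    # conv2d_transpose accepts the input only if running the forward
    # conv's SAME-padding shape formula on output_shape reproduces the
    # input's spatial dims (NHWC layout assumed).
    for i in (1, 2):  # height and width
        computed = math.ceil(output_shape[i] / strides[i])
        if computed != input_shape[i]:
            return False
    return True

# (3, 32, 32, 4) is rejected, (3, 4, 4, 4) is accepted:
print(check_conv2d_transpose_shapes([3, 32, 32, 4], [3, 8, 8, 128], [1, 2, 2, 1]))  # → False
print(check_conv2d_transpose_shapes([3, 4, 4, 4], [3, 8, 8, 128], [1, 2, 2, 1]))   # → True
```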