Sharing & Scopes
When developing a large, relatively complex project, improving both programming efficiency and runtime efficiency becomes a real need. TensorFlow lets us achieve this with scopes and shared variables, namely through tf.variable_scope() and tf.get_variable().
tf.get_variable() lets us share variables. For example, when running the same NN computation on two images, the NN's parameters are shared: there is only one copy of them in memory, and we do not have to build a separate NN for each computation. tf.variable_scope(), in turn, isolates same-named variables by prefixing their names to form a scope. Via scope.reuse_variables(), we can then reuse the same-named variables inside a scope instead of creating new ones. A name scope (tf.name_scope()) can be opened and nested inside a variable scope; it affects only the names of ops, not of variables.
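A minimal sketch of that last point, assuming the TensorFlow 1.x API: a name scope opened inside a variable scope prefixes the ops it contains, while variables created with tf.get_variable() ignore it.

#!python
import tensorflow as tf

with tf.variable_scope("foo"):
    with tf.name_scope("bar"):
        v = tf.get_variable("v", [1])  # the variable ignores the name scope
        x = 1.0 + v                    # the add op picks up both scopes

print(v.name)     # foo/v:0
print(x.op.name)  # foo/bar/add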
Below is an example of a two-layer convolutional NN.
#!python
import tensorflow as tf

# A two-layer conv NN demo.
# A conv -> relu layer.
def conv_relu(input, kernel_shape, bias_shape):
    # Create variable named "weights".
    weights = tf.get_variable("weights", kernel_shape,
                              initializer=tf.random_normal_initializer())
    # Create variable named "biases".
    biases = tf.get_variable("biases", bias_shape,
                             initializer=tf.constant_initializer(0.0))
    conv = tf.nn.conv2d(input, weights, strides=[1, 1, 1, 1], padding='SAME')
    return tf.nn.relu(conv + biases)

def image_classification(input_image):
    with tf.variable_scope("conv1"):
        # Variables created here will be named "conv1/weights", "conv1/biases".
        relu1 = conv_relu(input_image, [5, 5, 32, 32], [32])
    with tf.variable_scope("conv2"):
        # Variables created here will be named "conv2/weights", "conv2/biases".
        return conv_relu(relu1, [5, 5, 32, 32], [32])

# image1 and image2 are assumed to be tensors defined elsewhere,
# e.g. with shape [batch, height, width, 32].
with tf.variable_scope("image_classification") as scope:
    result1 = image_classification(image1)
    # Reuse the variables created by the first call for the second image.
    scope.reuse_variables()
    result2 = image_classification(image2)
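Note that without the scope.reuse_variables() call, the second image_classification(image2) would raise a ValueError, since tf.get_variable() refuses to silently re-create an existing variable such as image_classification/conv1/weights. A quick way to confirm the sharing (a sketch, assuming the graph above has been built) is to count the variables:

#!python
# Four variables in total (conv1/weights, conv1/biases,
# conv2/weights, conv2/biases), not eight: the second call
# reused the parameters created by the first.
print(len(tf.global_variables()))  # -> 4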