Normalizing All Layers (II): Back-Propagation

Date: 2022-03-08 04:17:08

This post consists of notes extracted from http://saban.wang/2016/03/28/Normalizing-All-Layers%EF%BC%9A-Back-Propagation/
It mainly covers how to normalize layers for back-propagation.

Introduction

In the last post, we discussed how to make all neurons of a neural network follow a standard Gaussian distribution. However, as the Conclusion section noted, we had not yet considered the back-propagation procedure. In fact, when we talk about the gradient vanishing or exploding problem, we usually refer to the gradient flow in the back-propagation procedure. Because of this, the correct approach seems to be to normalize the backward gradients of the neurons instead of the forward values.

In this post, we will discuss how to normalize all the gradients using a philosophy similar to that of the last post: for a given gradient dy∼N(0,I), normalize the layer so that dx is expected to have zero mean and unit standard deviation.

Parametric Layer

Consider the back-propagation formulation of the Convolution and InnerProduct layers,

dx = W^T dy

From this we get a strategy similar to the forward case: normalize each column of W (equivalently, each row of W^T) to lie on the ℓ2 unit ball. Please note that here we normalize along the fan-out dimension of W, not the fan-in dimension as in the forward propagation.
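As a quick illustration, here is a minimal NumPy sketch (my own, not from the original post) that normalizes the fan-out dimension of a forward weight W and checks that the back-propagated gradients keep roughly zero mean and unit std:

```python
import numpy as np

rng = np.random.default_rng(0)

fan_in, fan_out = 256, 128
W = rng.standard_normal((fan_out, fan_in))       # forward: y = W @ x

# Normalize each column of W (the fan-out direction) onto the l2 unit ball,
# so that every dx_i = W[:, i] . dy has unit variance when dy ~ N(0, I).
W_bp = W / np.linalg.norm(W, axis=0, keepdims=True)

dy = rng.standard_normal((fan_out, 10000))       # simulated backward gradients
dx = W_bp.T @ dy                                 # dx = W^T dy

print(dx.mean(), dx.std())                       # ~0 and ~1
```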

Activation Layers

One problem that cannot be avoided when deriving the formulations for activations is that we must assume not only the distribution of the gradients but also that of the forward input to the activation, because the gradients of activations usually depend on the inputs. Here we assume that both the input x and the gradient dy follow the standard Gaussian distribution N(0,I), and that they are independent of each other.

ReLU

y=max(0,x)

Its backward gradients can be easily obtained:
dx_i = dy_i if x_i > 0, and dx_i = 0 otherwise.

When x∼N(0,I), the gradient mask of the ReLU layer can be seen as a Bernoulli variable with probability 0.5, so the backward mean and standard deviation formulas are similar to those of the Dropout layer,
E[dx]=0,

σ[dx] = 1/√2

Here a question arises: we now have two different standard deviations, one for the forward values and one for the backward gradients; which one should be used to normalize the ReLU layer? My tendency is to use the σ calculated from the backward gradients, because the backward σ is the real culprit behind gradient vanishing. Moreover, since the bias term is not involved in the backward propagation, it is good practice to subtract the mean 1/√(2π) after the ReLU activation to ensure zero mean.
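The numbers above are easy to verify by simulation. The following sketch (my own check, assuming x and dy are independent standard normals) estimates the forward mean and the backward mean and std of ReLU:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)    # forward input
dy = rng.standard_normal(1_000_000)   # incoming backward gradient

y = np.maximum(0.0, x)                # forward ReLU
dx = dy * (x > 0)                     # backward ReLU: pass dy where x > 0

print(y.mean(), 1 / np.sqrt(2 * np.pi))   # forward mean ~ 0.3989
print(dx.mean())                          # ~ 0
print(dx.std(), 1 / np.sqrt(2))           # backward std ~ 0.7071
```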

Sigmoid

The backward gradient of Sigmoid activation is,

dx = dy · y(1 − y)

From simulation, we can get E[dx] = 0 and σ[dx] = 0.2123.
The same as with ReLU, we should still subtract E[y] = 0.5 after the Sigmoid activation and use the σ calculated from the backward gradients, 0.2123.
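The quoted values can be reproduced with a short Monte-Carlo simulation; the sketch below is my own and simply assumes x and dy are independent standard normals:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)    # forward input
dy = rng.standard_normal(1_000_000)   # incoming backward gradient

y = 1.0 / (1.0 + np.exp(-x))          # forward sigmoid
dx = dy * y * (1.0 - y)               # backward: dy * sigmoid'(x)

print(y.mean())    # ~ 0.5, the mean to subtract after the activation
print(dx.mean())   # ~ 0
print(dx.std())    # ~ 0.21, close to the 0.2123 quoted above
```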

Pooling Layer

  1. Average Pooling
    3x3: the backward std is 1/9, and 1/4 for 2x2.
  2. Max Pooling
    3x3: the backward std is 1/3, and 1/2 for 2x2 (see the simulation sketch below).
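Here is a small simulation (my own sketch, treating each k×k window independently with a single standard-normal dy routed back into it) that reproduces the four standard deviations listed above:

```python
import numpy as np

rng = np.random.default_rng(0)

def backward_std(k, mode, n=200_000):
    dy = rng.standard_normal(n)                  # one gradient per k x k window
    dx = np.zeros((n, k * k))
    if mode == "average":
        dx[:] = (dy / (k * k))[:, None]          # average pooling spreads dy / k^2
    else:
        x = rng.standard_normal((n, k * k))      # forward inputs decide the argmax
        dx[np.arange(n), x.argmax(axis=1)] = dy  # max pooling routes dy to the argmax
    return dx.std()

print(backward_std(3, "average"), 1 / 9)   # ~0.111
print(backward_std(2, "average"), 1 / 4)   # ~0.25
print(backward_std(3, "max"), 1 / 3)       # ~0.333
print(backward_std(2, "max"), 1 / 2)       # ~0.5
```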

Dropout Layer

The backward formula for the Dropout layer is almost the same as the forward one; we should still divide the preserved values by √q to achieve unit std in both the forward and backward procedures.
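A minimal sketch of this scaling (mine, assuming q denotes the keep probability and the same mask is reused in the backward pass) is shown below; note that dividing by √q targets unit std, whereas the more common inverted dropout divides by q to preserve the mean:

```python
import numpy as np

rng = np.random.default_rng(0)
q = 0.5                                  # keep probability (assumed notation)
n = 1_000_000

mask = rng.random(n) < q                 # the same mask is used forward and backward
x = rng.standard_normal(n)
dy = rng.standard_normal(n)

y = x * mask / np.sqrt(q)                # forward dropout, scaled by sqrt(q)
dx = dy * mask / np.sqrt(q)              # backward reuses the identical mask

print(y.std(), dx.std())                 # both ~ 1
```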

Conclusion

In this post, we have discussed the normalization strategy that serves the gradient flow of the backward propagation. The standard deviations of the backward gradients in the layers of a modern CNN are recorded here. However, when we use the std of the backward gradients, the forward value scale is no longer well controlled. Inhomogeneous activations, such as sigmoid and tanh, are not suitable for this method, because the resulting input scale may not cover a sufficiently non-linear part of the activation.

So maybe a good choice is to use separate scaling methods for the forward and backward propagation? This idea conflicts with the back-propagation algorithm, so we should still examine it carefully through experiments.