Derivative of the softmax loss function

Time: 2022-09-20 03:34:35

Consider back-propagation in a neural network with a softmax classifier, which uses the softmax function:
\[\hat y_i=\frac{\exp(o_i)}{\sum_j \exp(o_j)}\]

This is used in a loss function of the form:
\[\mathcal{L}=-\sum_j{y_j\log \hat y_j}\]
where \(o\) is the vector of logits (the pre-softmax outputs). We need the derivative of \(\mathcal{L}\) with respect to \(o\).
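As a quick sanity check on these definitions, here is a minimal NumPy sketch (the helper names `softmax` and `cross_entropy`, and the example logits and target, are my own choices, not part of the derivation):

```python
import numpy as np

def softmax(o):
    """y_hat_i = exp(o_i) / sum_j exp(o_j); shifted by max(o) for numerical stability."""
    e = np.exp(o - np.max(o))
    return e / e.sum()

def cross_entropy(y, y_hat):
    """L = -sum_j y_j * log(y_hat_j)."""
    return -np.sum(y * np.log(y_hat))

o = np.array([2.0, 1.0, 0.1])   # example logits
y = np.array([1.0, 0.0, 0.0])   # one-hot target
y_hat = softmax(o)
print(y_hat, cross_entropy(y, y_hat))
```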

Derivative of the softmax function

If \(i=j\),
\[\frac{\partial \hat y_j}{\partial o_i}=\frac{\exp(o_i)\times \sum_k \exp(o_k) - \exp(o_i)\exp(o_i)}{\left(\sum_k \exp(o_k)\right)^2}=\hat y_i(1-\hat y_i)\]
If \(i\ne j\),

\[\frac{\partial \hat y_j}{\partial o_i}=\frac{0 - \exp(o_j)\exp(o_i)}{\left(\sum_k \exp(o_k)\right)^2}=-\hat y_i \hat y_j\]

These two cases can be conveniently combined using the Kronecker delta, so the derivative becomes

\[\frac{\partial \hat y_j}{\partial o_i}=\hat y_j(\delta_{ij}-\hat y_i)\]

where the Kronecker delta \(\delta_{ij}\) is defined as:
\[\delta_{ij} = \begin{cases}
0 &\text{if } i \neq j, \\
1 &\text{if } i=j. \end{cases}\]
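The combined formula can be checked numerically. Below is a small sketch, again assuming NumPy, that builds the Jacobian \(J_{ij}=\partial \hat y_j/\partial o_i=\hat y_j(\delta_{ij}-\hat y_i)\) from `np.diag` and `np.outer` and compares it against central finite differences (the example logits are arbitrary):

```python
import numpy as np

def softmax(o):
    e = np.exp(o - np.max(o))
    return e / e.sum()

def softmax_jacobian(o):
    """J[i, j] = d y_hat_j / d o_i = y_hat_j * (delta_ij - y_hat_i)."""
    y_hat = softmax(o)
    return np.diag(y_hat) - np.outer(y_hat, y_hat)

o = np.array([2.0, 1.0, 0.1])   # arbitrary example logits
eps = 1e-6
numeric = np.empty((len(o), len(o)))
for i in range(len(o)):
    d = np.zeros_like(o)
    d[i] = eps
    # Row i approximates d y_hat / d o_i.
    numeric[i] = (softmax(o + d) - softmax(o - d)) / (2 * eps)

print(np.allclose(softmax_jacobian(o), numeric))   # True
```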

Derivative of the cross-entropy loss function

\[\begin{split}\frac{\partial \mathcal{L}}{\partial o_i}&=-\sum_k y_k\frac{\partial \log \hat y_k}{\partial o_i}=-\sum_k y_k\frac{1}{\hat y_k}\frac{\partial \hat y_k}{\partial o_i}\\
&=-y_i(1-\hat y_i)-\sum_{k\neq i}y_k\frac{1}{\hat y_k}(-\hat y_k \hat y_i)\\
&=-y_i(1-\hat y_i)+\sum_{k\neq i}y_k \hat y_i\\
&=-y_i +y_i\hat y_i+ \hat y_i\sum_{k\ne i}{y_k}\\
&=\hat y_i\sum_k{y_k}-y_i\\
&=\hat y_i-y_i\end{split}\]

The last step uses \(\sum_k y_k=1\), since \(y\) is a one-hot vector with a single non-zero element equal to \(1\).

Finally, we get
\[\frac{\partial \mathcal{L}}{\partial o_i} = \hat y_i - y_i\]
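This result is easy to verify numerically: the analytic gradient \(\hat y - y\) should match a finite-difference estimate of the loss gradient. A minimal sketch, again assuming NumPy and an arbitrary one-hot target:

```python
import numpy as np

def softmax(o):
    e = np.exp(o - np.max(o))
    return e / e.sum()

def loss(o, y):
    """Cross-entropy of softmax(o) against the one-hot target y."""
    return -np.sum(y * np.log(softmax(o)))

o = np.array([2.0, 1.0, 0.1])   # arbitrary example logits
y = np.array([0.0, 1.0, 0.0])   # one-hot target

analytic = softmax(o) - y       # dL/do_i = y_hat_i - y_i

eps = 1e-6
numeric = np.empty_like(o)
for i in range(len(o)):
    d = np.zeros_like(o)
    d[i] = eps
    numeric[i] = (loss(o + d, y) - loss(o - d, y)) / (2 * eps)

print(np.allclose(analytic, numeric))   # True
```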