{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Rétropropagation du gradient dans le cadre d'une ultime couche $Softmax$"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Cet article présente la rétropropagation du gradient dans un réseau neuronal monocouche complètement connecté, avec softmax comme fonction d'activation et l'entropie croisée comme fonction de coût. Minimiser cette fonction va se résumer dans notre cas à réduire la divergence de Kullback-Leibler qui est une mesure de la distance entre le résultat obtenu et le résultat souhaité."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Le but est d'optimiser une matrice de poids W pour que la prédiction d'appartenance de X à une classe soit la plus proche de la classe Y connue."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Présentation du contexte"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Graphe"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<br>\n",
"Ce que l'on connait:<br>\n",
"$X$ vecteur d'entrées de dimension (D)<br>\n",
"$y$ classe à laquelle apparatient le vecteur d'entrées, $y$ est un scalaire $\\in \\{1,\\cdots,C\\}$. On associe à $y$ un vecteur de dimension C $Y\\_one\\_hot$, qui est la version \"on hot encoded\" de $y$. Tous ses termes sont 0 sauf $Y\\_one\\_hot[y]=1$<br>\n",
"<br>\n",
"Ce que l'on cherche:<br>\n",
"$W$ matrice des poids de dimension (D,C). Où C représente le nombre de classes possibles<br>\n",
"<br>\n",
"Représentation générale du graphe des calculs effectués:<br>\n",
"\n",
"** 1 ** multiplication matricielle : $\\lambda = X.W$<br>\n",
"\n",
"$\\lambda$ vecteur logits de dimension (C)<br>\n",
"\n",
"** 2 ** $S=softmax(\\lambda)$<br>\n",
"\n",
"$S$ vecteur de dimension (C) qui donne la probabilité d'appartenance de X pour chacune des classes<br>\n",
"\n",
"** 3 ** Fonction de coût: $L = D_{KL}(Y\\_one\\_hot||S)$ où $Y\\_one\\_hot$ correspond à la répartition de probabilité pour la classe connue.<br>\n",
"\n",
"$L$ (Loss) scalaire (1)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Motivation"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Le but est de trouver $$\\frac{\\partial L}{\\partial W}$$ Jacobien généralisé de L par rapport à W, pour pouvoir modifier W en utilisant le taux d'apprentissage:$$ W \\leftarrow W + learning\\_rate * \\frac{\\partial L}{\\partial W}$$"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## La fonction $Softmax$"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Définition"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"$$\n",
"\\begin{align*}\n",
"Softmax, \\forall N \\in \\mathbb{N}^{*}\\\\\n",
"S\\colon \n",
"&\\mathbb{R}^{N} &&\\to \\mathbb{R}^{N} \\\\\n",
"&a = \n",
"\\begin{bmatrix}\n",
"a_1\\\\ \n",
"\\vdots \\\\ \n",
"a_n\n",
"\\end{bmatrix}\n",
"&&\\mapsto S(a) = \n",
"\\begin{bmatrix}\n",
"S_1\\\\ \n",
"\\vdots \\\\ \n",
"S_n\n",
"\\end{bmatrix}\n",
"\\end{align*}\n",
"$$"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Chaque élément de $S(a)$ est défini par: $$S_i = \\frac{e^{a_i}}{ \\sum_{k=1}^{N} e^{a_k} }$$ $$\\forall i \\in \\{1,\\cdots,N\\}$$"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Par souci de simplification d'écriture on a tendance à nommer de la même manière une fonction et son résultat. $S$ ici est selon le contexte la fonction $softmax$ ou un vecteur de taille N."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Caractéristiques"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"$$\\forall i, S_i \\in ]0,1]$$"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"$$\\sum_{i=1}^{N}S_i = 1$$"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"L'ordre des valeurs de `a` est conservé et la plus haute valeur ressort clairement. Par exemple pour `a = [1,0 2,0 5,0]`on aura `S(a) = [0,02 0,05 0,93]`"
]
},
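{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick check of these figures, here is a minimal sketch (plain NumPy, not part of the original notebook) that evaluates softmax on `a = [1.0, 2.0, 5.0]` and verifies that the components sum to 1:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"a = np.array([1.0, 2.0, 5.0])\n",
"S = np.exp(a) / np.sum(np.exp(a))  # softmax, direct formula\n",
"print(np.round(S, 3))  # expected: [0.017 0.047 0.936]\n",
"print(S.sum())         # expected: 1.0 (up to floating point rounding)"
]
},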
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Softmax est comme une version \"soft\" de la fonction maximum, elle fait ressortir le maximum mais n'élimine pas complètement les autres valeurs."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Instabilité numérique"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Le problème du cacul de la valeur de $softmax$ est que l'on divise facilement de très grands nombres entre eux, source d'instabilités numériques. Pour éviter cela on va ajouter une constante judicieusement choisie."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"$$\n",
"\\frac{e^{\\lambda_{i}}}{\\sum_j e^{\\lambda_j}}\n",
"= \\frac{Ce^{\\lambda_{i}}}{C\\sum_j e^{\\lambda_j}}\n",
"= \\frac{e^{\\lambda_{i} + \\ln C}}{\\sum_j e^{\\lambda_j + \\ln C}}\n",
"$$"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Choix classique de $ln(C)$: valeur maximale du vecteur $\\lambda$ $$\\ln C = -\\max_j \\lambda_j$$"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:3: RuntimeWarning: invalid value encountered in true_divide\n",
" This is separate from the ipykernel package so we can avoid doing imports until\n"
]
}
],
"source": [
"import numpy as np\n",
"logits = np.array([123, 456, 789]) # Exemple de 3 classes avec de larges scores\n",
"p = np.exp(logits) / np.sum(np.exp(logits)) # Plante la machine"
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [],
"source": [
"# Translation du vecteur logits pour que la plus haute valeur soit 0:\n",
"logits -= np.max(logits) # logits devient [-666, -333, 0]\n",
"p = np.exp(logits) / np.sum(np.exp(logits)) # Fonctionne correctement"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"array([5.75e-290, 2.40e-145, 1.00e+000])"
]
},
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"def my_round(value, N):\n",
" exponent = np.ceil(np.log10(value))\n",
" return 10**exponent*np.round(value*10**(-exponent), N)\n",
"my_round(p,3)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Dans notre cas de figure on calculera numériquement $$S=softmax(\\hat{\\lambda})=softmax(\\lambda)$$ où $\\hat{\\lambda}$ est une version translatée de $\\lambda$ pour éviter les instabilités numériques mais qui est égale à $S$. $\\hat{\\lambda}$ vecteur logits translaté de dimension (C)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Gradient"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"On utilise la loi donnant la dérivée d'une composée de fonctions:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"$$\n",
"\\frac{\\partial L}{\\partial W} = \\frac{\\partial L}{\\partial \\lambda} . \\frac{\\partial \\lambda}{\\partial W}\\\\\n",
"$$"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Ce qui nous donne en considérant les dimensions: $$(D,C) = (C) . (C,D,C)$$"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"où le \".\" correspond au produit matriciel généralisé aux tenseurs."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Jacobien généralisé de $\\lambda$ par rapport à $W$"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"$$\\lambda = X.W$$\n",
"$$J = \\frac{\\partial \\lambda}{\\partial W}$$"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Dimensions:\n",
"$X(D)\\\\\n",
"W(D,C)\\\\\n",
"\\lambda (C)\\\\\n",
"J (C,D,C)$"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"$$J_{ijk} = \\frac{\\partial \\lambda_i}{\\partial w_{jk}}$$"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"$$\\text{où (i,j,k) varient selon (C,D,C)} $$"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"$$\\lambda_i=\\sum_{l=1}^{D}x_lw_{li}$$"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"$$\n",
"\\begin{align*}\n",
"J_{ijk} & = \\frac{\\partial}{\\partial w_{jk}}\\sum_{l=1}^{D}x_lw_{li}\\\\\n",
"& = \n",
" \\begin{cases} \n",
" x_j & \\text{si } i=k, \\forall j \\in[1,D] \\\\\n",
" 0 & \\text{si } i \\neq k\n",
" \\end{cases}\n",
"\\end{align*}\n",
"$$"
]
},
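{
"cell_type": "markdown",
"metadata": {},
"source": [
"To sanity-check this formula, here is a small sketch (not part of the original derivation; the sizes D=4, C=3 and the random seed are arbitrary) that builds $J$ by central finite differences and compares it with the closed form $J_{ijk} = x_j$ if $i=k$, $0$ otherwise:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"np.random.seed(0)\n",
"D, C = 4, 3                 # arbitrary example sizes\n",
"X = np.random.randn(D)\n",
"W = np.random.randn(D, C)\n",
"eps = 1e-6\n",
"\n",
"# Numerical Jacobian J_num[i, j, k] = d(lambda_i) / d(w_jk)\n",
"J_num = np.zeros((C, D, C))\n",
"for j in range(D):\n",
"    for k in range(C):\n",
"        Wp, Wm = W.copy(), W.copy()\n",
"        Wp[j, k] += eps\n",
"        Wm[j, k] -= eps\n",
"        J_num[:, j, k] = (X.dot(Wp) - X.dot(Wm)) / (2 * eps)\n",
"\n",
"# Closed form: J[i, j, k] = x_j if i == k, 0 otherwise\n",
"J_formula = np.zeros((C, D, C))\n",
"for k in range(C):\n",
"    J_formula[k, :, k] = X\n",
"\n",
"print(np.allclose(J_num, J_formula, atol=1e-6))  # expected: True"
]
},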
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Jacobien de $L$ par rapport à $W$"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"$$\n",
"\\frac{\\partial L}{\\partial W} = \\frac{\\partial L}{\\partial \\lambda} . \\frac{\\partial \\lambda}{\\partial W}\\\\\n",
"$$"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"$$\n",
"\\begin{align*}\n",
"\\frac{\\partial L}{\\partial w_{jk}} & = \\sum_{i=1}^{C}\\frac{\\partial L}{\\partial \\lambda_i}J_{ijk} \\\\\n",
"& = \\frac{\\partial L}{\\partial \\lambda_k}x_j\n",
"\\end{align*}\n",
"$$"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Donc sous forme matricielle on obtient: $$\n",
"\\frac{\\partial L}{\\partial W} = X^T.\\frac{\\partial L}{\\partial \\lambda}$$"
]
},
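{
"cell_type": "markdown",
"metadata": {},
"source": [
"For a single example, viewing $X$ as a $(1,D)$ row vector, $X^T.\\frac{\\partial L}{\\partial \\lambda}$ is simply the outer product of $X$ and $\\frac{\\partial L}{\\partial \\lambda}$, which is how the gradient is accumulated in the code at the end of this notebook. A minimal sketch with arbitrary toy values:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"X = np.array([1.0, 2.0, -1.0])           # shape (D,), here D = 3\n",
"dL_dlambda = np.array([0.2, -0.9, 0.7])  # shape (C,), here C = 3\n",
"\n",
"# (D,1).(1,C) matrix product versus np.outer: same (D,C) result\n",
"dW_matmul = X.reshape(-1, 1).dot(dL_dlambda.reshape(1, -1))\n",
"dW_outer = np.outer(X, dL_dlambda)\n",
"print(np.allclose(dW_matmul, dW_outer))  # expected: True"
]
},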
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Expression de la dérivée de $L$ par rapport à $\\lambda$"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Fonction de coût"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"$$L = D_{KL}(Y\\_one\\_hot||S)$$ où $Y\\_one\\_hot$ correspond à la répartition de probabilité pour la classe connue et S les probabilités de la classe prédite par notre modèle."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"La divergence de Kullback-Leibler $D_{KL}$ correspond à une mesure de la similarité entre deux distributions. On peut l'écrire sous la forme $$D_{KL} = H(p,q)-H(p)$$ où p est la véritable distribution et q la répartition estimée. \n",
"$$\n",
"\\begin{align*}\n",
"&p_i = Y\\_one\\_hot_i = \n",
" \\begin{cases} \n",
" 1 & \\text{si } i=y \\\\\n",
" 0 & \\text{si } i \\neq y \\forall i \\in[1,C]\n",
" \\end{cases} \\\\\n",
"&q_i = S(\\lambda)_i\n",
"\\end{align*}\n",
"$$"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Comme 100% des valeurs de p sont en y on a $H(p)=O$, d'où:\n",
"$$\n",
"\\begin{align*}\n",
"D_{KL} & = H(p,q) \\\\\n",
"& = -\\sum_{i}p_i lnq_i\n",
"\\end{align*}\n",
"$$"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Tous les termes de la somme sont nuls sauf pour $i=y$\n",
"$$\n",
"\\begin{align*}\n",
"L = D_{KL} & = - ln \\frac{e^{\\lambda_{y}}}{\\sum_j e^{\\lambda_j}} \\\\\n",
"& = -\\lambda_{y} + ln(\\sum_j e^{\\lambda_j})\n",
"\\end{align*}\n",
"$$"
]
},
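{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a numerical illustration (a sketch with arbitrary logits, not taken from the original notebook), this loss can be evaluated with the same shift trick as above, since shifting the logits leaves $-\\lambda_{y} + \\ln(\\sum_j e^{\\lambda_j})$ unchanged:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"logits = np.array([2.0, 1.0, 0.1])  # arbitrary lambda, C = 3 classes\n",
"y = 0                               # known class\n",
"\n",
"shifted = logits - np.max(logits)   # the shift does not change the loss\n",
"loss = -shifted[y] + np.log(np.sum(np.exp(shifted)))\n",
"print(loss)  # expected: ~0.417\n",
"\n",
"# Cross-check: the same value as -ln of the softmax probability of class y\n",
"S = np.exp(shifted) / np.sum(np.exp(shifted))\n",
"print(-np.log(S[y]))  # expected: same value"
]
},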
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Calcul de la dérivée de L"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Calcul prélimiaire: \n",
"$$\n",
"\\begin{align*}\n",
"\\frac{\\partial}{\\partial \\lambda_i}ln(\\sum_j e^{\\lambda_j}) & = \\frac{e^{\\lambda_{i}}}{\\sum_j e^{\\lambda_j}} \\\\\n",
"& = S_i\n",
"\\end{align*}\n",
"$$"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"On obtient une expression simple en fonction de $softmax(\\lambda)$"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"$$\n",
"\\frac{\\partial L}{\\partial \\lambda_i} = \n",
" \\begin{cases} \n",
" S_y - 1 & \\text{si } i=y \\\\\n",
" S_i & \\text{si } i \\neq y \\forall i \\in[1,C]\n",
" \\end{cases}\n",
"$$ "
]
},
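{
"cell_type": "markdown",
"metadata": {},
"source": [
"A quick finite-difference check of this expression (a sketch with arbitrary values, not part of the original derivation): the gradient of $L$ with respect to $\\lambda$ should be $S$ with 1 subtracted from the component of the known class."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"def loss_from_logits(logits, y):\n",
"    # L = -lambda_y + ln(sum_j exp(lambda_j)), with the usual shift for stability\n",
"    shifted = logits - np.max(logits)\n",
"    return -shifted[y] + np.log(np.sum(np.exp(shifted)))\n",
"\n",
"logits = np.array([2.0, 1.0, 0.1])  # arbitrary lambda\n",
"y = 0\n",
"eps = 1e-6\n",
"\n",
"# Analytic gradient: S - one_hot(y)\n",
"S = np.exp(logits - np.max(logits))\n",
"S /= S.sum()\n",
"dL_analytic = S.copy()\n",
"dL_analytic[y] -= 1\n",
"\n",
"# Central finite differences, component by component\n",
"dL_numeric = np.zeros_like(logits)\n",
"for i in range(logits.size):\n",
"    lp, lm = logits.copy(), logits.copy()\n",
"    lp[i] += eps\n",
"    lm[i] -= eps\n",
"    dL_numeric[i] = (loss_from_logits(lp, y) - loss_from_logits(lm, y)) / (2 * eps)\n",
"\n",
"print(np.allclose(dL_analytic, dL_numeric, atol=1e-5))  # expected: True"
]
},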
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Gradient dW"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"En incorporant cette dernière formule à dW, Jacobien de $L$ par rapport à $W$, on obtient une expression du gradient facilement programmable."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"```Python\n",
"def softmax_loss_naive(W, X, y, reg):\n",
" \"\"\"\n",
" Softmax loss function, naive implementation (with loops)\n",
"\n",
" Inputs have dimension D, there are C classes, and we operate on minibatches\n",
" of N examples.\n",
"\n",
" Inputs:\n",
" - W: A numpy array of shape (D, C) containing weights.\n",
" - X: A numpy array of shape (N, D) containing a minibatch of data.\n",
" - y: A numpy array of shape (N,) containing training labels; y[i] = c means\n",
" that X[i] has label c, where 0 <= c < C.\n",
" - reg: (float) regularization strength\n",
"\n",
" Returns a tuple of:\n",
" - loss as single float\n",
" - gradient with respect to weights W; an array of same shape as W\n",
" \"\"\"\n",
" # Initialize the loss and gradient to zero.\n",
" loss = 0.0\n",
" dW = np.zeros_like(W)\n",
" num_train = X.shape[0]\n",
"\n",
" for i in range(num_train):\n",
" logits = X[i].dot(W)\n",
" logits -= np.max(logits) # Pour éviter les instabilités axis = 1\n",
" correct_class_score = logits[y[i]]\n",
" loss += -correct_class_score + np.log(np.sum(np.exp(logits)))\n",
" \n",
" # Calcul du gradient: dérivée partielle de L par rapport à W\n",
" # Gradient de L par rapport aux logits\n",
" dL = np.exp(logits) / np.sum(np.exp(logits))\n",
" # Prise en compte du cas particulier dL[y[i]]\n",
" dL[y[i]] = dL[y[i]] - 1\n",
" # dW\n",
" dW += np.outer(X[i], dL)\n",
" \n",
" # Right now the loss is a sum over all training examples, but we want it\n",
" # to be an average instead so we divide by num_train.\n",
" loss /= num_train\n",
" dW /= num_train\n",
"\n",
" # Add regularization to the loss.\n",
" loss += reg * np.sum(W * W)\n",
" dW += 2* reg * W\n",
"\n",
" return loss, dW\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Sur github une version vectorisée de cette fonction."
]
},
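{
"cell_type": "markdown",
"metadata": {},
"source": [
"The sketch below only illustrates what such a vectorization can look like, assuming the same interface as `softmax_loss_naive` (same inputs and outputs); it is not the author's GitHub version."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"```Python\n",
"def softmax_loss_vectorized(W, X, y, reg):\n",
"    \"\"\"Vectorized softmax loss and gradient; same interface as softmax_loss_naive.\"\"\"\n",
"    num_train = X.shape[0]\n",
"\n",
"    logits = X.dot(W)                                # (N, C)\n",
"    logits -= np.max(logits, axis=1, keepdims=True)  # shift for numerical stability\n",
"    exp_logits = np.exp(logits)\n",
"    S = exp_logits / np.sum(exp_logits, axis=1, keepdims=True)  # (N, C) probabilities\n",
"\n",
"    # Loss: average of -ln S[i, y[i]] plus L2 regularization\n",
"    loss = -np.mean(np.log(S[np.arange(num_train), y]))\n",
"    loss += reg * np.sum(W * W)\n",
"\n",
"    # Gradient with respect to the logits: S with 1 subtracted at the true class\n",
"    dL = S.copy()\n",
"    dL[np.arange(num_train), y] -= 1\n",
"    dW = X.T.dot(dL) / num_train + 2 * reg * W\n",
"\n",
"    return loss, dW\n",
"```"
]
},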
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Références"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"[CS231n: Convolutional Neural Networks for Visual Recognition](http://cs231n.github.io/linear-classify/#softmax)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"[Backpropagation for a Linear Layer - Justin Johnson](http://vision.stanford.edu/teaching/cs231n/handouts/linear-backprop.pdf)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"[Demystifying KL Divergence](https://towardsdatascience.com/demystifying-kl-divergence-7ebe4317ee68)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.3"
},
"toc-autonumbering": true
},
"nbformat": 4,
"nbformat_minor": 2
}