
3.7 Gradients in a Deep Neural Network

In many machine learning applications, we find good model parameters by performing gradient descent, which relies on the fact that we can compute the gradient of a learning objective with respect to the parameters of the model. For a given objective function, we can obtain this gradient using calculus and the chain rule. We have already seen the gradient of a squared loss with respect to the parameters of a linear regression model.
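As a quick recap of that linear regression result, the following sketch (an illustration added here with hypothetical data, not part of the original notes) evaluates the squared loss $L(\theta)=\left\|y-X\theta\right\|^2$, its gradient $-2X^\top(y-X\theta)$, and takes a single gradient-descent step.

import numpy as np

# Sketch: squared loss for linear regression and its gradient via the chain rule,
# dL/dtheta = -2 X^T (y - X theta). The data below is synthetic/hypothetical.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                 # design matrix
theta_true = np.array([1.0, -2.0, 0.5])
y = X @ theta_true + 0.1 * rng.normal(size=100)

def squared_loss(theta):
    residual = y - X @ theta
    return residual @ residual

def gradient(theta):
    return -2.0 * X.T @ (y - X @ theta)

theta = np.zeros(3)
theta = theta - 0.01 * gradient(theta)        # one gradient-descent step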

Consider the function
$f(x)=\sqrt{x^2+\exp(x^2)}+\cos\left(x^2+\exp(x^2)\right)$
By application of the chain rule, and noting that differentiation is linear, we compute the gradient
$\frac{\mathrm{d} f}{\mathrm{d} x}=\frac{2x + 2x\exp(x^2)}{2\sqrt{x^2+\exp(x^2)}}-\sin\left(x^2+\exp(x^2)\right)\left(2x+2x\exp(x^2)\right)$
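To convince ourselves that this expression is correct, we can compare it against a numerical approximation. The sketch below (added as an illustration, not part of the original notes) evaluates the analytic derivative and a central finite difference at a hypothetical point.

import numpy as np

# Sketch: check the analytic derivative of f against a central finite difference.
def f(x):
    return np.sqrt(x**2 + np.exp(x**2)) + np.cos(x**2 + np.exp(x**2))

def df(x):
    inner = x**2 + np.exp(x**2)        # u(x) = x^2 + exp(x^2)
    dinner = 2*x + 2*x*np.exp(x**2)    # u'(x), by the chain rule
    return dinner / (2*np.sqrt(inner)) - np.sin(inner) * dinner

x, h = 0.7, 1e-6
numerical = (f(x + h) - f(x - h)) / (2*h)
print(df(x), numerical)                # the two values agree closely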

Writing out the gradient in this explicit way is often impractical, since it results in a very lengthy expression for the derivative. In practice, it means that, if we are not careful, the implementation of the gradient could be significantly more expensive than computing the function, which imposes unnecessary overhead. For training deep neural network models, the backpropagation algorithm (Kelley, 1960; Bryson, 1961; Dreyfus, 1962; Rumelhart et al., 1986) is an efficient way to compute the gradient of an error function with respect to the parameters of the model.

An area where the chain rule is used to an extreme is deep learning, where the function value $y$ is computed as a many-level function composition
$(f_K\circ f_{K-1}\circ \cdots \circ f_1)(x)=f_K(f_{K-1}(\cdots (f_1(x))\cdots))$

where $x$ are the inputs (e.g., images), $y$ are the observations (e.g., class labels), and every function $f_i, i = 1,\ldots,K$, possesses its own parameters.

In a neural network with multiple layers, we have functions $f_i(x_{i-1})=\sigma(A_{i-1}x_{i-1}+b_{i-1})$ in the $i$th layer. Here $x_{i-1}$ is the output of layer $i-1$ and $\sigma$ is an activation function, such as the logistic sigmoid $\frac{1}{1+e^{-x}}$, tanh, or a rectified linear unit (ReLU). In order to train these models, we require the gradient of a loss function $L$ with respect to all model parameters $A_j, b_j$ for $j = 1,\ldots,K$. This also requires us to compute the gradient of $L$ with respect to the inputs of each layer. For example, if we have inputs $x$ and observations $y$ and a network structure defined by
$f_0=x$
$f_i=\sigma_i(A_{i-1}f_{i-1}+b_{i-1}), \quad i =1,\ldots,K$
we may be interested in finding $A_j, b_j$ for $j = 0,\ldots,K-1$, such that the squared loss
$L(\theta)=\left\|y-f_K(\theta,x)\right\|^2$
is minimized, where $\theta=\{A_0,b_0,\ldots,A_{K-1},b_{K-1}\}$.
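The forward computation defined by these equations can be written down directly. The sketch below (an added illustration with hypothetical layer sizes, using tanh as the activation $\sigma_i$ in every layer; not code from the original notes) computes $f_K$ and the squared loss.

import numpy as np

# Sketch: forward pass f_0 = x, f_i = sigma(A_{i-1} f_{i-1} + b_{i-1}),
# and squared loss L = ||y - f_K||^2. Sizes and data are hypothetical.
rng = np.random.default_rng(0)
sizes = [4, 8, 8, 2]                              # input, two hidden layers, output
A = [0.1 * rng.normal(size=(m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
b = [np.zeros(m) for m in sizes[1:]]

def forward(x):
    f = x                                         # f_0 = x
    for A_j, b_j in zip(A, b):
        f = np.tanh(A_j @ f + b_j)                # f_i = sigma(A_{i-1} f_{i-1} + b_{i-1})
    return f                                      # f_K

x = rng.normal(size=sizes[0])
y = rng.normal(size=sizes[-1])
L = np.sum((y - forward(x))**2)                   # squared loss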
To obtain the gradients with respect to the parameter set $\theta$, we require the partial derivatives of $L$ with respect to the parameters $\theta_j=\{A_j,b_j\}$ of each layer $j=0,\ldots,K-1$. The chain rule allows us to determine the partial derivatives as
$\frac{\partial L}{\partial \theta_{K-1}}=\frac{\partial L}{\partial f_K}\frac{\partial f_K}{\partial \theta_{K-1}}$
$\frac{\partial L}{\partial \theta_{K-2}}=\frac{\partial L}{\partial f_K}\frac{\partial f_K}{\partial f_{K-1}}\frac{\partial f_{K-1}}{\partial \theta_{K-2}}$
$\frac{\partial L}{\partial \theta_{i}}=\frac{\partial L}{\partial f_K}\frac{\partial f_K}{\partial f_{K-1}}\cdots\frac{\partial f_{i+2}}{\partial f_{i+1}}\frac{\partial f_{i+1}}{\partial \theta_{i}}$

In each product, the terms of the form $\frac{\partial f_{i+1}}{\partial f_{i}}$ are partial derivatives of the output of a layer with respect to its inputs, whereas the terms of the form $\frac{\partial f_{i+1}}{\partial \theta_{i}}$ are partial derivatives of the output of a layer with respect to its parameters. Assuming we have already computed the partial derivatives $\frac{\partial L}{\partial \theta_{i+1}}$, most of the computation can be reused to compute $\frac{\partial L}{\partial \theta_i}$; the only additional terms we need are $\frac{\partial f_{i+2}}{\partial f_{i+1}}$ and $\frac{\partial f_{i+1}}{\partial \theta_{i}}$. These gradients are then passed backwards through the network.
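This reuse is exactly what a backward pass implements in practice. The sketch below (an added illustration for the same hypothetical tanh network as in the previous sketch; not code from the original notes) carries $\frac{\partial L}{\partial f_i}$ backwards in the vector delta and computes the gradients with respect to every $A_j$ and $b_j$.

import numpy as np

# Sketch: backward pass for the tanh network above. `delta` holds dL/df_i and is
# reused at each layer; only the two new factors per layer are computed.
def forward_with_cache(x, A, b):
    fs = [x]                                      # fs[i] stores f_i
    for A_j, b_j in zip(A, b):
        fs.append(np.tanh(A_j @ fs[-1] + b_j))
    return fs

def backward(x, y, A, b):
    fs = forward_with_cache(x, A, b)
    grads_A, grads_b = [None] * len(A), [None] * len(b)
    delta = -2.0 * (y - fs[-1])                   # dL/df_K for the squared loss
    for j in reversed(range(len(A))):
        dz = delta * (1.0 - fs[j + 1]**2)         # through tanh: d tanh(z) = 1 - tanh(z)^2
        grads_A[j] = np.outer(dz, fs[j])          # dL/dA_j
        grads_b[j] = dz                           # dL/db_j
        delta = A[j].T @ dz                       # dL/df_j, reused at the next layer
    return grads_A, grads_b

# grads_A, grads_b = backward(x, y, A, b)        # using x, y, A, b from the sketch above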

