The convergence of gradient descent can be very slow if the curvature of the optimization surface is poorly scaled, e.g., if it forms a long, narrow valley. In that case, the gradient descent steps hop between the walls of the valley and approach the optimum only in small steps. The proposed tweak to improve convergence is to give gradient descent some memory.
Gradient descent with momentum (Rumelhart et al., 1986) is a method that introduces an additional term to remember what happened in the previous iteration. This memory dampens oscillations and smooths out the gradient updates. Continuing the ball analogy, the momentum term emulates the phenomenon of a heavy ball that is reluctant to change direction. The idea is to have a gradient update with memory so as to implement a moving average. The momentum-based method remembers the update $\Delta x_i$ at each iteration $i$ and determines the next update as a linear combination of the current and previous gradients
$$\Delta x_i = x_i - x_{i-1} = \alpha\,\Delta x_{i-1} - \gamma_i\big((\nabla f)(x_{i-1})\big)^\top,$$
where $\alpha \in [0, 1]$. Sometimes we will only know the gradient approximately. In such cases, the momentum term is useful since it averages out different noisy estimates of the gradient. One particularly useful way to obtain an approximate gradient is to use a stochastic approximation of the gradient.
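As a concrete illustration of the update rule above, here is a minimal Python/NumPy sketch of gradient descent with momentum. The function name `gradient_descent_momentum`, the quadratic test function, and the step-size values are illustrative choices, not part of the original text.

```python
import numpy as np

def gradient_descent_momentum(grad_f, x0, alpha=0.9, gamma=0.1, num_iters=100):
    """Minimize f via gradient descent with momentum.

    Implements the update
        delta_i = alpha * delta_{i-1} - gamma * grad_f(x_{i-1})
        x_i     = x_{i-1} + delta_i
    where alpha in [0, 1] weights the previous update (the "memory").
    """
    x = np.asarray(x0, dtype=float)
    delta = np.zeros_like(x)  # previous update, initially zero
    for _ in range(num_iters):
        delta = alpha * delta - gamma * grad_f(x)  # momentum-weighted step
        x = x + delta
    return x

# Hypothetical example: a poorly scaled quadratic f(x) = 0.5 * (x1^2 + 100 * x2^2),
# whose narrow valley causes plain gradient descent to oscillate.
grad_f = lambda x: np.array([1.0, 100.0]) * x
x_star = gradient_descent_momentum(grad_f, x0=[1.0, 1.0], alpha=0.9, gamma=0.005)
print(x_star)  # should approach the minimizer [0, 0]
```

The running average stored in `delta` is what damps the oscillation across the valley walls while accumulating progress along the valley floor.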