Notes for "Advances In Optimizing Recurrent Networks" paper

Advances In Optimizing Recurrent Networks

Introduction

  • Recurrent Neural Networks (RNNs) are powerful models for sequences, but they have difficulty learning long-term dependencies.
  • The paper discusses the reasons behind this difficulty and some suggestions to mitigate it.
  • Link to the paper.

Optimization Difficulty

  • An RNN computes a deterministic state variable h_t as a function of the current input observation and the previous state, i.e. h_t = F(h_{t-1}, x_t).
  • Learnable parameters determine what is remembered about the past sequence.
  • Local optimisation techniques like Stochastic Gradient Descent (SGD) are unlikely to find optimal values of these tunable parameters.
  • When the computations performed by an RNN are unfolded through time, a deep neural network with shared weights is obtained.
  • The cost function of this deep network depends on the outputs of the hidden layers.
  • Because gradients are backpropagated through the many unfolded layers, gradient descent updates can "explode" (become very large) or "vanish" (become very small), as illustrated in the sketch after this list.
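
A minimal sketch of this effect (not from the paper), using a linear recurrence h_t = W h_{t-1} + x_t so that the Jacobian of h_T with respect to h_0 is simply W multiplied by itself T times:

```python
import numpy as np

rng = np.random.default_rng(0)

def jacobian_norm(scale, T=50, n=20):
    """Norm of dh_T/dh_0 = W^T for a linear state-to-state map h_t = W h_{t-1} + x_t."""
    W = scale * rng.standard_normal((n, n)) / np.sqrt(n)  # random recurrent weights
    J = np.eye(n)
    for _ in range(T):
        J = W @ J  # chain rule through one unfolded time step
    return np.linalg.norm(J)

for scale in (0.5, 1.0, 1.5):
    print(f"scale={scale}: ||dh_T/dh_0|| ~ {jacobian_norm(scale):.3e}")

# Recurrent weights with a small spectral radius drive the norm toward zero
# (vanishing gradients); a large spectral radius makes it blow up (exploding gradients).
```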

Training Recurrent Networks

  • Clip Gradient - when the norm of the gradient vector g is above a threshold, the update is made in the direction threshold * g / ||g||. This normalisation implements a simple form of second-order normalisation, since the second-order derivative is also large in regions of exploding gradient (a clipping sketch appears after this list).

  • Use a leaky integration state-to-state map: h_{t,i} = α_i h_{t-1,i} + (1 - α_i) F_i(h_{t-1}, x_t)

    Different values of α_i allow different amounts of the previous state to "leak" through the unfolded layers, further back in time. This only expands the time-scale over which gradients vanish; it does not remove them entirely (see the leaky-integration sketch after this list).

  • Use output probability models such as a Restricted Boltzmann Machine (RBM) or NADE to capture higher-order dependencies between the output variables in the case of multivariate prediction.

  • By using rectifier non-linearities, the gradient on the hidden units becomes sparse, and these sparse gradients help the hidden units to specialise. The basic idea is that if the gradient is concentrated in fewer paths (in the unfolded computational graph), the vanishing-gradient effect is limited (see the rectifier sketch after this list).

  • A simplified Nesterov momentum rule is proposed that allows past velocities to be stored for a longer time while actually using them more conservatively. The new formulation is also easier to implement (a momentum sketch follows this list).
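
A minimal sketch of the clipping rule above, assuming the gradient has already been flattened into a single vector g (the function name and threshold value are illustrative):

```python
import numpy as np

def clip_gradient(g, threshold):
    """If ||g|| exceeds the threshold, rescale g so the update has norm `threshold`."""
    norm = np.linalg.norm(g)
    if norm > threshold:
        g = threshold * g / norm  # direction is kept, magnitude is capped
    return g

g = np.array([3.0, 4.0])       # ||g|| = 5
print(clip_gradient(g, 1.0))   # -> [0.6 0.8], same direction with norm 1
```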
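
A sketch of the leaky-integration update, where the ordinary state-to-state map F is assumed (for illustration only) to be a tanh layer with weights W, U and bias b:

```python
import numpy as np

def leaky_step(h_prev, x_t, alpha, W, U, b):
    """h_{t,i} = alpha_i * h_{t-1,i} + (1 - alpha_i) * F_i(h_{t-1}, x_t)."""
    f = np.tanh(W @ h_prev + U @ x_t + b)      # assumed form of F(h_{t-1}, x_t)
    return alpha * h_prev + (1.0 - alpha) * f  # per-unit leaky integration

rng = np.random.default_rng(0)
n_hid, n_in = 8, 4
W = 0.1 * rng.standard_normal((n_hid, n_hid))
U = 0.1 * rng.standard_normal((n_hid, n_in))
b = np.zeros(n_hid)
alpha = rng.uniform(0.0, 0.95, size=n_hid)     # different leak rates => different time-scales
h = np.zeros(n_hid)
for x in rng.standard_normal((10, n_in)):      # toy input sequence of length 10
    h = leaky_step(h, x, alpha, W, U, b)
```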
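
A small illustration (not from the paper) of why rectifiers give sparse gradients: the derivative of relu(z) is zero for every inactive unit, so roughly half the paths carry no gradient when pre-activations are centred around zero:

```python
import numpy as np

rng = np.random.default_rng(0)
pre_act = rng.standard_normal(1000)          # pre-activations of a hidden layer
relu_grad = (pre_act > 0).astype(float)      # d relu(z)/dz is 1 where z > 0, else 0
print(f"fraction of units passing gradient: {relu_grad.mean():.2f}")  # roughly 0.5
```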
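
A sketch of the idea behind the simplified Nesterov momentum rule, written directly in terms of the current parameters; the exact coefficients and schedules used in the paper are not reproduced here, and grad_fn, lr and mu are illustrative names:

```python
import numpy as np

def nesterov_step(theta, v, grad_fn, lr, mu):
    """One Nesterov momentum update expressed at the current parameters theta."""
    g = grad_fn(theta)
    v = mu * v - lr * g              # velocity: decaying accumulation of past gradients
    theta = theta + mu * v - lr * g  # step along the look-ahead velocity plus a gradient correction
    return theta, v

# toy quadratic f(theta) = 0.5 * ||theta||^2, whose gradient is theta itself
theta, v = np.array([5.0, -3.0]), np.zeros(2)
for _ in range(100):
    theta, v = nesterov_step(theta, v, lambda t: t, lr=0.1, mu=0.9)
print(theta)  # close to the minimum at the origin
```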

Results

  • SGD combined with these optimisations outperforms vanilla SGD.