@bhavikngala · Last active April 26, 2019
  1. "any preprocessing statistics (e.g. the data mean) must only be computed on the training data, and then applied to the validation/test data. E.g. computing the mean and subtracting it from every image across the entire dataset and then splitting the data into train/val/test splits would be a mistake." [http://cs231n.github.io/neural-networks-2/#datapre]

  2. Read [https://blog.slavv.com/37-reasons-why-your-neural-network-is-not-working-4020854bd607]

  3. Search for good hyperparameters with random search (not grid search). Stage your search from coarse (wide hyperparameter ranges, training only for 1-5 epochs) to fine (narrower ranges, training for many more epochs). [https://cs231n.github.io/neural-networks-3/#summary] (sketch after this list)

  4. During training, monitor the loss, the training/validation accuracy, and if you’re feeling fancier, the magnitude of updates in relation to parameter values (it should be ~1e-3), and when dealing with ConvNets, the first-layer weights. [https://cs231n.github.io/neural-networks-3/#summary] (sketch after this list)

  5. Read answer: [https://stats.stackexchange.com/questions/352036/what-should-i-do-when-my-neural-network-doesnt-learn?answertab=votes#tab-top]

  6. Read CS231n CNNs: [https://cs231n.github.io/]

  7. Very important: [https://forums.fast.ai/t/things-jeremy-says-to-do/36682]

  8. Mixed precision training (FP16 + FP32): check the details in this blog post [https://medium.com/@sureshr/loc2vec-a-fast-pytorch-implementation-2b298072e1a7] (sketch after this list)

  9. Transfer learning even when datasets are not related. Read: [https://arxiv.org/abs/1902.07208]

  10. Backpropagation explanation: [http://neuralnetworksanddeeplearning.com/chap2.html]

  11. "We show that transfer learning gives only minimal final performance gains, but significantly improves convergence speed. In this, weight scaling plays a key role: we identify Mean Var initialization as a simple way to use only scaling information from the pretrained weights, but to achieve substantial convergance gains, equal to i.i.d. sampling from the full empirical distribution. We also compare the representations obtained through the imagenet pretraining, Mean Var Init, and randdom Init. At lower layers, the different approaches yield very different representations, with Random Init and Mean Var Init not learning Gabor filters." [https://arxiv.org/abs/1902.07208]. So I think even the network structure is a different from the network on which Imagenet is trained, we can still take the mean-var stats from that network and initialize the new network weights.

  12. Find a good learning rate: [https://sgugger.github.io/how-do-you-find-a-good-learning-rate.html] (sketch after this list)

  13. 1-cycle policy: [https://sgugger.github.io/the-1cycle-policy.html#the-1cycle-policy] (sketch after this list)

  14. NN in numpy: [https://sgugger.github.io/a-simple-neural-net-in-numpy.html#a-simple-neural-net-in-numpy]

  15. Convolution in numpy: [https://sgugger.github.io/convolution-in-depth.html#convolution-in-depth]

  16. Fast.ai — An infinitely customizable training loop with Sylvain Gugger [https://www.youtube.com/watch?v=roc-dOSeehM&feature=youtu.be&t=1]

  17. ResNet in fast.ai - [https://twitter.com/jeremyphoward/status/1115036889818341376?s=08] [https://twitter.com/jeremyphoward/status/1115044602606538757?s=08]

  18. Linux commands [https://www.youtube.com/playlist?list=PLdfA2CrAqQ5kB8iSbm5FB1ADVdBeOzVqZ]

  19. Full stack deep learning: [https://fullstackdeeplearning.com/march2019]

  20. Andrej Karpathy: A Recipe for Training Neural Networks - [https://karpathy.github.io/2019/04/25/recipe/]
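
A minimal sketch for item 1, assuming a plain NumPy array of images (names and shapes are illustrative): split first, compute the statistics on the training split only, then apply those same statistics to every split.

```python
import numpy as np

# Hypothetical dataset: 1000 RGB images of 32x32 (illustrative shapes only).
rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(1000, 32, 32, 3)).astype(np.float32)

# 1) Split first.
n_train, n_val = 700, 150
train = images[:n_train]
val = images[n_train:n_train + n_val]
test = images[n_train + n_val:]

# 2) Compute the mean/std on the training split only.
mean = train.mean(axis=0)
std = train.std(axis=0) + 1e-8   # avoid division by zero

# 3) Apply the same training-set statistics to every split.
train = (train - mean) / std
val = (val - mean) / std
test = (test - mean) / std
```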
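
For item 3, a sketch of staged random search, sampling the learning rate and weight decay log-uniformly; `train_and_evaluate` is a hypothetical placeholder for your own training run.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_config(lr_range, wd_range):
    # Log-uniform sampling: random search, not a grid.
    lr = 10 ** rng.uniform(np.log10(lr_range[0]), np.log10(lr_range[1]))
    wd = 10 ** rng.uniform(np.log10(wd_range[0]), np.log10(wd_range[1]))
    return {"lr": lr, "weight_decay": wd}

def train_and_evaluate(cfg, epochs):
    # Placeholder: train with `cfg` for `epochs` epochs, return validation accuracy.
    return rng.random()

# Coarse stage: wide ranges, only a few epochs per trial.
coarse = [sample_config((1e-6, 1e-1), (1e-7, 1e-1)) for _ in range(30)]
coarse_results = [(train_and_evaluate(cfg, epochs=3), cfg) for cfg in coarse]
best = max(coarse_results, key=lambda r: r[0])[1]

# Fine stage: narrower ranges around the best coarse config, many more epochs.
fine = [sample_config((best["lr"] / 3, best["lr"] * 3),
                      (best["weight_decay"] / 3, best["weight_decay"] * 3))
        for _ in range(10)]
fine_results = [(train_and_evaluate(cfg, epochs=30), cfg) for cfg in fine]
```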
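
For item 4, a sketch of monitoring the update-to-parameter magnitude ratio in PyTorch (vanilla SGD assumed, so the update is approximated as lr * grad; momentum and weight decay would change it).

```python
import torch

def update_to_param_ratios(model, optimizer):
    """Call after loss.backward() and before optimizer.step().
    Returns ||lr * grad|| / ||param|| for each parameter."""
    lr = optimizer.param_groups[0]["lr"]
    ratios = {}
    for name, p in model.named_parameters():
        if p.grad is None:
            continue
        update_norm = (lr * p.grad).norm()
        param_norm = p.detach().norm() + 1e-12
        ratios[name] = (update_norm / param_norm).item()  # ~1e-3 is a healthy ballpark
    return ratios
```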
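
For item 8, a sketch of mixed precision training using PyTorch's native AMP (torch.cuda.amp, which is newer than the linked post but illustrates the same FP16/FP32 idea); `model`, `loader`, and `criterion` are assumed to exist, with the model already on the GPU.

```python
import torch

optimizer = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.9)
scaler = torch.cuda.amp.GradScaler()   # manages the FP32 loss-scaling state

for images, targets in loader:
    images, targets = images.cuda(), targets.cuda()
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():    # forward pass runs in FP16 where it is safe
        loss = criterion(model(images), targets)
    scaler.scale(loss).backward()      # scale the loss to avoid FP16 gradient underflow
    scaler.step(optimizer)             # gradients are unscaled, the step happens in FP32
    scaler.update()
```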
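
For item 11, a sketch of the "Mean Var Init" idea: reuse only the per-layer mean and standard deviation of a pretrained network's weights to initialize a new network. Matching layers by parameter name is an assumption made here for illustration; the paper's exact recipe may differ.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def mean_var_init(new_model: nn.Module, pretrained_state: dict):
    for name, param in new_model.named_parameters():
        if "weight" in name and name in pretrained_state:
            src = pretrained_state[name].float()
            # i.i.d. Gaussian values with the pretrained layer's mean and std,
            # i.e. only the scaling information is transferred.
            param.copy_(torch.randn_like(param) * src.std() + src.mean())
```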
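
For item 12, a sketch of the learning-rate range test described in the linked post: grow the learning rate exponentially batch by batch, track a smoothed loss, and stop once it blows up; `model`, `loader`, and `criterion` are assumed.

```python
import torch

def lr_range_test(model, loader, criterion, lr_start=1e-7, lr_end=10.0, beta=0.98):
    optimizer = torch.optim.SGD(model.parameters(), lr=lr_start)
    mult = (lr_end / lr_start) ** (1 / max(len(loader) - 1, 1))
    lr, avg_loss, best_loss = lr_start, 0.0, float("inf")
    lrs, losses = [], []
    for i, (x, y) in enumerate(loader, 1):
        optimizer.param_groups[0]["lr"] = lr
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        # Exponentially smoothed loss, so one noisy batch does not end the test.
        avg_loss = beta * avg_loss + (1 - beta) * loss.item()
        smoothed = avg_loss / (1 - beta ** i)
        if smoothed > 4 * best_loss:
            break                      # loss is diverging: the test is over
        best_loss = min(best_loss, smoothed)
        lrs.append(lr)
        losses.append(smoothed)
        loss.backward()
        optimizer.step()
        lr *= mult
    # Plot losses vs. lrs on a log scale and pick an LR a bit before the blow-up.
    return lrs, losses
```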
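
For item 13, a sketch of the 1-cycle policy using torch.optim.lr_scheduler.OneCycleLR (added to PyTorch after the linked post, but it implements the same schedule); `model` and `loader` are assumed.

```python
import torch
import torch.nn.functional as F

epochs = 10
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer,
    max_lr=0.1,                      # peak learning rate in the middle of the cycle
    epochs=epochs,
    steps_per_epoch=len(loader),
)

for _ in range(epochs):
    for x, y in loader:
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        optimizer.step()
        scheduler.step()             # LR rises then falls; momentum does the inverse
```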

TODO: Stochastic weight averaging (sketch below)
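
A minimal manual sketch for the SWA TODO above: keep a running average of the weights visited late in training and evaluate with that average. `model`, `optimizer`, `loader`, `criterion`, `num_epochs`, and `swa_start_epoch` are assumed placeholders.

```python
import copy
import torch

swa_model = copy.deepcopy(model)   # will hold the running average of the weights
swa_n = 0

for epoch in range(num_epochs):
    for x, y in loader:
        optimizer.zero_grad()
        criterion(model(x), y).backward()
        optimizer.step()
    if epoch >= swa_start_epoch:   # only average over the tail of training
        swa_n += 1
        with torch.no_grad():
            for p_avg, p in zip(swa_model.parameters(), model.parameters()):
                p_avg.mul_((swa_n - 1) / swa_n).add_(p / swa_n)

# Before evaluating swa_model, recompute its BatchNorm running statistics
# with a forward pass over the training data.
```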
