
Deep Learning Resources

Speed up training using FP16 https://www.youtube.com/watch?v=ks3oZ7Va8HU
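
The FP16 speed-up in the video boils down to PyTorch's automatic mixed precision. A minimal sketch with a toy model and random data (the real model, data, and learning rate are placeholders):

```python
import torch
import torch.nn as nn

model = nn.Linear(128, 10).cuda()           # toy model standing in for a real network
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scaler = torch.cuda.amp.GradScaler()

for _ in range(10):                          # toy loop with random data
    x = torch.randn(32, 128, device="cuda")
    y = torch.randint(0, 10, (32,), device="cuda")
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():          # forward pass runs in FP16 where safe
        loss = nn.functional.cross_entropy(model(x), y)
    scaler.scale(loss).backward()            # scale the loss to avoid FP16 gradient underflow
    scaler.step(optimizer)
    scaler.update()
```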

Common Mistakes https://www.youtube.com/watch?v=O2wJ3tkc-TU

Data Augmentation https://www.youtube.com/watch?v=Zvd276j9sZ8

Torchvision Transforms Examples https://pytorch.org/vision/stable/auto_examples/plot_transforms.html#sphx-glr-auto-examples-plot-transforms-py

Data Augmentation - how to use GPU instead of CPU (T.Compose) https://github.com/pytorch/vision/releases/tag/v0.8.0
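
Rough sketch of the GPU path: since torchvision 0.8 the transforms accept tensors (including batches), so the pipeline can live on the GPU as an `nn.Sequential` instead of running on PIL images on the CPU. The batch here is random dummy data:

```python
import torch
import torch.nn as nn
import torchvision.transforms as T

gpu_transforms = nn.Sequential(
    T.RandomHorizontalFlip(p=0.5),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
).cuda()

batch = torch.rand(16, 3, 224, 224, device="cuda")   # dummy batch, values in [0, 1]
augmented = gpu_transforms(batch)                     # augmentation runs on the GPU
```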

Data Augmentation with NVIDIA DALI using GPU https://towardsdatascience.com/diving-into-dali-1c30c28731c0

Normalization can push tensor values outside [0, 1] (negative values are expected) https://discuss.pytorch.org/t/tensor-image-with-negative-values/91505/2
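
Quick check of why that happens: with ImageNet mean/std the normalized range is roughly [-2.1, 2.6], not [0, 1]:

```python
import torch
import torchvision.transforms as T

img = torch.rand(3, 8, 8)                    # pixel values in [0, 1]
norm = T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
out = norm(img)
print(out.min().item(), out.max().item())    # negative values (and values > 1) are normal
```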

Occlusion Sensitivity https://www.kaggle.com/blargl/simple-occlusion-and-saliency-maps
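
Occlusion sensitivity is simple enough to sketch directly. A rough PyTorch version (function name, patch size, and stride are my own choices), assuming a classifier `model` and a CHW `image` tensor:

```python
import torch

def occlusion_map(model, image, target_class, patch=16, stride=8, fill=0.0):
    """Slide a blank patch over the image and record how much the target class
    score drops: large drops mark regions the model relies on."""
    model.eval()
    _, H, W = image.shape
    heatmap = torch.zeros((H - patch) // stride + 1, (W - patch) // stride + 1)
    with torch.no_grad():
        base = model(image.unsqueeze(0)).softmax(dim=1)[0, target_class]
        for i, y in enumerate(range(0, H - patch + 1, stride)):
            for j, x in enumerate(range(0, W - patch + 1, stride)):
                occluded = image.clone()
                occluded[:, y:y + patch, x:x + patch] = fill
                score = model(occluded.unsqueeze(0)).softmax(dim=1)[0, target_class]
                heatmap[i, j] = base - score
    return heatmap
```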

Stanford CNNs https://cs231n.github.io/neural-networks-3/

Learning Rates and the creation of OneCycleLR https://spell.ml/blog/lr-schedulers-and-adaptive-optimizers-YHmwMhAAACYADm6F https://www.youtube.com/watch?v=bR7z2MA0p-o https://www.slideshare.net/SessionsEvents/competition-winning-learning-rates
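
Minimal OneCycleLR sketch with a toy model and random data; the main gotcha is that OneCycleLR is stepped once per batch, not per epoch:

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

epochs, steps_per_epoch = 5, 100
scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer, max_lr=0.1, epochs=epochs, steps_per_epoch=steps_per_epoch
)

for _ in range(epochs):
    for _ in range(steps_per_epoch):
        x, y = torch.randn(32, 10), torch.randint(0, 2, (32,))
        loss = nn.functional.cross_entropy(model(x), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        scheduler.step()        # stepped every batch
```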

GRUs https://blog.floydhub.com/gru-with-pytorch/
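
The PyTorch side of that tutorial is just `nn.GRU`; a tiny usage sketch with random input:

```python
import torch
import torch.nn as nn

gru = nn.GRU(input_size=32, hidden_size=64, num_layers=2, batch_first=True)
x = torch.randn(8, 20, 32)    # (batch, seq_len, features)
output, h_n = gru(x)          # output: (8, 20, 64); h_n: (2, 8, 64), final hidden state per layer
```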

PPO ML with Phil https://www.youtube.com/watch?v=hlv79rcHws0

Pruning https://pytorch.org/tutorials/intermediate/pruning_tutorial.html
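
Shortest useful example from that tutorial's API, on a toy model:

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 10))

# L1 unstructured pruning: zero out the 30% smallest-magnitude weights per layer
prune.l1_unstructured(model[0], name="weight", amount=0.3)
prune.l1_unstructured(model[2], name="weight", amount=0.3)

# prune.remove makes the pruning permanent by folding the mask into the weights
prune.remove(model[0], "weight")
prune.remove(model[2], "weight")
```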

Yann LeCun’s Lectures https://www.college-de-france.fr/site/en-yann-lecun/course-2016-04-01-11h00.htm

David Silver’s Lectures https://deepmind.com/learning-resources/-introduction-reinforcement-learning-david-silver

CS230 https://stanford.edu/~shervine/teaching/cs-230/cheatsheet-recurrent-neural-networks#architecture

Leslie Smith’s Equation

LR * weight_decay / (batch_size * (1 - momentum)) ≈ some constant

- If train loss << val loss: overfitting
- If train loss >> val loss: underfitting
- If train loss ≈ val loss: about right, i.e. near the point of divergence
- Training loss will always keep going down as the model saturates its parameters, so it says little on its own
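
Assuming that reconstruction of the relation, a tiny helper shows how the hyperparameters trade off (the numbers are arbitrary examples):

```python
def smith_constant(lr, weight_decay, batch_size, momentum):
    # LR * weight_decay / (batch_size * (1 - momentum)): the quantity to keep
    # roughly constant when trading hyperparameters off against each other
    return lr * weight_decay / (batch_size * (1 - momentum))

base = smith_constant(lr=0.1, weight_decay=1e-4, batch_size=128, momentum=0.9)

# e.g. doubling the batch size: doubling the LR keeps the constant unchanged
rebalanced = smith_constant(lr=0.2, weight_decay=1e-4, batch_size=256, momentum=0.9)
print(base, rebalanced)    # both ~7.8e-07
```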

Ditch pandas, use cuDF (and cuSpatial) https://github.com/rapidsai/cudf https://github.com/rapidsai/cuspatial
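
The cuDF API deliberately mirrors pandas, but the work runs on the GPU. A minimal sketch, assuming a RAPIDS install, an NVIDIA GPU, and a placeholder `data.csv` with a `label` column:

```python
import cudf

df = cudf.read_csv("data.csv")            # loaded straight into GPU memory
summary = df.groupby("label").mean()      # GPU-accelerated groupby, pandas-style syntax
print(summary)
```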
