This gist contains a list of important points from the fast.ai "Practical Deep Learning for Coders" and "Cutting Edge Deep Learning for Coders" MOOCs.

This gist contains a list of points I found very useful while watching the fast.ai "Practical Deep Learning for Coders" and "Cutting Edge Deep Learning for Coders" MOOCs by Jeremy Howard and team. The list may not be complete, since I watched the videos at 1.5x speed in a marathon session, but I wrote down as many things as I could that I found useful for getting a model working. A fair warning: the points are in no particular order, so you may find the topics jumbled up.

Before beginning, I want to thank Jeremy Howard, Rachel Thomas, and the entire fast.ai team for making this awesome, practically oriented MOOC.

  1. Progressive image resolution training: Train the network on lower-resolution images first, then increase the resolution to get better performance. This can be thought of as transfer learning from the same dataset at a different resolution. There is a paper by NVIDIA that used a similar approach to train GANs progressively.

  2. Cyclical learning rates: Gradually increasing the learning rate at the start of training helps avoid getting stuck in saddle points and explores more of the loss landscape. [https://arxiv.org/abs/1506.01186]
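
A minimal sketch of this in plain PyTorch, using the built-in CyclicLR scheduler (the model, data, and step counts below are just placeholders):

```python
import torch
import torch.nn as nn

# Toy model and data just to make the example self-contained.
model = nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

# The learning rate cycles between base_lr and max_lr over step_size_up batches.
scheduler = torch.optim.lr_scheduler.CyclicLR(
    optimizer, base_lr=1e-4, max_lr=1e-2, step_size_up=200)

for step in range(1000):
    x, y = torch.randn(32, 10), torch.randint(0, 2, (32,))
    loss = nn.functional.cross_entropy(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()  # update the learning rate once per batch
```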

  3. To reduce memory usage you can use lower-precision floating point, i.e. float16 instead of float32 (mixed-precision training).
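
A minimal mixed-precision sketch with PyTorch's torch.cuda.amp (assumes a CUDA GPU; the model and data are placeholders):

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2).cuda()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler()  # scales the loss to avoid float16 gradient underflow

for step in range(100):
    x = torch.randn(32, 10, device="cuda")
    y = torch.randint(0, 2, (32,), device="cuda")
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():  # run the forward pass in float16 where it is safe
        loss = nn.functional.cross_entropy(model(x), y)
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
```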

  4. Self-supervised learning: the labels are built into the data itself.

  5. For NLP tasks other than language modeling, you can use a language model for transfer learning, i.e. first train the model as a language model and then fine-tune it for the actual task.

  6. When using transfer learning for NLP, you can and should fine-tune the language model on the entire dataset, i.e. both the train and test sets (no labels are needed for language modeling).

  7. Discriminative learning rates: use different learning rates for different layer groups in your network.
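
A minimal sketch with PyTorch parameter groups (the toy "body"/"head" split stands in for the layer groups of a real pretrained network):

```python
import torch
import torch.nn as nn

# Toy network split into two layer groups.
body = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 64), nn.ReLU())
head = nn.Linear(64, 2)

# Each parameter group gets its own learning rate: small for the (pretrained) body,
# larger for the freshly initialised head.
optimizer = torch.optim.Adam([
    {"params": body.parameters(), "lr": 1e-5},
    {"params": head.parameters(), "lr": 1e-3},
])
```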

  8. Random forests can be used to find optimal hyperparameters.

  9. Use embeddings for categorical variables.
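
For example, with a hypothetical "day of week" variable in PyTorch:

```python
import torch
import torch.nn as nn

# A categorical variable with 7 levels, embedded in 4 dimensions.
day_of_week_emb = nn.Embedding(num_embeddings=7, embedding_dim=4)

codes = torch.tensor([0, 3, 6])       # integer category codes for three rows
features = day_of_week_emb(codes)     # shape (3, 4): learned dense features
print(features.shape)
```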

  10. For missing values: replace them with the median of the variable and add a new boolean column indicating whether the value was missing.
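
A small pandas sketch (the column name is made up):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"age": [25.0, np.nan, 40.0, np.nan]})

# Record which values were missing, then fill them with the median.
df["age_was_missing"] = df["age"].isna()
df["age"] = df["age"].fillna(df["age"].median())
print(df)
```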

  11. Wherever possible use transfer learning; it almost always improves performance.

  12. You can restrict the range of the sigmoid function in the last layer; it can increase model performance:

```
out = sigmoid(x) * (max_range - min_range) + min_range
```
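
As a PyTorch module, this might look like the sketch below (the range values are illustrative, e.g. movie ratings in [0, 5] with a little headroom):

```python
import torch
import torch.nn as nn

class RangeSigmoid(nn.Module):
    """Squashes the final activation into (min_range, max_range)."""
    def __init__(self, min_range: float, max_range: float):
        super().__init__()
        self.min_range, self.max_range = min_range, max_range

    def forward(self, x):
        return torch.sigmoid(x) * (self.max_range - self.min_range) + self.min_range

head = RangeSigmoid(0.0, 5.5)
print(head(torch.randn(4)))
```
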
  1. Complexity is not measured by the number of parameters.

  2. You can use the date/time data given in the dataset to extract various useful features, like the day of the week, the day of the month, the day of the year, the year, the month, the week, whether it is a holiday, etc. This is useful for detecting patterns such as a certain event increasing because it was a payday or a holiday.
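
A minimal pandas sketch of the kind of expansion meant here (the dates are made up):

```python
import pandas as pd

df = pd.DataFrame({"date": pd.to_datetime(["2018-12-25", "2019-01-31"])})

# Expand the raw timestamp into several useful features.
df["year"] = df["date"].dt.year
df["month"] = df["date"].dt.month
df["week"] = df["date"].dt.isocalendar().week
df["day_of_week"] = df["date"].dt.dayofweek
df["day_of_year"] = df["date"].dt.dayofyear
df["is_month_end"] = df["date"].dt.is_month_end
print(df)
```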

  3. More data is always useful.

  4. Too much dropout reduces the capacity of the network; experiment with multiple values.

  5. You can apply dropout to the output of the embedding layer too.

  6. Batch normalization helps smooth the loss landscape, thus allowing higher learning rates.

  7. Reflection padding works better than zero padding.

  8. A larger kernel for the first layer of a CNN is fine, since the number of input channels is only 3 (or very small) at that point.

  9. t[None] adds a new dimension to a tensor, e.g. converting a 3D tensor to a 4D tensor.
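
For example:

```python
import torch

t = torch.randn(3, 224, 224)   # a single image: channels x height x width
batch = t[None]                # same as t.unsqueeze(0): adds a leading batch dimension
print(batch.shape)             # torch.Size([1, 3, 224, 224])
```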

  10. Use forward hooks to grab the outputs of intermediate layers; this greatly simplifies implementing pyramid-style networks.
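
A minimal forward-hook sketch in PyTorch (the toy model and layer choice are arbitrary):

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
activations = {}

def save_output(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()  # stash this layer's output
    return hook

# Grab the output of the first Linear layer every time the model runs forward.
model[0].register_forward_hook(save_output("fc1"))

_ = model(torch.randn(4, 10))
print(activations["fc1"].shape)  # torch.Size([4, 32])
```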

  11. Ethics in AI: the privileged are processed by people and the poor are processed by algorithms - Cathy O'Neil

  12. When using transfer learning, you should normalize your dataset with the statistics of the dataset the model was originally trained on.
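
For an ImageNet-pretrained model, that means the usual ImageNet statistics, e.g. with torchvision transforms:

```python
import torchvision.transforms as T

# ImageNet statistics: use these when fine-tuning a model pretrained on ImageNet.
imagenet_mean = [0.485, 0.456, 0.406]
imagenet_std = [0.229, 0.224, 0.225]

preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=imagenet_mean, std=imagenet_std),
])
```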

  13. Paper read: Visualizing loss landscape of neural networks. [https://arxiv.org/abs/1712.09913]

  14. DenseNet works very well for smaller datasets and on segmentation tasks. ResNet also works very well on segmentation.

  15. You can apply modern methods on old papers and get SOTA results.

  16. A new U-Net-style network: a ResNet-34 encoder + subpixel convolutional upsampling.

  17. Subpixel convolutions for upsampling: a big improvement in removing checkerboard artifacts.

  18. Pretrain both the discriminator and the generator before adversarial training in a GAN.

  19. Use spectral normalization in GANs.

  20. Don't use momentum in GANs; they don't like it.

  21. The generator and discriminator loss values should converge, but the only way to confirm that a GAN is training well is by visual inspection of its outputs.

  22. Perceptual losses for style transfer and super-resolution.

  23. Say a network needs a complex loss function, or one that requires intermediate layer outputs; then wrap it like this:

```python
import torch.nn as nn

class SomeLoss(nn.Module):
    def __init__(self, network):  # plus any additional arguments the loss needs
        super().__init__()
        self.net = network
        self.hooks = ...  # register forward hooks on self.net to capture intermediate layer outputs
        # additional setup

    def forward(self, x, target):
        y_hat = self.net(x)
        # the intermediate outputs are now available via self.hooks
        # compute the loss from y_hat, target, and the hooked activations
        return loss
```
  1. Gradual unfreezing: take a trained model -> replace the last layers -> fine-tune the new last layers -> then unfreeze and fine-tune the earlier layers.
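
A minimal sketch of the freezing step in plain PyTorch (the toy body/head split stands in for a real pretrained backbone and new head):

```python
import torch.nn as nn

# Toy "pretrained body + new head" model.
body = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 64), nn.ReLU())
head = nn.Linear(64, 2)
model = nn.Sequential(body, head)

# Step 1: freeze the body and train only the new head.
for p in body.parameters():
    p.requires_grad = False

# ... train for a few epochs ...

# Step 2: unfreeze the body and fine-tune everything (ideally with a lower LR for it).
for p in body.parameters():
    p.requires_grad = True
```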

  2. Five steps to avoid overfitting: More data -> data augmentation -> generalizable architecture -> regularization -> reduce architecture complexity.

  3. Use lambda functions to reduce lines of code wherever possible.

  4. Functions should be 5 lines or less wherever possible.

  5. Python debugger: pdb - useful commands [s, n, l, c, u, h, p].

  6. In the case of multiple losses, find multipliers that bring all the losses to approximately the same magnitude.
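
A toy sketch of what "find a multiplier" means in practice (the tensors and the weight value are made up):

```python
import torch
import torch.nn.functional as F

pred_pixels, target_pixels = torch.rand(8, 3, 64, 64), torch.rand(8, 3, 64, 64)
pred_feats, target_feats = torch.randn(8, 512), torch.randn(8, 512)

pixel_loss = F.l1_loss(pred_pixels, target_pixels)    # roughly 0.3 on this toy data
feature_loss = F.mse_loss(pred_feats, target_feats)   # roughly 2.0 on this toy data

# Scale one loss so both terms contribute at about the same magnitude.
feat_weight = 0.15
total_loss = pixel_loss + feat_weight * feature_loss
```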

  7. Batch norm after ReLU makes more sense, since BN normalizes the activations and a ReLU applied after BN would shift the mean and variance again.

  8. BN should not be used right after the dropout layer.

  9. Be aware of the receptive field of your network.

  10. Use the chunksize parameter of pandas.read_csv to get an iterator over large datasets.
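
For example (the file name is hypothetical):

```python
import pandas as pd

# Read the CSV lazily, 100,000 rows at a time, instead of loading it all into memory.
total_rows = 0
for chunk in pd.read_csv("large_file.csv", chunksize=100_000):  # hypothetical file
    total_rows += len(chunk)
print(total_rows)
```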

  11. NLP tokenization: add a beginning-of-sentence token and field tokens; when converting UPPER case to lower case, add a token marking the upper case before the word.

  12. Limit the vocabulary to ~60,000 words and remove tokens that do not appear more than twice.

  13. For NLP tasks, the model can be pretrained on a subset of Wikipedia articles (language model pretraining).

  14. wget -r to download files recursively.

  15. Command-line tools can be run in a Jupyter notebook by placing ! before them.

  16. Since text sequences cannot be randomly shuffled, we can vary the sequence length instead to add randomness.

  17. perplexity = exp(cross_entropy)
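
In PyTorch terms (toy logits and targets):

```python
import torch
import torch.nn.functional as F

logits = torch.randn(32, 10000)            # model outputs over a 10k-word vocabulary
targets = torch.randint(0, 10000, (32,))   # the actual next words

cross_entropy = F.cross_entropy(logits, targets)
perplexity = torch.exp(cross_entropy)      # perplexity = exp(cross entropy)
print(perplexity.item())
```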

  18. Accuracy can be used in NLP as a metric.

  19. Don't implement a paper mindlessly. You can have ideas that the authors didn't have.

  20. Paper read: A disciplined approach to neural network hyper-parameters: Part 1 - learning rate, batch size, momentum, and weight decay. [https://arxiv.org/abs/1803.09820]

  21. Google's Fire library for turning Python functions into command-line interfaces.

  22. VNC/port forwarding to access Jupyter notebooks running on remote servers.

  23. !pip install git+URL for installing lib from git.

  24. To make a CNN independent of the input image size, use adaptive average pooling.
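
A minimal sketch showing why adaptive pooling frees the head from the input size (the tiny conv stack is a stand-in for a real backbone):

```python
import torch
import torch.nn as nn

features = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU())
pool = nn.AdaptiveAvgPool2d(1)   # always pools down to 1x1, whatever the spatial size
head = nn.Linear(64, 10)

for size in (128, 224, 320):     # different input image sizes
    x = torch.randn(2, 3, size, size)
    out = head(pool(features(x)).flatten(1))
    print(size, out.shape)       # always torch.Size([2, 10])
```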

  25. Using a bidirectional LSTM in a seq2seq model improves performance. Also look at teacher forcing and attention.

  26. In high-dimensional spaces everything is on the edge, so distance matters less than the angle between vectors; thus a cosine similarity loss works much better than an L1/L2 loss.
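
For example, a simple cosine-similarity-based loss in PyTorch (the vectors here are random stand-ins for predicted and target word vectors):

```python
import torch
import torch.nn.functional as F

pred = torch.randn(8, 300)     # predicted word vectors
target = torch.randn(8, 300)   # target word vectors

# 1 - cosine similarity penalises the angle between vectors, not their magnitude.
cosine_loss = (1 - F.cosine_similarity(pred, target, dim=1)).mean()
print(cosine_loss.item())
```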

  27. The Python library nmslib for fast nearest-neighbor queries.

  28. Get the word vectors of the ImageNet classes from WordNet -> train an ImageNet model to predict word vectors -> now you have a search engine for images: input a word -> get its word vector -> retrieve images with similar predicted vectors. I apologize, I do not have the link for this paper (a commenter below points to the DeViSE paper).

  29. In practice, LeakyReLU is useful for smaller datasets.

  30. In neural networks, replace operations in the forward function with their _ versions where safe, for example replace + with add_, to perform the operation in place and save GPU memory.
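
For example:

```python
import torch

x = torch.randn(1024, 1024)
y = torch.randn(1024, 1024)

z = x + y    # allocates a new tensor for the result
x.add_(y)    # in-place: reuses x's memory, saving an allocation

# The same idea applies to e.g. mul_() and relu_(); just be careful not to
# modify tensors that autograd still needs for the backward pass.
```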

  31. Paper read: Wide residual networks. [https://arxiv.org/abs/1605.07146]. The fast.ai team got first place on the DAWNBench benchmark.

  32. Topic read: Stochastic Weight Averaging (SWA).

  33. Fastai train phase API.

  34. Paper read: LARS. [https://arxiv.org/abs/1708.03888]

  35. You can train the model with different optimizers during different training phases.

  36. You can break a 7x7 filter into a 1x7 and a 7x1 filter (spatially separable convolutions). This reduces computation.
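
A quick sketch comparing the two in PyTorch (the channel counts are arbitrary):

```python
import torch
import torch.nn as nn

x = torch.randn(1, 64, 56, 56)

full = nn.Conv2d(64, 64, kernel_size=7, padding=3)          # 64*64*7*7 weights
factored = nn.Sequential(
    nn.Conv2d(64, 64, kernel_size=(1, 7), padding=(0, 3)),  # 64*64*1*7 weights
    nn.Conv2d(64, 64, kernel_size=(7, 1), padding=(3, 0)),  # 64*64*7*1 weights
)

print(full(x).shape, factored(x).shape)  # same output shape, far fewer multiply-adds
```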

  37. The very first stage of a backbone network, where the input channels (say 3) are increased to a higher number (say 64), is called the stem. The Inception stem is much better than those of other networks, so one could try an Inception stem on a ResNet backbone.

  38. Paper read: Progressive growing of GANs. [https://arxiv.org/abs/1710.10196]

  39. The most interesting layers to grab outputs from are the ones just before a max-pooling layer, because they represent the data best before the grid size changes.

I may have missed some points and there may be some mistakes. I haven't included full paper citations, but all of the above points are from the MOOC and the papers presented in it.

@niazangels commented:
This might be the paper you're looking for: DeViSE: A Deep Visual-Semantic Embedding Model (https://research.google.com/pubs/archive/41473.pdf)
