@pbugnion
pbugnion / ipython_notebook_in_git.md
Last active October 22, 2023 12:25
Keeping IPython notebooks under Git version control

This gist lets you keep IPython notebooks in Git repositories. It tells Git to ignore prompt numbers and cell outputs when deciding whether a file has changed.

To use the script, follow the instructions given in the script's docstring.

For further details, read this blog post.

The procedure outlined here is inspired by this answer on Stack Overflow.
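
Mechanically, this relies on a Git "clean" filter: a program that reads a file's content on stdin and writes a normalized version to stdout, which Git then uses for comparisons. The sketch below only illustrates the idea (it is not the gist's actual script); it strips outputs and execution counts from the standard .ipynb JSON.

```python
#!/usr/bin/env python
"""Illustrative Git clean filter for Jupyter/IPython notebooks.

Reads a notebook's JSON from stdin, removes program outputs and
prompt (execution) numbers, and writes the result to stdout.
A sketch of the approach, not the gist's actual script.
"""
import json
import sys

notebook = json.load(sys.stdin)

for cell in notebook.get("cells", []):
    if cell.get("cell_type") == "code":
        cell["outputs"] = []            # drop program outputs
        cell["execution_count"] = None  # drop prompt numbers

json.dump(notebook, sys.stdout, indent=1, sort_keys=True)
```

You would then register the filter, e.g. `git config filter.clean_ipynb.clean "python ipynb_filter.py"` and put `*.ipynb filter=clean_ipynb` in `.gitattributes` (the filter and file names here are illustrative; the script's docstring gives the real instructions).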

@jqtrde
jqtrde / modern-geospatial-python.md
Last active August 1, 2023 14:50
Modern remote sensing image processing with Python
@tcvieira
tcvieira / setupFastaiV1.md
Last active October 12, 2021 13:10
Setup Fast.ai v1 on Paperspace Fast.ai Template

Set up Fastai v1 on Paperspace

Machine

  • Create a Fast.ai machine from the public templates with a P4000 GPU and a public IP

Connect to the machine

  • $ source deactivate fastai
  • $ pip install virtualenv
@bhavikngala
bhavikngala / fast_ai_mooc_important_points.md
Last active January 6, 2023 23:04
This gist contains a list of important points from the fast.ai "Practical Deep Learning for Coders" and "Cutting Edge Deep Learning for Coders" MOOCs

This gist contains a list of points I found very useful while watching the fast.ai "Practical Deep Learning for Coders" and "Cutting Edge Deep Learning for Coders" MOOCs by Jeremy Howard and team. The list may not be complete, as I watched the videos at 1.5x speed in a marathon, but I wrote down as many things as I could that helped get a model working. A fair warning: the points are in no particular order, so the topics may seem jumbled.

Before beginning, I want to thank Jeremy Howard, Rachel Thomas, and the entire fast.ai team for making this awesome, practically oriented MOOC.

  1. Progressive image resolution training: train the network on low-resolution images first, then increase the resolution to get better performance. This can be thought of as transfer learning from the same dataset at a different resolution. NVIDIA's progressive growing of GANs paper used a similar approach. (A code sketch follows this list.)

  2. Cyclical learning rates: gradually increasing the learning rate at the start of training helps avoid getting stuck in poor local minima, and annealing it afterwards helps convergence.
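
Both of these are built into the fastai library. Below is a minimal sketch, assuming the fastai v1 API (the data path, sizes, and learning rates are illustrative): `fit_one_cycle` applies the one-cycle learning-rate policy, and pointing the same learner at a higher-resolution `DataBunch` implements progressive resizing.

```python
# Minimal sketch of progressive resizing with one-cycle (cyclical)
# learning rates, assuming the fastai v1 API. The folder layout,
# image sizes, and learning rates are illustrative.
from fastai.vision import (ImageDataBunch, cnn_learner, models,
                           get_transforms, accuracy)

path = "data/my_images"  # hypothetical folder with train/valid subfolders

# Stage 1: train on low-resolution (128 px) images.
data_small = ImageDataBunch.from_folder(
    path, ds_tfms=get_transforms(), size=128, bs=64).normalize()
learn = cnn_learner(data_small, models.resnet34, metrics=accuracy)
learn.fit_one_cycle(5)  # LR ramps up, then anneals back down

# Stage 2: swap in higher-resolution (256 px) data and fine-tune,
# i.e. transfer learning from the same dataset at a new resolution.
data_large = ImageDataBunch.from_folder(
    path, ds_tfms=get_transforms(), size=256, bs=32).normalize()
learn.data = data_large
learn.fit_one_cycle(5, max_lr=1e-4)
```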