ML/DL Resources
- Conv2D

```python
from tensorflow.keras.layers import Input, Conv2D

# Batch size = B
# Image height = h
# Image width = w
# Color channels = c
# (B, h, w, c)
x = Input(shape=(h, w, c))
# Convolution height = ch
# Convolution width = cw
# Convolution filters = f
# With 'valid' padding and strides of 1:
# (B, h - ch + 1, w - cw + 1, f) <- (B, h, w, c)
x = Conv2D(filters=f, kernel_size=(ch, cw), strides=(1, 1))(x)
```
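The shape arithmetic above can be checked without building a model. Below is a minimal sketch in plain Python (no Keras required); `conv2d_output_shape` is a helper name introduced here, not a Keras API.

```python
def conv2d_output_shape(h, w, c, f, ch, cw, stride=1, padding="valid"):
    """Spatial output shape of a 2D convolution (batch dim omitted).

    'valid' padding drops border pixels; 'same' pads so that the
    spatial size is preserved when stride == 1.
    """
    if padding == "valid":
        out_h = (h - ch) // stride + 1
        out_w = (w - cw) // stride + 1
    else:  # "same"
        out_h = -(-h // stride)  # ceiling division
        out_w = -(-w // stride)
    return (out_h, out_w, f)

# A 28x28 RGB image through 32 filters of size 3x3:
print(conv2d_output_shape(28, 28, 3, 32, 3, 3))  # -> (26, 26, 32)
```

Note that with 'valid' padding each spatial dimension shrinks by `kernel_size - 1`, which is why the comment in the snippet above reads `h - ch + 1` rather than `h - ch`.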
- With `return_sequences=True`

```python
from tensorflow.keras.layers import Input, LSTM

# Batch size = B
# Timesteps = t
# Features = f
# (B, t, f)
x = Input(shape=(t, f))
# (B, t, out) <- (B, t, f)
x = LSTM(out, return_sequences=True, dropout=0.5)(x)
```
- With `return_sequences=False`

```python
# (B, t, f)
x = Input(shape=(t, f))
# (B, out) <- (B, t, f)
x = LSTM(out, return_sequences=False, dropout=0.5)(x)
```
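The two `return_sequences` settings differ only in whether the per-timestep outputs are kept. A tiny plain-Python sketch of the resulting shapes (`lstm_output_shape` is a name introduced here for illustration, not a Keras function):

```python
def lstm_output_shape(t, f, units, return_sequences):
    """Output shape of an LSTM layer (batch dimension omitted).

    return_sequences=True keeps one output vector per timestep;
    return_sequences=False keeps only the final timestep's output.
    """
    return (t, units) if return_sequences else (units,)

print(lstm_output_shape(10, 8, 64, True))   # -> (10, 64)
print(lstm_output_shape(10, 8, 64, False))  # -> (64,)
```

Use `return_sequences=True` when stacking LSTMs or feeding an attention layer, and `return_sequences=False` when only a summary vector of the sequence is needed (e.g. before a final `Dense` classifier).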
Before even diving into all this, I advise you to set up a conda environment where you do all your deep learning. Instructions for installing and using conda can be found here.
Useful when coding
Understanding the different networks you might want to code up or draw inspiration from
- Types of Deep Neural Networks (link)
- LSTMs (link)
- Understanding Attention (link)
- Attention mechanisms for decoders in sequence-to-sequence architectures in Keras (link)
- Understanding Transformer Networks (link)
- Coding Transformer Networks in PyTorch (link)
Understanding/Looking-up basic math
- Probability Theory (link)
- Statistics (link)
- Linear Algebra (link) - Wonderful videos by 3Blue1Brown to understand what operations in linear algebra mean with respect to the geometry of the space.
- Useful theorems (link)
Set-up
- Conda environment with cudnn, cuda, tensorflow-gpu set up (link) -- Download and run

```shell
conda env create -f ml37.yml
```
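Once the environment is created, it can be activated and sanity-checked. A sketch of the remaining steps, assuming the environment defined inside `ml37.yml` is named `ml37` (the name is an assumption based on the file name):

```shell
# Activate the environment; "ml37" is assumed to be the name
# declared in ml37.yml
conda activate ml37
# Verify that the GPU build of TensorFlow can see a GPU
python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
```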