Rishabh Anand rish-16

@rish-16
rish-16 / datagen.py
Created May 29, 2021 06:20
A guide on Colab TPU training using PyTorch XLA (Part 6)
'''
num_replicas is the number of processes taking part in training
(one per TPU core); together with rank, it gives each core its own
distinct shard of im_train.
'''
train_sampler = torch.utils.data.distributed.DistributedSampler(
    im_train,
    num_replicas=xm.xrt_world_size(),
    rank=xm.get_ordinal(),
    shuffle=True
)
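A minimal sketch of wiring this sampler into a DataLoader and feeding batches to the core's XLA device; the batch_size and num_workers values are illustrative assumptions, not from the gist.

import torch
import torch_xla.core.xla_model as xm
import torch_xla.distributed.parallel_loader as pl

train_loader = torch.utils.data.DataLoader(
    im_train,
    batch_size=32,
    sampler=train_sampler,   # the DistributedSampler defined above
    num_workers=2,
    drop_last=True
)

device = xm.xla_device()
# wrap the loader so batches are pre-loaded onto this core's XLA device
para_loader = pl.ParallelLoader(train_loader, [device]).per_device_loader(device)
for data, target in para_loader:
    ...  # forward / backward / xm.optimizer_step(optimizer)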
@rish-16
rish-16 / dataset_dwnld.py
Last active May 29, 2021 06:15
A guide on Colab TPU training using PyTorch XLA (Part 5)
# add your custom transforms and augmentations
T = transforms.Compose([
    transforms.ToTensor(),
    ...
])

# only the master process downloads the dataset; every other process
# waits at this rendezvous until the master is done
if not xm.is_master_ordinal():
    xm.rendezvous('download_only_once')
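For context, a minimal sketch of the full download-once barrier; the CIFAR-10 dataset and path below are illustrative stand-ins, not from the gist.

import torch_xla.core.xla_model as xm
from torchvision import datasets, transforms

T = transforms.Compose([transforms.ToTensor()])

# every non-master process blocks here until the master reaches the same rendezvous
if not xm.is_master_ordinal():
    xm.rendezvous('download_only_once')

im_train = datasets.CIFAR10('/tmp/cifar10', train=True, download=True, transform=T)

# the master downloads first, then releases everyone else
if xm.is_master_ordinal():
    xm.rendezvous('download_only_once')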
@rish-16
rish-16 / mapfn.py
Created May 29, 2021 06:08
A guide on Colab TPU training using PyTorch XLA (Part 4)
def map_fn(index, flags):
    '''
    Contains the following:
      - data preprocessing
      - model instantiation
      - training loop
      - validation (optional)
      - testing

    NOTE: all code beyond this point belongs inside this function unless otherwise stated.
    '''
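As a sketch of how map_fn is typically launched on all eight Colab TPU cores; the contents of the flags dict are illustrative assumptions.

import torch_xla.distributed.xla_multiprocessing as xmp

flags = {'batch_size': 32, 'num_epochs': 10, 'seed': 1234}  # illustrative hyperparameters

# spawn one copy of map_fn per TPU core; `index` receives that core's ordinal
xmp.spawn(map_fn, args=(flags,), nprocs=8, start_method='fork')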
@rish-16
rish-16 / model.py
Created May 29, 2021 05:54
A guide on Colab TPU training using PyTorch XLA (Part 3)
class MyCustomNet(nn.Module):
    def __init__(self, myparams):
        super().__init__()
        # define layers
        ...

    def forward(self, x):
        '''
        Pass your inputs through your layers as you normally would.
        '''
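One possible way to fill in the skeleton, assuming a 10-class classifier over 32x32 RGB images; the layer shapes here are illustrative, not from the gist.

import torch.nn as nn
import torch.nn.functional as F

class MyCustomNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 16, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(16, 32, kernel_size=3, padding=1)
        self.fc = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)  # 32x32 -> 16x16
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)  # 16x16 -> 8x8
        x = x.flatten(1)
        return self.fc(x)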
@rish-16
rish-16 / pytorch_imports.py
Created May 29, 2021 05:48
A guide on Colab TPU training using PyTorch XLA (Part 2)
# download and install PyTorch XLA
!pip install cloud-tpu-client==0.10 https://storage.googleapis.com/tpu-pytorch/wheels/torch_xla-1.8.1-cp37-cp37m-linux_x86_64.whl
# basic torch sub-modules (feel free to add more, e.g. einops, time, random, etc.)
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
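The later parts of the guide lean on the PyTorch XLA modules; a sketch of the imports they assume.

import torch_xla
import torch_xla.core.xla_model as xm                    # device handles, rendezvous, optimizer_step
import torch_xla.distributed.parallel_loader as pl       # per-core data loading
import torch_xla.distributed.xla_multiprocessing as xmp  # spawning one process per core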
@rish-16
rish-16 / check_tpu.py
Created May 29, 2021 05:41
A guide on Colab TPU training using PyTorch XLA (Part 1)
import os
assert os.environ.get('COLAB_TPU_ADDR'), 'Make sure to select TPU from Edit > Notebook settings > Hardware accelerator'
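As an optional sanity check (an assumption, not part of the gist), you can also print the TPU address Colab exposes to confirm a TPU runtime is attached.

import os
print(os.environ.get('COLAB_TPU_ADDR'))  # a host:port string when a TPU runtime is attached, else None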
@rish-16
rish-16 / cs2040s_w12_l1.md
Created April 6, 2021 09:45
Notes from CS2040S Week 12 Lecture 1 on MSTs

CS2040S Week 12 Lecture 1

Notes from CS2040S Week 12 Lecture 1 on Minimum Spanning Trees

MST

  • A spanning tree with minimum weight
  • Covers all nodes once
  • No cycles
  • If you cut an MST, both pieces are MSTs
  • For every cycle, max weight edge is not in the MST
  • For every cut D, the minimum weight edge that crosses the cut is in the MST
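A minimal Kruskal's algorithm sketch (not from the lecture notes) that exercises these properties: edges are considered in increasing weight order, and an edge is skipped exactly when it would close a cycle, i.e. when it is the maximum weight edge on that cycle.

def kruskal(n, edges):
    # edges: list of (weight, u, v) with nodes labelled 0..n-1
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:                 # adding (u, v) does not form a cycle
            parent[ru] = rv
            mst.append((u, v, w))
            total += w
    return mst, total

# example: a triangle plus one extra node
print(kruskal(4, [(1, 0, 1), (2, 1, 2), (3, 0, 2), (4, 2, 3)]))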
@rish-16
rish-16 / cs2040s_w10_l2.md
Created March 25, 2021 11:30
Notes from CS2040S Week 10 Lecture 2 on DAGs

CS2040S Week 10 Lecture 2

Notes from CS2040S Week 10 Lecture 2 on Directed Acyclic Graphs

Directed Acyclic Graphs

  • Every edge has a direction between nodes
  • No cycles in the graph

Topological Ordering

  • An ordering of the nodes such that every directed edge goes from an earlier node to a later node in the ordering
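A minimal Kahn's algorithm sketch (not from the lecture notes): repeatedly remove a node with in-degree 0; the removal order is a valid topological ordering.

from collections import deque

def topo_sort(n, edges):
    adj = [[] for _ in range(n)]
    indeg = [0] * n
    for u, v in edges:          # directed edge u -> v
        adj[u].append(v)
        indeg[v] += 1

    q = deque(i for i in range(n) if indeg[i] == 0)
    order = []
    while q:
        u = q.popleft()
        order.append(u)
        for v in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                q.append(v)
    return order if len(order) == n else None  # None => the graph has a cycle

print(topo_sort(4, [(0, 1), (0, 2), (1, 3), (2, 3)]))  # e.g. [0, 1, 2, 3]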
@rish-16
rish-16 / cs2040s_w10_l1.md
Created March 23, 2021 09:55
Notes from CS2040S Week 10 Lecture 1 on Shortest Path Algorithms

CS2040S Week 10 Lecture 1

Notes from CS2040S Week 10 Lecture 1 on Shortest Path Algorithms

Edge Weights

  • Edges have weights on them that represent cost of traversal
    • Could be distance, money, energy needed
  • The shortest path isn't necessarily the one with the fewest edges or the smallest physical distance
    • It's the path with the minimal total weight (see the Dijkstra sketch below)
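A minimal Dijkstra sketch (not from the lecture notes), assuming non-negative edge weights; it finds the path minimising total weight from the source.

import heapq

def dijkstra(adj, s):
    # adj: {u: [(v, w), ...]} with non-negative weights w; s: source node
    dist = {u: float('inf') for u in adj}
    dist[s] = 0
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue                      # stale queue entry
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

graph = {'A': [('B', 1), ('C', 4)], 'B': [('C', 2)], 'C': []}
print(dijkstra(graph, 'A'))   # {'A': 0, 'B': 1, 'C': 3}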

Questions

@rish-16
rish-16 / cs2040s_w9_l2.md
Created March 18, 2021 09:02
Notes from CS2040S Week 9 Lecture 2 on Graphs

CS2040S Week 9 Lecture 2

Notes from CS2040S Week 9 Lecture 2 on Graphs

BFS

Properties of parent edges

  • They form a tree (shortest path tree)
    • There are no cycles – all nodes visited only once
  • Following parent edges back to the starting node s always traces a shortest path
  • For now, all edges have weight/distance 1
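A minimal BFS sketch (not from the lecture notes): the parent pointers recorded during the traversal form a shortest-path tree when every edge has weight 1.

from collections import deque

def bfs_tree(adj, s):
    # adj: {u: [v, ...]} adjacency lists; s: starting node
    parent = {s: None}
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in parent:        # each node is visited only once => no cycles
                parent[v] = u
                dist[v] = dist[u] + 1
                q.append(v)
    return parent, dist

graph = {'s': ['a', 'b'], 'a': ['c'], 'b': ['c'], 'c': []}
print(bfs_tree(graph, 's'))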