Arulkumar InnovArul

InnovArul / nvidia_docker.sh
Created December 3, 2023 00:46
nvidia-docker installation
# add NVIDIA's GPG key for the nvidia-docker apt repository
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
# detect the distribution (e.g. ubuntu20.04) and register the matching apt source
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
# install nvidia-docker2 and reload the Docker daemon configuration
sudo apt-get update
sudo apt-get install -y nvidia-docker2
sudo pkill -SIGHUP dockerd
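
A common smoke test to confirm the runtime is wired up (the image tag is just an example):

sudo docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi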

Answering https://discuss.pytorch.org/t/a-set-of-data-sum-as-the-dividend-how-to-find-grad/182303:

We can find $grad_x = \frac{dz}{dx}$ in the following way:

$$grad_x = \frac{dz}{dx} = \left[\frac{dz}{dx_0},\frac{dz}{dx_1},\frac{dz}{dx_2}\right] = \left[\frac{dy}{dx_0}\frac{dz}{dy},\frac{dy}{dx_1}\frac{dz}{dy},\frac{dy}{dx_2}\frac{dz}{dy}\right]$$

Now let's concentrate on finding one element $\frac{dz}{dx_0} = \frac{dy}{dx_0}\frac{dz}{dy}$
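
If, as in the thread, $y$ is the sum of the inputs (an assumption here), i.e. $y = x_0 + x_1 + x_2$, then $\frac{dy}{dx_0} = 1$ and

$$\frac{dz}{dx_0} = \frac{dy}{dx_0}\frac{dz}{dy} = \frac{dz}{dy}$$

so every element of $grad_x$ is just $\frac{dz}{dy}$. A quick autograd check of this (a minimal sketch; the choice $z = 1/y$ is arbitrary):

import torch

x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
y = x.sum()     # dy/dx_i = 1 for every i
z = 1.0 / y     # dz/dy = -1/y**2
z.backward()

# all three entries equal dz/dy = -1/6**2 ≈ -0.0278
print(x.grad)   # tensor([-0.0278, -0.0278, -0.0278])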

import torch
import torch.nn as nn, torch.nn.functional as F

class CNN(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels=1, out_channels=8, kernel_size=6, stride=1, padding=2)
        self.RL1 = nn.ReLU()

    def forward(self, x):
        # the preview ends above; this is an assumed minimal forward: conv then ReLU
        return self.RL1(self.conv1(x))
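
A shape check for the module above (the input size is just an example):

model = CNN()
out = model(torch.randn(1, 1, 28, 28))   # batch of one 28x28 grayscale image
print(out.shape)   # torch.Size([1, 8, 27, 27]) since (28 + 2*2 - 6)/1 + 1 = 27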
InnovArul / multi_loss_optimizer.py
Created August 30, 2022 23:17
using multiple optimizers
import torch, torch.nn as nn
import torch.optim as optim

# print the gradients of every parameter of the given modules
def print_grads(modules, string):
    print(string)
    for mod in modules:
        for p in mod.parameters():
            print(p.grad)
        print('**')
    print("-----")
InnovArul / create_user.sh
Created February 17, 2022 05:29
create users - ubuntu
command:
--------
sudo useradd -m -d /home/<user> -s /bin/bash -c "<rollnumber>" -U <user>
password:
---------
sudo passwd <user>
Add user to sudo
-----------------
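The usual Ubuntu command for this (assuming the default sudo group) is:
sudo usermod -aG sudo <user>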
InnovArul / linear_partial_freeze_no_weightdecay.py
Last active November 20, 2021 21:16
to freeze weights and avoid weight decay of frozen weights
import torch, torch.nn as nn
import torch.optim as optim, torch.nn.functional as F

class CustomLinearNoWeightDecay(nn.Module):
    def __init__(self, mask):
        super().__init__()
        # mask is a buffer (not a Parameter), so it gets no gradient and no weight decay
        self.register_buffer("mask", mask)
        out_channels, in_channels = mask.shape
        self.weight = nn.Parameter(torch.randn(out_channels, in_channels))
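    # forward is not shown in the preview; a plausible completion (an assumption,
    # not necessarily the gist's code) applies the mask so masked-out entries
    # contribute nothing and receive zero gradient:
    def forward(self, x):
        return F.linear(x, self.weight * self.mask)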

import torch, torch.nn as nn

class LowlevelModule(nn.Module):
    def __init__(self, custom_val):
        super().__init__()
        self.custom_val = custom_val

    def print_custom_val(self):
        # custom_val is expected to be a scalar tensor, hence .item()
        print(self.custom_val.item())
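
Usage is then simply (the value is illustrative):

m = LowlevelModule(torch.tensor(3.0))
m.print_custom_val()   # prints 3.0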
InnovArul / reset_params.py
Created November 18, 2021 20:03
deepcopy and reset params
import torchvision, copy
import torch, torch.nn as nn

def reset_all_weights(model: nn.Module) -> None:
    """
    refs:
    - https://discuss.pytorch.org/t/how-to-re-set-alll-parameters-in-a-network/20819/6
    - https://stackoverflow.com/questions/63627997/reset-parameters-of-a-neural-network-in-pytorch
    - https://pytorch.org/docs/stable/generated/torch.nn.Module.html
    """
    # body as in the referenced answers: call reset_parameters() on every
    # submodule that defines it
    @torch.no_grad()
    def weight_reset(m: nn.Module):
        reset_parameters = getattr(m, "reset_parameters", None)
        if callable(reset_parameters):
            m.reset_parameters()

    model.apply(weight_reset)
InnovArul / torch_extn.sh
Last active October 25, 2021 21:27
to build torch extension for all cuda architectures
# build the extension for several GPU architectures (+PTX for forward compatibility)
TORCH_CUDA_ARCH_LIST="5.2 6.0 6.1 7.0 7.5 8.0 8.6+PTX" python setup.py build
python setup.py install
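
To build for just the local GPU instead, its compute capability can be queried first (a small sketch; requires a visible CUDA device):

import torch
print(torch.cuda.get_device_capability())   # e.g. (8, 6) -> TORCH_CUDA_ARCH_LIST="8.6"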

:: open Cmder in the directory given by %ActivDir%
D:\cmder_mini\Cmder.exe "%ActivDir%"
InnovArul / video_lectures.sh
Created February 9, 2021 01:43
to preprocess dvp video lectures
# rename the raw Day* folder to day_
mv Day.. day_
# move the MTS files up to the root dir
mv ./01/2019/* .
# concatenate all MTS files into a single MP4
ffmpeg -i "concat:$(echo *.MTS | tr ' ' '|')" -strict -2 concat_out.mp4