Akshay Chawla (akshaychawla)

  • Vicarious Inc.
  • working from home
akshaychawla / cscheduler.py
Last active December 22, 2023 11:56
Learning rate schedulers for PyTorch. (1) Cosine annealing with warmup and (2) Linear with warmup
"""
Useful learning rate schedulers
Warmup
CosineAnnealingLRWarmup
"""
import torch
import math
import functools
def _cosine_decay_warmup(iteration, warmup_iterations, total_iterations):
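The preview is truncated at the multiplier's signature. A minimal sketch of the idea, not necessarily the gist's exact code: a multiplier that warms up linearly and then follows a cosine decay, wrapped into a scheduler with torch.optim.lr_scheduler.LambdaLR. The CosineAnnealingLRWarmup name comes from the docstring above; the T_max/T_warmup argument names are assumptions.

def _cosine_decay_warmup(iteration, warmup_iterations, total_iterations):
    # linear warmup from 0 to 1, then cosine decay from 1 back towards 0
    if iteration <= warmup_iterations:
        multiplier = iteration / warmup_iterations
    else:
        progress = (iteration - warmup_iterations) / max(1, total_iterations - warmup_iterations)
        multiplier = 0.5 * (1.0 + math.cos(math.pi * progress))
    return multiplier

def CosineAnnealingLRWarmup(optimizer, T_max, T_warmup):
    # LambdaLR multiplies each param group's base LR by the value returned for the current step
    _decay_func = functools.partial(_cosine_decay_warmup,
                                    warmup_iterations=T_warmup,
                                    total_iterations=T_max)
    return torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=_decay_func)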
akshaychawla / auto_gpu.sh
Last active June 30, 2022 06:13
For those of us who forget to set CUDA_VISIBLE_DEVICES
if [ "${CUDA_VISIBLE_DEVICES}" = "auto" ]
then
# number of gpus
NUMGPUS=`nvidia-smi -q -d MEMORY | grep "Attached GPU" | grep -P -o "\d"`
echo "NUMGPUS: $NUMGPUS"
# extract free-memory for each gpu
MEMLIST="ID FREEMEM"
for (( DEVICE=0; DEVICE<${NUMGPUS}; DEVICE++ ))
do
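        # Sketch of the remainder -- the gist preview is cut off here; the
        # nvidia-smi fields parsed below are assumptions, adjust as needed.
        FREEMEM=$(nvidia-smi -q -d MEMORY -i ${DEVICE} | grep -A5 "FB Memory" | grep "Free" | grep -P -o "\d+" | head -n1)
        MEMLIST="${MEMLIST}\n${DEVICE} ${FREEMEM}"
    done
    # expose only the gpu with the most free memory
    export CUDA_VISIBLE_DEVICES=$(echo -e "${MEMLIST}" | tail -n +2 | sort -k2 -nr | head -n1 | cut -d' ' -f1)
    echo "CUDA_VISIBLE_DEVICES=${CUDA_VISIBLE_DEVICES}"
fi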
akshaychawla / funky_lambda.py
Created November 22, 2017 05:44
Lambda layer with multiple inputs in Keras.
import numpy as np
from keras.models import Model
from keras.layers import Dense, Activation, Lambda, Input
import keras.backend as K
from keras.utils import to_categorical
# Model definition
def foo(ip):
    a = ip[1]
    x = ip[0]
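The preview ends inside foo. A minimal sketch of the idea, reusing the imports shown above: a Lambda layer receives multiple inputs when they are passed as a single list. The element-wise product inside foo, the layer sizes, and the loss are illustrative assumptions, not necessarily what the gist does.

def foo(ip):
    # ip arrives as a list of tensors; unpack and combine them
    x = ip[0]
    a = ip[1]
    return x * a                      # any element-wise op on both inputs

inp_1 = Input(shape=(10,))
inp_2 = Input(shape=(10,))
merged = Lambda(foo)([inp_1, inp_2])  # multiple inputs are passed as a list
out = Dense(2, activation="softmax")(merged)

model = Model(inputs=[inp_1, inp_2], outputs=out)
model.compile(optimizer="sgd", loss="categorical_crossentropy")
model.summary()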