Jason Ramapuram (jramapuram)
@jramapuram
jramapuram / torchaudio_libfix.patch
Created November 1, 2022 00:06
Fix for torchaudio library linking
diff --git a/CMakeLists.txt b/CMakeLists.txt
index 696a736a..392145f0 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -1,5 +1,7 @@
cmake_minimum_required(VERSION 3.18 FATAL_ERROR)
+link_directories(/miniconda/lib)
+
# Most of the configurations are taken from PyTorch
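The two added lines point CMake at the conda environment's library directory so torchaudio links against the libraries shipped in miniconda. Assuming the patch is taken against a torchaudio source checkout, it would be applied from the repository root with git apply torchaudio_libfix.patch before rebuilding; the /miniconda/lib path is specific to this container layout and would need to match the local conda prefix.
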
ViT(
(patch_embed): PatchEmbed(
(proj): Conv2d(3, 768, kernel_size=(16, 16), stride=(16, 16))
(norm): Identity()
)
(backbone): xFormer(
(encoders): ModuleList(
(0): xFormerEncoderBlock(
(mha): MultiHeadDispatch(
(attention): ScaledDotProduct(
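The listing above is the printed module tree of a ViT whose encoder backbone is built with xFormers. A minimal sketch of the patch-embedding stage it shows (a Conv2d with 16x16 kernel and stride, followed by an Identity norm), assuming standard PyTorch; the xFormer encoder stack is elided:

import torch
import torch.nn as nn

class PatchEmbed(nn.Module):
    """Split an image into non-overlapping 16x16 patches and project each to 768 dims."""
    def __init__(self, in_chans=3, embed_dim=768, patch_size=16):
        super().__init__()
        # Stride equal to the kernel size turns the convolution into a patch projector.
        self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size)
        self.norm = nn.Identity()  # matches the printed (norm): Identity()

    def forward(self, x):
        x = self.proj(x)                  # [B, 768, H/16, W/16]
        x = x.flatten(2).transpose(1, 2)  # [B, num_patches, 768]
        return self.norm(x)

# e.g. a 224x224 image yields 14*14 = 196 patch tokens of width 768
tokens = PatchEmbed()(torch.randn(1, 3, 224, 224))  # -> torch.Size([1, 196, 768])
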
(pytorch1.10.0-py39) ➜ ~ pip install xformers
Collecting xformers
Using cached xformers-0.0.7.tar.gz (95 kB)
Preparing metadata (setup.py) ... done
Requirement already satisfied: torch>=1.8.1 in /opt/homebrew/Caskroom/miniforge/base/envs/pytorch1.10.0-py39/lib/python3.9/site-packages (from xformers) (1.10.0)
Requirement already satisfied: numpy in /opt/homebrew/Caskroom/miniforge/base/envs/pytorch1.10.0-py39/lib/python3.9/site-packages (from xformers) (1.21.4)
Requirement already satisfied: pyre-extensions==0.0.23 in /opt/homebrew/Caskroom/miniforge/base/envs/pytorch1.10.0-py39/lib/python3.9/site-packages (from xformers) (0.0.23)
Requirement already satisfied: typing-inspect in /opt/homebrew/Caskroom/miniforge/base/envs/pytorch1.10.0-py39/lib/python3.9/site-packages (from pyre-extensions==0.0.23->xformers) (0.7.1)
Requirement already satisfied: typing-extensions in /opt/homebrew/Caskroom/miniforge/base/envs/pytorch1.10.0-py39/lib/python3.9/site-packages (from pyre-extensions==0.0.23->xformers) (4.0.0)
…
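Once the sdist above finishes building, a quick sanity check (hypothetical, not part of the log) is to query the installed version from the same environment via the standard library:

from importlib.metadata import version
print(version("xformers"))  # expect 0.0.7 given the sdist installed above
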
wandb@35204a5e071a:~$ cat /var/log/gorilla-filemeta.log
{"level":"INFO","time":"2020-08-16T15:53:06.63101146Z","info":{"program":"gorilla-filemeta","source":"gorilla-filemeta/main.go:85","pid":66},"data":{"config":{"MetadataStore":"mysql://wandb_local:wandb_local@127.0.0.1:3306/wandb_local","FileMetadataSource":"redis://127.0.0.1:6379/filemetadata","FileStore":"s3://Ev7+1jhUDpUTKYTVlef0Jw==:Y6Q73vfco376TtxhZHa0rindiGim2icTYP1WVZmaFsg=@127.0.0.1:9000/local-files","FileStoreIsProxied":true,"FileHost":"http://localhost:8080","DataFrameStore":"noop://","TaskQueue":"noop://","Onprem":true,"Tracer":"noop://","Statsd":{"Host":"","Port":0},"SentryDSN":"","SentryEnvironment":"onprem-local","PProfAddr":":8080","GCPProject":"dev~wandb-local","GoogleApplicationCredentials":"","AzureAccountKey":"","MySQL":{"DialTimeout":"0s","ReadTimeout":"0s","WriteTimeout":"0s","MaxIdleConns":0,"MaxOpenConns":0,"ConnMaxLifetime":"0s"}}},"message":"Running with config {MetadataStore:mysql://wandb_local:wandb_local@127.0.0.1:3306/wandb_lo
2020-08-15 01:10:54,597 DEBUG MainThread:141411 [wandb_config.py:_load_defaults():154] no defaults not found in config-defaults.yaml
2020-08-15 01:10:54,747 DEBUG MainThread:141411 [meta.py:_setup_code_git():49] probe for git information
2020-08-15 01:10:54,986 DEBUG MainThread:141411 [meta.py:setup():104] code probe starting
2020-08-15 01:10:54,987 DEBUG MainThread:141411 [meta.py:_setup_code_program():58] save program starting
2020-08-15 01:10:54,987 DEBUG MainThread:141411 [meta.py:_setup_code_program():60] save program starting: /home/jramapuram/sshfs/kanerva_plus_plus/./main.py
2020-08-15 01:10:54,999 DEBUG MainThread:141411 [meta.py:_setup_code_program():68] save program saved: /home/jramapuram/sshfs/kanerva_plus_plus/wandb/run-20200815_011054-2kcpmyzj/code/main.py
2020-08-15 01:10:55,003 DEBUG MainThread:141411 [meta.py:_setup_code_program():70] save program
2020-08-15 01:10:55,036 DEBUG MainThread:141411 [meta.py:setup():124] code probe done
2020-08-15 01:10:55,084 DEBUG MainThread:1
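The meta.py lines show wandb probing git and copying the launching script into the run directory (save program saved: .../code/main.py). A minimal sketch of opting into that code-saving behavior from the client side, assuming the standard wandb API; the project name is taken from the path in the log and is otherwise an assumption:

import wandb

# save_code=True asks wandb to upload the launching script,
# which is what meta.py logs above as "save program saved"
run = wandb.init(project="kanerva_plus_plus", save_code=True)
run.finish()
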
ramapur0@gpu010:~$ ~/samples/bin/x86_64/linux/release/p2pBandwidthLatencyTest
[P2P (Peer-to-Peer) GPU Bandwidth Latency Test]
Device: 0, TITAN Xp, pciBusID: 4, pciDeviceID: 0, pciDomainID:0
Device: 1, TITAN Xp, pciBusID: 5, pciDeviceID: 0, pciDomainID:0
Device: 2, TITAN Xp, pciBusID: 8, pciDeviceID: 0, pciDomainID:0
Device: 3, TITAN Xp, pciBusID: 9, pciDeviceID: 0, pciDomainID:0
Device: 4, TITAN Xp, pciBusID: 83, pciDeviceID: 0, pciDomainID:0
Device: 5, TITAN Xp, pciBusID: 84, pciDeviceID: 0, pciDomainID:0
Device: 6, TITAN Xp, pciBusID: 87, pciDeviceID: 0, pciDomainID:0
Device: 7, TITAN Xp, pciBusID: 88, pciDeviceID: 0, pciDomainID:0
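The listing enumerates eight TITAN Xp devices split across two PCIe groups (bus IDs 4 through 9 versus 83 through 88), so peer-to-peer access is typically only possible within each half. A small check of the same topology from PyTorch, assuming CUDA is available:

import torch

n = torch.cuda.device_count()
for i in range(n):
    for j in range(n):
        if i != j:
            ok = torch.cuda.can_device_access_peer(i, j)
            print(f"GPU {i} -> GPU {j}: P2P {'yes' if ok else 'no'}")
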
Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 131072.0
train-0[Epoch 1][1280768 samples][849.67 sec]: Loss: 7.0388 Top-1: 0.1027 Top-5: 0.4965
test-0[Epoch 1][50176 samples][17.05 sec]: Loss: 6.9965 Top-1: 0.1016 Top-5: 0.4604
/home/jramapuram/.venv3/envs/pytorch1.5-py37/lib/python3.7/site-packages/torch/optim/lr_scheduler.py:114: UserWarning: Seems like `optimizer.step()` has been overridden after learning rate scheduler initialization. Please, make sure to call `optimizer.step()` before `lr_scheduler.step()`. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
  "https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate", UserWarning)
Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 65536.0
Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 32768.0
Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 16384.0
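The overflow messages come from apex's dynamic loss scaler halving the scale after it sees non-finite gradients, and the UserWarning fires when the scheduler steps before the optimizer does. A minimal sketch of the ordering the warning asks for, assuming the apex amp API seen in these logs and that model, optimizer, scheduler, criterion, and loader are already set up (with amp.initialize applied to model and optimizer):

from apex import amp

for inputs, targets in loader:               # hypothetical DataLoader
    optimizer.zero_grad()
    loss = criterion(model(inputs), targets)
    with amp.scale_loss(loss, optimizer) as scaled_loss:
        scaled_loss.backward()               # apex rescales and may detect overflow
    optimizer.step()                         # step the optimizer first...
    scheduler.step()                         # ...then the LR scheduler, per the warning
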
(base) ➜ ~ cat ~/.doom.d/config.el
;;; $DOOMDIR/config.el -*- lexical-binding: t; -*-
;; Place your private configuration here! Remember, you do not need to run 'doom
;; sync' after modifying this file!
;; Some functionality uses this to identify you, e.g. GPG configuration, email
;; clients, file templates and snippets.
(setq user-full-name "Jason Ramapuram"
(base) ➜ personal git:(master) ✗ cat prelude-modules.el
;;; Uncomment the modules you'd like to use and restart Prelude afterwards
;; Emacs IRC client
(require 'prelude-erc)
;; (require 'prelude-ido) ;; Super charges Emacs completion for C-x C-f and more
;; (require 'prelude-ivy) ;; A mighty modern alternative to ido
(require 'prelude-helm) ;; Interface for narrowing and search
(require 'prelude-helm-everywhere) ;; Enable Helm everywhere
@jramapuram
jramapuram / crop_lambda.py
Created August 6, 2018 20:17
crop lambda
import os
import time
import argparse
import threading
import multiprocessing
import numpy as np
import matplotlib.pyplot as plt
from cffi import FFI
from queue import Queue, Empty
from multiprocessing import Pool
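The preview cuts off after the imports, so the gist's actual cropping logic is not shown. Purely as an illustration of what a "crop lambda" built on these imports might look like, here is a hypothetical center-crop closure in numpy (not the gist's code):

import numpy as np

def make_crop_lambda(crop_h, crop_w):
    """Return a closure that center-crops an HxWxC numpy image."""
    def crop(img):
        h, w = img.shape[:2]
        top = (h - crop_h) // 2
        left = (w - crop_w) // 2
        return img[top:top + crop_h, left:left + crop_w]
    return crop

crop_fn = make_crop_lambda(224, 224)
patch = crop_fn(np.zeros((256, 256, 3)))  # -> shape (224, 224, 3)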