Jason Ramapuram (jramapuram)
jramapuram / logs.sh
Created March 7, 2017 13:50
error_uchroma
Preparing...
Resolving dependencies...
Checking inter-conflicts...
Downloading...
Downloading python-wrapt-1.10.8-2-x86_64.pkg.tar.xz...
Downloading python2-pyparsing-2.1.10-2-any.pkg.tar.xz...
Downloading python2-six-1.10.0-3-any.pkg.tar.xz...
Downloading python2-packaging-16.8-2-any.pkg.tar.xz...
Downloading python2-appdirs-1.4.2-1-any.pkg.tar.xz...
Downloading python2-setuptools-1:34.3.1-1-any.pkg.tar.xz...
python3 setup.py install --root=/
/bin/sh: svnversion: command not found
/bin/sh: svnversion: command not found
_configtest.c:1:5: warning: conflicting types for built-in function ‘exp’
int exp (void);
^~~
_configtest.o: In function `main':
/tmp/easy_install-u8gdhnd1/numpy-1.12.1rc1/_configtest.c:6: undefined reference to `exp'
collect2: error: ld returned 1 exit status
_configtest.c:1:5: warning: conflicting types for built-in function ‘exp’
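The `_configtest.c` failure above is NumPy's build-time probe for libm's `exp()`: it declares `int exp (void);` (hence the conflicting-types warning against the `double exp(double)` builtin) and then fails at link time, meaning the toolchain could not resolve `exp` during the easy_install build. As a hedged sanity check (not part of the gist), the same symbol can be probed from Python via ctypes:

```python
# Minimal sketch (an assumption, not from the gist): confirm that libm's exp()
# loads and resolves, which is what numpy's _configtest.c link step probes.
import ctypes
import ctypes.util

libm_path = ctypes.util.find_library("m")   # e.g. "libm.so.6" on glibc systems
libm = ctypes.CDLL(libm_path or "libm.so.6")
libm.exp.restype = ctypes.c_double
libm.exp.argtypes = [ctypes.c_double]
print(libm.exp(1.0))                        # ~2.718281828 if linking works
```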
jramapuram / build_logs
Created March 12, 2017 22:39
build_logs
/bin/sh: svnversion: command not found
/bin/sh: svnversion: command not found
_configtest.c:1:5: warning: conflicting types for built-in function ‘exp’
int exp (void);
^~~
_configtest.o: In function `main':
/tmp/easy_install-pviwwl5b/numpy-1.12.1rc1/_configtest.c:6: undefined reference to `exp'
collect2: error: ld returned 1 exit status
_configtest.c:1:5: warning: conflicting types for built-in function ‘exp’
int exp (void);

python setup.py install
torch.__version__ = 0.5.0a0+0df84d7
Found CUDA_HOME = /opt/cuda
Found NVCC = /opt/cuda/bin/nvcc
Found CUDA_LIB = 9.1.85
Found CUDA_MAJOR = 9
running install
running bdist_egg
running egg_info
creating apex.egg-info
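The snippet above is the start of a source install of NVIDIA apex against CUDA 9.1 and a PyTorch 0.5.0a0 nightly. For context, here is a hedged sketch of apex's documented mixed-precision entry points (`amp.initialize` / `amp.scale_loss`); the tiny model and random batch are placeholders, not from the gist:

```python
# Hedged sketch of typical apex amp usage after an install like the one above.
# The linear model and random batch are placeholders for illustration only.
import torch
import torch.nn as nn
from apex import amp

model = nn.Linear(10, 2).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
# O1 patches eligible torch functions to fp16 and enables dynamic loss scaling.
model, optimizer = amp.initialize(model, optimizer, opt_level="O1")

x = torch.randn(4, 10).cuda()
y = torch.randint(0, 2, (4,)).cuda()
loss = nn.functional.cross_entropy(model(x), y)
with amp.scale_loss(loss, optimizer) as scaled_loss:
    scaled_loss.backward()      # backprop through the scaled loss
optimizer.step()
```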
jramapuram / sh
Created August 6, 2018 19:10
parallel_image_crop
```bash
(base) ➜ parallel_image_crop git:(master) cargo build --release
Updating registry `https://github.com/rust-lang/crates.io-index`
Downloading libc v0.2.43
Compiling version_check v0.1.4
Compiling nodrop v0.1.12
Compiling cfg-if v0.1.4
Compiling memoffset v0.2.1
Compiling scopeguard v0.3.3
Compiling num-traits v0.2.5
```
jramapuram / crop_lambda.py
Created August 6, 2018 20:17
crop lambda
import os
import time
import argparse
import threading
import multiprocessing
import numpy as np
import matplotlib.pyplot as plt
from cffi import FFI
from queue import Queue, Empty
from multiprocessing import Pool
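The imports above (cffi's `FFI` alongside `multiprocessing`) suggest crop_lambda.py drives the Rust `parallel_image_crop` cdylib from the earlier cargo build, but the preview cuts off before the FFI declarations. Purely as a hypothetical sketch, a cffi bridge to such a library might look like the following; the symbol name, signature, and library path are all assumptions:

```python
# Hypothetical sketch only: parallel_crop, its signature, and the .so path are
# assumptions; the gist preview ends before the real cdef/dlopen calls.
from cffi import FFI

ffi = FFI()
ffi.cdef("""
    void parallel_crop(const char **paths, size_t n,
                       unsigned int width, unsigned int height);
""")
lib = ffi.dlopen("target/release/libparallel_image_crop.so")

paths = [ffi.new("char[]", p.encode()) for p in ("a.png", "b.png")]
lib.parallel_crop(ffi.new("const char *[]", paths), len(paths), 64, 64)
```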

(base) ➜ personal git:(master) ✗ cat prelude-modules.el
;;; Uncomment the modules you'd like to use and restart Prelude afterwards
;; Emacs IRC client
(require 'prelude-erc)
;; (require 'prelude-ido) ;; Super charges Emacs completion for C-x C-f and more
;; (require 'prelude-ivy) ;; A mighty modern alternative to ido
(require 'prelude-helm) ;; Interface for narrowing and search
(require 'prelude-helm-everywhere) ;; Enable Helm everywhere

(base) ➜ ~ cat ~/.doom.d/config.el
;;; $DOOMDIR/config.el -*- lexical-binding: t; -*-
;; Place your private configuration here! Remember, you do not need to run 'doom
;; sync' after modifying this file!
;; Some functionality uses this to identify you, e.g. GPG configuration, email
;; clients, file templates and snippets.
(setq user-full-name "Jason Ramapuram"

Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 131072.0
train-0[Epoch 1][1280768 samples][849.67 sec]: Loss: 7.0388 Top-1: 0.1027 Top-5: 0.4965
test-0[Epoch 1][50176 samples][17.05 sec]: Loss: 6.9965 Top-1: 0.1016 Top-5: 0.4604
/home/jramapuram/.venv3/envs/pytorch1.5-py37/lib/python3.7/site-packages/torch/optim/lr_scheduler.py:114: UserWarning: Seems like `optimizer.step()` has been overridden after learning rate scheduler initialization. Please, make sure to call `optimizer.step()` before `lr_scheduler.step()`. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
  "https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate", UserWarning)
Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 65536.0
Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 32768.0
Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 16384.0
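The overflow messages show apex's dynamic loss scaler at work: whenever fp16 gradients overflow, the optimizer step is skipped and the loss scale is halved (131072 → 65536 → 32768 → 16384 above). A hedged sketch of that rule, not apex's actual implementation:

```python
# Hedged sketch of the dynamic loss-scaling rule behind the messages above;
# apex's real scaler differs in detail (growth interval, inf/nan checks).
def update_scale(scale, overflowed, steps_ok, growth_interval=2000):
    """Return (new_scale, new_steps_ok, apply_step)."""
    if overflowed:
        return scale / 2.0, 0, False         # skip the step, halve the scale
    steps_ok += 1
    if steps_ok >= growth_interval:          # periodically retry a larger scale
        return scale * 2.0, 0, True
    return scale, steps_ok, True

scale, ok = 131072.0, 0
for overflow in (True, True, True):          # three overflows, as in the log
    scale, ok, applied = update_scale(scale, overflow, ok)
    print(scale, applied)                    # 65536.0 / 32768.0 / 16384.0, all False
```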

ramapur0@gpu010:~$ ~/samples/bin/x86_64/linux/release/p2pBandwidthLatencyTest
[P2P (Peer-to-Peer) GPU Bandwidth Latency Test]
Device: 0, TITAN Xp, pciBusID: 4, pciDeviceID: 0, pciDomainID:0
Device: 1, TITAN Xp, pciBusID: 5, pciDeviceID: 0, pciDomainID:0
Device: 2, TITAN Xp, pciBusID: 8, pciDeviceID: 0, pciDomainID:0
Device: 3, TITAN Xp, pciBusID: 9, pciDeviceID: 0, pciDomainID:0
Device: 4, TITAN Xp, pciBusID: 83, pciDeviceID: 0, pciDomainID:0
Device: 5, TITAN Xp, pciBusID: 84, pciDeviceID: 0, pciDomainID:0
Device: 6, TITAN Xp, pciBusID: 87, pciDeviceID: 0, pciDomainID:0
Device: 7, TITAN Xp, pciBusID: 88, pciDeviceID: 0, pciDomainID:0
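Not part of the gist output, but the same peer-to-peer reachability for the eight TITAN Xp devices listed above can be queried from PyTorch (`torch.cuda.can_device_access_peer` mirrors CUDA's `cudaDeviceCanAccessPeer`):

```python
# Sketch: query P2P reachability between the GPUs enumerated above.
import torch

n = torch.cuda.device_count()
for i in range(n):
    peers = [j for j in range(n)
             if j != i and torch.cuda.can_device_access_peer(i, j)]
    print(f"device {i} ({torch.cuda.get_device_name(i)}) peers: {peers}")
```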