Vardaan Pahuja (vardaan123)
import subprocess

def get_gpu_memory_map():
    """Get the current gpu usage.

    Returns
    -------
    usage: dict
        Keys are device ids as integers.
        Values are memory usage as integers in MB.
    """
    result = subprocess.check_output(
        ['nvidia-smi', '--query-gpu=memory.used',
         '--format=csv,nounits,noheader'],
        encoding='utf-8')
    # One line of output per GPU, in device-id order
    gpu_memory = [int(x) for x in result.strip().split('\n')]
    return dict(enumerate(gpu_memory))
@vardaan123
vardaan123 / download_drive_file.py
Created May 31, 2018 20:18
download drive file
import requests

def download_file_from_google_drive(id, destination):
    def get_confirm_token(response):
        for key, value in response.cookies.items():
            if key.startswith('download_warning'):
                return value
        return None
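The preview cuts off after the inner helper, but the helper itself can be exercised without any network access. A minimal sketch with a stand-in response object (`FakeResponse` is hypothetical; it models only the `cookies` attribute of a `requests.Response`):

```python
class FakeResponse:
    """Stand-in for requests.Response; models only the cookies attribute."""
    def __init__(self, cookies):
        self.cookies = cookies


def get_confirm_token(response):
    # Google Drive sets a 'download_warning...' cookie when a file is too
    # large for virus scanning and needs an explicit confirm step.
    for key, value in response.cookies.items():
        if key.startswith('download_warning'):
            return value
    return None


resp = FakeResponse({'download_warning_abc': 'token123', 'NID': 'x'})
print(get_confirm_token(resp))  # -> token123
```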
perl tools/tokenizer.perl -a -no-escape -l en -q < sample_sentences.txt > sample_sentences.atok
python translate.py -gpu 0 -model available_models/averaged-10-epoch.pt -src sample_sentences.atok -verbose -output sample_sentences.de.atok
@vardaan123
vardaan123 / gist:6028f0a5ea7b591870018aa81ea72fd6
Created May 20, 2018 19:57
Nvidia docker creation steps
docker pull nvcr.io/nvidia/pytorch:18.01-py3
nvidia-docker run -it --name sai nvcr.io/nvidia/pytorch:18.01-py3 /bin/bash

(Sanity-check the image by importing torch in a Python shell here.) Exit the container with Ctrl+D.

docker commit -m test -a sai sai nvcr.io/mila1234/pytorch_sai:1
docker push nvcr.io/mila1234/pytorch_sai:1
@vardaan123
vardaan123 / gist:5772abefb20194474dbad2c3aeb2e583
Last active May 12, 2018 02:17
Library Loading Tutorial

Source: #clusters by Olexa Bilaniuk

Important Environment Variables

  • PATH: Ordered, colon (:)-separated list of directories that the shell searches, in order, when looking up an executable to run.
  • LD_LIBRARY_PATH: Ordered, colon (:)-separated list of directories. It contributes to, but is not the only source of, the set of directories the dynamic linker searches when loading shared libraries.
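The ordered-search behavior of PATH can be illustrated in a few lines of Python (a sketch, not the shell's actual implementation; `which` here is a hypothetical helper, not the system utility):

```python
import os


def which(cmd, path=None):
    """Resolve cmd the way a shell does: scan PATH entries in order
    and return the first executable match, or None."""
    if path is None:
        path = os.environ.get('PATH', '')
    for directory in path.split(':'):
        candidate = os.path.join(directory, cmd)
        if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
            return candidate
    return None


print(which('sh'))  # e.g. /usr/bin/sh -- the first matching directory wins
```

Because the scan stops at the first hit, prepending a directory to PATH shadows every later copy of the same command, which is exactly why ordering matters for both variables.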

Execution of a Program in a Shell

  • If you execute a command cmd arg arg arg... in a shell, your shell will search each directory listed in PATH, in order, for an executable file named cmd and run the first match it finds (PATH is not consulted when cmd contains a slash or is a shell builtin).
http://www.sixarm.com/about/java-install-openjdk-ant-ivy-on-ubuntu-linux.html
https://freethreads.wordpress.com/2012/07/22/pylucene-installation-on-ubuntu-12-04/
http://apache.forsale.plus/lucene/pylucene/
https://lucene.apache.org/pylucene/install.html
@vardaan123
vardaan123 / gpu_profile.py
Created March 2, 2018 05:30 — forked from MInner/gpu_profile.py
A script to generate per-line GPU memory usage trace. For more meaningful results set `CUDA_LAUNCH_BLOCKING=1`.
import datetime
import linecache
import os

import pynvml3
import torch

print_tensor_sizes = True
last_tensor_sizes = set()
gpu_profile_fn = f'{datetime.datetime.now():%d-%b-%y-%H:%M:%S}-gpu_mem_prof.txt'
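The preview stops after the output-filename setup. The heart of the forked script is a `sys.settrace` hook that records, for each executed source line, the line itself together with the GPU memory in use at that point. A CPU-only sketch of that tracing pattern (the GPU query is omitted; `line_tracer` and `traced_function` are illustrative names, not from the gist):

```python
import linecache
import sys

trace_log = []


def line_tracer(frame, event, arg):
    # The interpreter calls this for every traced event; 'line' events
    # fire just before each source line executes.
    if event == 'line':
        filename = frame.f_globals.get('__file__', '<string>')
        lineno = frame.f_lineno
        src = linecache.getline(filename, lineno).rstrip()
        trace_log.append((lineno, src))
    return line_tracer  # keep tracing inside this frame


def traced_function():
    x = 1
    y = x + 1
    return y


sys.settrace(line_tracer)
traced_function()
sys.settrace(None)  # always unhook, or everything afterwards is traced too
```

The real script plugs an NVML memory query into the hook, which is why the gist's note about `CUDA_LAUNCH_BLOCKING=1` matters: without it, asynchronous CUDA calls would be attributed to the wrong line.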