Pankesh Bamotra (pbamotra)

🎯
Focusing
View GitHub Profile
@pbamotra
pbamotra / dali-1.4.py
Created July 3, 2019 21:10
DALI Post-1.4
import pandas as pd
from os import listdir
from os.path import isfile, join

images_directory = './flower_data/flower_data_flat'
# read names of all image files
image_files = [f for f in listdir(images_directory) if isfile(join(images_directory, f))]
# we create a data frame with the image names and dummy labels - label_1, label_2
data = pd.DataFrame(list(zip(image_files,
                             list(range(len(image_files))),
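The preview cuts off mid-expression; below is a minimal completion sketch, assuming the goal is a frame of image names plus two dummy label columns written out as a whitespace-separated file list. The column names and file_list.txt path are assumptions, not taken from the gist.

# Hedged completion sketch -- not the gist's hidden lines.
import pandas as pd
from os import listdir
from os.path import isfile, join

images_directory = './flower_data/flower_data_flat'
image_files = [f for f in listdir(images_directory) if isfile(join(images_directory, f))]

data = pd.DataFrame(list(zip(image_files,
                             list(range(len(image_files))),    # dummy label_1
                             list(range(len(image_files))))),  # dummy label_2
                    columns=['image', 'label_1', 'label_2'])

# Persist as a "name label_1 label_2" list that a DALI file reader could consume (placeholder path).
data.to_csv(join(images_directory, 'file_list.txt'), sep=' ', header=False, index=False)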
@pbamotra
pbamotra / dali-1.3.sh
Created July 3, 2019 21:09
DALI Post-1.3
$ wget -cq https://s3.amazonaws.com/content.udacity-data.com/courses/nd188/flower_data.zip \
&& unzip -qq flower_data.zip \
&& mkdir -p ./flower_data/flower_data_flat \
&& find ./flower_data/train -mindepth 2 -type f -exec mv -t ./flower_data/flower_data_flat -i '{}' +
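A quick sanity check after flattening (my addition, not part of the gist): confirm the images actually landed in the flat directory.

$ ls ./flower_data/flower_data_flat | wc -l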
@pbamotra
pbamotra / dali-1.2.sh
Created July 3, 2019 21:07
DALI Post-1.2
# Find out the cuda version so that we install appropriate DALI binaries
# Find installation instructions at
# https://github.com/NVIDIA/DALI#installing-prebuilt-dali-packages
$ nvcc --version
# sample output
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2018 NVIDIA Corporation
Built on Sat_Aug_25_21:08:01_CDT_2018
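Once the CUDA release is known, DALI is installed from NVIDIA's extra package index. The exact wheel name has changed across DALI releases, so the package names below are indicative rather than authoritative; follow the linked installation page for the current command.

# For CUDA 10.x around the time of this post (indicative):
$ pip install --extra-index-url https://developer.download.nvidia.com/compute/redist nvidia-dali
# Later releases ship per-CUDA wheels instead, e.g. nvidia-dali-cuda110 / nvidia-dali-cuda120.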
@pbamotra
pbamotra / dali-1.1.py
Created July 3, 2019 21:00
DALI-Post-1.1
from torchvision import transforms

def get_image_transforms() -> transforms.Compose:
    """
    These data-augmentation transforms are a bottleneck: every operation runs
    on the CPU, and only afterwards are the resulting tensors copied to the GPU.
    """
    return transforms.Compose([
        transforms.RandomResizedCrop(224),  # RandomSizedCrop in the original gist; since renamed
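The preview ends after the crop; a typical full pipeline of this kind is sketched below. The horizontal flip and the ImageNet normalization constants are a common (assumed) choice, not necessarily the gist's exact values.

from torchvision import transforms

def get_image_transforms() -> transforms.Compose:
    # Assumed completion of the truncated pipeline above.
    return transforms.Compose([
        transforms.RandomResizedCrop(224),
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet statistics (assumed)
                             std=[0.229, 0.224, 0.225]),
    ])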
@pbamotra
pbamotra / sigsoftmax.py
Created April 10, 2019 04:08
Pytorch implementation of sigsoftmax - https://arxiv.org/pdf/1805.10829.pdf
import torch

def logsigsoftmax(logits):
    """
    Computes log of sigsoftmax from the paper - https://arxiv.org/pdf/1805.10829.pdf
    """
    max_values = torch.max(logits, 1, keepdim=True)[0]
    exp_logits_sigmoided = torch.exp(logits - max_values) * torch.sigmoid(logits)
    sum_exp_logits_sigmoided = exp_logits_sigmoided.sum(1, keepdim=True)
    log_probs = logits - max_values + torch.log(torch.sigmoid(logits)) - torch.log(sum_exp_logits_sigmoided)
    return log_probs
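A short usage check (my addition): the exponentiated output should form valid per-row probability distributions, and it can be compared against ordinary log-softmax.

import torch
import torch.nn.functional as F

logits = torch.randn(4, 10)
log_probs = logsigsoftmax(logits)

# Rows should exponentiate to probability distributions.
assert torch.allclose(log_probs.exp().sum(dim=1), torch.ones(4), atol=1e-5)

# Ordinary log-softmax of the same logits, for comparison.
baseline = F.log_softmax(logits, dim=1)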
@pbamotra
pbamotra / pib_releases_today.sh
Last active February 28, 2019 06:54
Press releases from Press Information Bureau, India on Mac terminal
#!/bin/bash
# Mac instructions:
# 0. Install brew -- https://brew.sh/
# ```/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"```
# 1. Install pup -- https://github.com/ericchiang/pup
# ```$ brew install https://raw.githubusercontent.com/EricChiang/pup/master/pup.rb```
# 2. Install jq -- https://stedolan.github.io/jq/
# ```$ brew install jq```
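The preview stops at the prerequisites; the body of the script presumably combines pup and jq in the usual curl-pipe fashion. A generic sketch of that pattern follows; the URL and CSS selector are placeholders, not taken from the gist.

# Generic pup + jq sketch (placeholder URL and selector, not the gist's).
curl -sS 'https://example.org/press-releases' \
  | pup 'a.title json{}' \
  | jq -r '.[].text'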
@pbamotra
pbamotra / Install NVIDIA Driver and CUDA.md
Created February 14, 2018 20:10 — forked from wangruohui/Install NVIDIA Driver and CUDA.md
Install NVIDIA Driver and CUDA on Ubuntu / CentOS / Fedora Linux OS
@pbamotra
pbamotra / gtrends.sh
Last active January 23, 2018 00:30
Find Google trends business entities from command line
# jq is required; the sed '1d' strips the anti-JSON-hijacking prefix line that Google prepends to the response
brew install jq && \
  curl -sS 'https://trends.google.com/trends/api/stories/latest?hl=en-US&tz=480&cat=b&fi=15&fs=15&geo=US&ri=300&rs=15&sort=0' \
    -H 'dnt: 1' -H 'accept-encoding: gzip, deflate, br' -H 'accept-language: en-US,en;q=0.9' \
    -H 'accept: application/json, text/plain, */*' -H 'referer: https://trends.google.com/trends/home/b/US' \
    -H 'authority: trends.google.com' --compressed > out.json && \
  sed '1d' out.json > out2.json && \
  jq '.storySummaries.trendingStories[].entityNames | join(", ")' out2.json | grep 'NYSE\|NASDAQ' | jq "."
@pbamotra
pbamotra / scrape_npr.py
Created December 6, 2017 23:14
Download <date><topic><news title> for NPR from Oct 29, 2019 through present
# -*- coding: utf-8 -*-
import os
import sys
import grequests
import numpy as np
import pandas as pd
from lxml import html
from time import sleep
from datetime import date, timedelta
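Only the imports survive in the preview; the pattern they suggest is batch-fetching one archive page per date with grequests and pulling titles out with lxml. The sketch below follows that pattern; the URL template, XPath, and start date are placeholders, not taken from the gist.

# Hedged sketch of the fetch-and-parse pattern implied by the imports.
import grequests                       # import before requests-based code, as in the gist
import pandas as pd
from lxml import html
from datetime import date, timedelta

ARCHIVE_URL = 'https://www.npr.org/sections/news/archive?date={d:%m-%d-%Y}'  # placeholder template
START = date(2017, 10, 29)                                                   # placeholder start date

days = [START + timedelta(days=i) for i in range((date.today() - START).days + 1)]
responses = grequests.map((grequests.get(ARCHIVE_URL.format(d=d)) for d in days), size=8)

rows = []
for d, resp in zip(days, responses):
    if resp is None or resp.status_code != 200:
        continue
    tree = html.fromstring(resp.content)
    for title in tree.xpath('//h2[@class="title"]/a/text()'):  # placeholder XPath
        rows.append({'date': d, 'title': title.strip()})

pd.DataFrame(rows).to_csv('npr_headlines.csv', index=False)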
@pbamotra
pbamotra / iclr18-ranking-by-replies.sh
Last active November 29, 2017 21:22
List of ICLR 2018 submissions ranked by number of replies
# requirements: curl, jq, python, pandas
# brew install jq
# pip install pandas
# prints top 50 papers
curl -sS "https://openreview.net/notes?invitation=ICLR.cc%2F2018%2FConference%2F-%2FBlind_Submission&offset=0&limit=5000" \
  | jq -r '.notes[] | .content.title + "\t" + (.replyCount|tostring)' > subset.tsv \
  && python -c 'from pprint import pprint; import pandas as pd; d=pd.read_csv("subset.tsv", header=None, delimiter="\t"); d.columns=["title", "replies"]; pprint(d.sort_values(by="replies", ascending=False).head(50).title.tolist());' \
  && rm -f subset.tsv
# number of papers in top 50 on adversarial networks
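The preview cuts off at the comment above; one way to get that count (my sketch, not the gist's hidden line) is to rank the titles again and grep the top 50.

# Hedged sketch: count how many of the top-50 titles mention "adversarial".
curl -sS "https://openreview.net/notes?invitation=ICLR.cc%2F2018%2FConference%2F-%2FBlind_Submission&offset=0&limit=5000" \
  | jq -r '.notes[] | [.content.title, (.replyCount|tostring)] | @tsv' \
  | sort -t$'\t' -k2,2nr | head -50 | grep -ci 'adversarial'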