@AruniRC
AruniRC / histogram_spec_map.py
Created June 19, 2020 17:19
Histogram specification demo code
import os
import os.path as osp
import sys
import json
import pickle

import numpy as np
import matplotlib
matplotlib.use('Agg')  # non-interactive backend, for headless servers
from matplotlib import pyplot as plt
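The preview stops at the imports, so the mapping itself is not shown. As a minimal sketch of histogram specification by CDF matching, which is what such a demo typically implements (`match_histogram` and its argument names are hypothetical, not taken from the gist):

import numpy as np

def match_histogram(source, template):
    """Map `source` intensities so their histogram matches `template`'s (CDF matching)."""
    src_vals, src_counts = np.unique(source.ravel(), return_counts=True)
    tmpl_vals, tmpl_counts = np.unique(template.ravel(), return_counts=True)

    # Empirical CDFs of both images, normalized to [0, 1]
    src_cdf = np.cumsum(src_counts).astype(np.float64) / source.size
    tmpl_cdf = np.cumsum(tmpl_counts).astype(np.float64) / template.size

    # For each source intensity, pick the template intensity at the same CDF quantile
    mapped_vals = np.interp(src_cdf, tmpl_cdf, tmpl_vals)
    idx = np.searchsorted(src_vals, source.ravel())
    return mapped_vals[idx].reshape(source.shape)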
@AruniRC
AruniRC / bashrc_renyi
Last active April 19, 2020 16:52
Bashrc renyi server
force_color_prompt=yes

if [ -n "$force_color_prompt" ]; then
    if [ -x /usr/bin/tput ] && tput setaf 1 >&/dev/null; then
        # We have color support; assume it's compliant with Ecma-48
        # (ISO/IEC-6429). (Lack of such support is extremely rare, and such
        # a case would tend to support setf rather than setaf.)
        color_prompt=yes
    else
        color_prompt=
    fi
fi
@AruniRC
AruniRC / bash_profile
Last active August 14, 2020 22:46
Bashrc macbook home
export PS1="\[\033[36m\]\u\[\033[m\]@\[\033[32m\]\h:\[\033[33;1m\]\w\[\033[m\]\n\$ "
export CLICOLOR=1
export LSCOLORS=ExFxBxDxCxegedabagacad
# User defined aliases
alias ls='ls -GFh'
# Mounting remote drives (create folder manually first under ~/Mount/remote-name)
alias mount-fisher='sshfs arunirc@fisher.cs.umass.edu:/ ~/Mount/fisher -o volname=fisher'
@AruniRC
AruniRC / draw_networkx_graph.py
Created July 30, 2019 23:16
Adding edge thickness and node colors in NetworkX graph plotting
# Saliency for this cluster: a (grad-norm, grad-max) pair per node
sal = cluster_saliency[cluster_label]
grad_max = sal[1] / max(sal[1])  # normalize grad-max values to [0, 1]
# Node features and adjacency matrices (raw and normalized) for the cluster
feat_vertices = features[cluster_ids, :]
adj_mat = get_adjmat(feat_vertices, is_norm_adj=False)
adj_mat_normed = get_adjmat(feat_vertices, is_norm_adj=True)
# create networkx graph from adjacency matrix (see the sketch below)
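The preview cuts off at the comment above. A minimal sketch of the drawing step it refers to, assuming `adj_mat_normed` and `grad_max` from the snippet; the layout and scaling choices here are illustrative, not from the gist:

import networkx as nx
import matplotlib.pyplot as plt

# Build the graph from the dense adjacency matrix (networkx >= 2.x)
G = nx.from_numpy_array(adj_mat_normed)

# Edge widths proportional to edge weight; node colors from per-node saliency
edge_widths = [3.0 * G[u][v]['weight'] for u, v in G.edges()]
node_colors = grad_max  # one scalar per node, mapped through a colormap

pos = nx.spring_layout(G, seed=0)
nx.draw_networkx_nodes(G, pos, node_color=node_colors, cmap=plt.cm.viridis)
nx.draw_networkx_edges(G, pos, width=edge_widths)
plt.axis('off')
plt.savefig('cluster_graph.png', bbox_inches='tight')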
@AruniRC
AruniRC / context_heads_poincare.py
Last active April 24, 2019 19:07
Allow shifts and scales of the Poincare distance, which is usually defined on the unit disc
import numpy as np
import itertools
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.nn.init as init
from torch.autograd import Variable
from torch.autograd import Function
from scipy.spatial.distance import pdist
def forward(self, x):
    x = self.features(x)
    [bs, ch, h, w] = x.shape
    # Flatten spatial dims: (bs, ch, h*w) -> (bs, h*w, ch)
    x = x.view(bs, ch, -1).transpose(2, 1)
    # x.register_hook(self.save_grad('x'))
    # Gram Matrix NxN for the N input features "x"
    K = x.bmm(x.transpose(2, 1))
    K = x * x  # <-- elementwise square; is this correct for 1st-order features?
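The gist description mentions shifting and scaling the Poincare distance, which the preview never reaches. A minimal sketch of that idea under the standard Poincare-ball formula; the names (`poincare_distance`, `ScaledPoincareDistance`) are hypothetical, not the gist's own:

import torch
import torch.nn as nn

def poincare_distance(u, v, eps=1e-5):
    # d(u, v) = arcosh(1 + 2 ||u - v||^2 / ((1 - ||u||^2)(1 - ||v||^2)))
    sq_dist = torch.sum((u - v) ** 2, dim=-1)
    denom_u = torch.clamp(1 - torch.sum(u ** 2, dim=-1), min=eps)
    denom_v = torch.clamp(1 - torch.sum(v ** 2, dim=-1), min=eps)
    x = 1 + 2 * sq_dist / (denom_u * denom_v)
    # arcosh(x) = log(x + sqrt(x^2 - 1)); eps keeps sqrt stable at x = 1
    return torch.log(x + torch.sqrt(x * x - 1 + eps))

class ScaledPoincareDistance(nn.Module):
    """Learnable affine shift/scale of the Poincare distance."""
    def __init__(self):
        super().__init__()
        self.scale = nn.Parameter(torch.ones(1))
        self.shift = nn.Parameter(torch.zeros(1))

    def forward(self, u, v):
        return self.scale * poincare_distance(u, v) + self.shift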
@AruniRC
AruniRC / Vim_netrw_howto.md
Last active February 27, 2019 23:04
HOWTO and quick links to help me learn (and remember) Vim and Netrw as part of my workflow
@AruniRC
AruniRC / netrw.txt
Created February 27, 2019 03:06 — forked from danidiaz/netrw.txt
Vim's netrw commands.
---      -----------------                                     ----
Map      Quick Explanation                                     Link
---      -----------------                                     ----
<F1>     Causes Netrw to issue help
<cr>     Netrw will enter the directory or read the file       |netrw-cr|
<del>    Netrw will attempt to remove the file/directory       |netrw-del|
<c-h>    Edit file hiding list                                 |netrw-ctrl-h|
<c-l>    Causes Netrw to refresh the directory listing         |netrw-ctrl-l|
<c-r>    Browse using a gvim server                            |netrw-ctrl-r|
<c-tab>  Shrink/expand a netrw/explore window                  |netrw-c-tab|
@AruniRC
AruniRC / install_env_gypsum.md
Last active July 4, 2019 20:31
Setup conda environment for Detectron with PyTorch on Gypsum

This walkthrough describes setting up the Detectron (a third-party PyTorch implementation) and Graph Conv Net (GCN) repos on Gypsum, the UMass cluster. Most commands are specific to that setting.

Gypsum environment

$ module list
Currently Loaded Modulefiles:
  1) slurm/16.05.8           3) hdf5/1.6.10                            5) gcc5/5.4.0               7) cudnn/5.1
  2) openmpi/gcc/64/1.10.1   4) fftw2/openmpi/open64/64/float/2.1.5    6) cuda80/toolkit/8.0.61    8) hdf5_18/1.8.17
@AruniRC
AruniRC / distill_loss.py
Created September 13, 2018 02:43
PyTorch distillation with soft targets
if self.distill:
    # Teacher logits for this batch are the soft targets
    soft_target = Variable(data[2].cuda())
    # Soft cross-entropy between temperature-scaled teacher and student distributions
    distill_loss = torch.mean(torch.sum(
        -nn.Softmax(dim=1)(soft_target / self.T)
        * nn.LogSoftmax(dim=1)(out_data / self.T), dim=1))
    loss += self.lbda * distill_loss
    self.writer.add_scalar('train/distill_loss', distill_loss, i_acc + i + 1)
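For reference, Hinton et al. (2015) additionally scale the soft-target term by T^2 so its gradient magnitude stays comparable to the hard-label loss across temperatures; the snippet above omits that factor. A functional sketch of the T^2 variant (names are illustrative, not from the gist):

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T):
    """Soft cross-entropy between temperature-scaled distributions, scaled by T**2."""
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    return -(p_teacher * log_p_student).sum(dim=1).mean() * (T ** 2)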