@AruniRC / histogram_spec_map.py
Created Jun 19, 2020
Histogram specification demo code
import os
import os.path as osp
import sys
import pickle
import json

import numpy as np
import matplotlib
matplotlib.use('Agg')  # headless backend, so figures can be saved without a display
from matplotlib import pyplot as plt
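
The preview shows only the imports; below is a minimal sketch of histogram specification itself (mapping a source image's intensity histogram onto a reference image's), assuming 8-bit grayscale inputs. The function and variable names are illustrative, not taken from the gist.

def match_histogram(source, reference):
    """Remap `source` intensities so its histogram matches `reference`.
    Both inputs are assumed to be uint8 grayscale numpy arrays."""
    # Empirical CDFs of both images over the 256 intensity levels.
    src_hist, _ = np.histogram(source.ravel(), bins=256, range=(0, 256))
    ref_hist, _ = np.histogram(reference.ravel(), bins=256, range=(0, 256))
    src_cdf = np.cumsum(src_hist) / source.size
    ref_cdf = np.cumsum(ref_hist) / reference.size
    # For each source level, pick the reference level with the closest CDF value.
    mapping = np.searchsorted(ref_cdf, src_cdf).clip(0, 255).astype(np.uint8)
    return mapping[source]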
@AruniRC / bashrc_renyi
Last active Apr 19, 2020
Bashrc renyi server
force_color_prompt=yes

if [ -n "$force_color_prompt" ]; then
    if [ -x /usr/bin/tput ] && tput setaf 1 >&/dev/null; then
        # We have color support; assume it's compliant with Ecma-48
        # (ISO/IEC-6429). (Lack of such support is extremely rare, and such
        # a case would tend to support setf rather than setaf.)
        color_prompt=yes
    else
        color_prompt=
    fi
fi
@AruniRC / bash_profile
Last active Aug 14, 2020
Bashrc macbook home
# Colored prompt: user@host:cwd, with the prompt character on its own line
export PS1="\[\033[36m\]\u\[\033[m\]@\[\033[32m\]\h:\[\033[33;1m\]\w\[\033[m\]\n\$ "
# Enable colors in ls output and set the color scheme
export CLICOLOR=1
export LSCOLORS=ExFxBxDxCxegedabagacad
# User defined aliases
alias ls='ls -GFh'
# Mounting remote drives (create folder manually first under ~/Mount/remote-name)
alias mount-fisher='sshfs arunirc@fisher.cs.umass.edu:/ ~/Mount/fisher -o volname=fisher'
@AruniRC / draw_networkx_graph.py
Created Jul 30, 2019
Adding edge thickness and node colors in NetworkX graph plotting
# saliency
sal = cluster_saliency[cluster_label]  # (grad-norm, grad-max) tuple per cluster
grad_max = sal[1] / max(sal[1])        # normalize grad-max saliency to [0, 1]
feat_vertices = features[cluster_ids, :]
adj_mat = get_adjmat(feat_vertices, is_norm_adj=False)
adj_mat_normed = get_adjmat(feat_vertices, is_norm_adj=True)
# create networkx graph from adjacency matrix
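
The preview cuts off before the plotting calls. Here is a self-contained sketch of the edge-thickness and node-color part the gist title refers to, with random stand-ins for the snippet's adjacency matrix and saliency values; the layout, scaling factor, and colormap are illustrative choices, not the gist's.

import networkx as nx
import numpy as np
from matplotlib import pyplot as plt

# Stand-ins for adj_mat_normed and grad_max from the snippet above.
rng = np.random.default_rng(0)
adj = rng.random((8, 8))
adj = (adj + adj.T) / 2
np.fill_diagonal(adj, 0)
node_saliency = rng.random(8)

G = nx.from_numpy_array(adj)           # edge weights come from matrix entries
pos = nx.spring_layout(G, seed=0)
# Edge thickness proportional to edge weight, node color from saliency.
edge_widths = [3.0 * G[u][v]['weight'] for u, v in G.edges()]
nx.draw(G, pos, node_color=node_saliency, cmap=plt.cm.viridis,
        width=edge_widths, with_labels=False)
plt.savefig('cluster_graph.png', dpi=150)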
@AruniRC / context_heads_poincare.py
Last active Apr 24, 2019
Allow shifts and scales of the Poincaré distance, which is usually defined on the unit disc
import numpy as np
import itertools
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.nn.init as init
from torch.autograd import Variable
from torch.autograd import Function
from scipy.spatial.distance import pdist
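
Only the imports survive the preview. A minimal sketch of a shifted and scaled Poincaré distance, per the gist description, might look like the following; the `scale` and `shift` parameters are assumptions here, not the gist's actual interface.

def poincare_distance(u, v, scale=1.0, shift=0.0, eps=1e-6):
    """Poincare distance between points on the unit disc,
    with an affine rescaling: scale * d(u, v) + shift."""
    sq_dist = torch.sum((u - v) ** 2, dim=-1)
    denom_u = torch.clamp(1.0 - torch.sum(u ** 2, dim=-1), min=eps)
    denom_v = torch.clamp(1.0 - torch.sum(v ** 2, dim=-1), min=eps)
    x = 1.0 + 2.0 * sq_dist / (denom_u * denom_v)
    d = torch.log(x + torch.sqrt(x * x - 1.0))  # arcosh(x), valid since x >= 1
    return scale * d + shift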
@AruniRC / democratic_pooling.py
def forward(self, x):
    x = self.features(x)
    [bs, ch, h, w] = x.shape
    x = x.view(bs, ch, -1).transpose(2, 1)  # (bs, h*w, ch) feature set
    # x.register_hook(self.save_grad('x'))
    # Gram matrix NxN for the N input features "x"
    K = x.bmm(x.transpose(2, 1))
    K = x * x  # <-- is this correct for 1st-order features?
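
The forward pass is cut off at the kernel-matrix step. For reference, here is a self-contained sketch of the democratic aggregation step (Murray et al.) that a file named democratic_pooling.py presumably builds toward: a Sinkhorn-style iteration that reweights features so each contributes roughly equally. The iteration count and clamping values are illustrative assumptions, not necessarily this gist's continuation.

import torch

def democratic_pooling(x, n_iter=10, eps=1e-6):
    """Democratic aggregation over x of shape (bs, N, ch): solve for
    per-feature weights alpha such that alpha_i * (K alpha)_i is constant,
    then return the weighted sum of features, shape (bs, ch)."""
    K = x.bmm(x.transpose(2, 1)).clamp(min=0)      # (bs, N, N) kernel
    alpha = x.new_ones(x.shape[0], x.shape[1], 1)  # per-feature weights
    for _ in range(n_iter):
        # Sinkhorn-style update driving alpha_i * (K alpha)_i toward a constant.
        contrib = (alpha * K.bmm(alpha)).clamp(min=eps)
        alpha = alpha / contrib.sqrt()
    return (alpha * x).sum(dim=1)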
@AruniRC / Vim_netrw_howto.md
Last active Feb 27, 2019
HOWTO and quick links to help me learn (and remember) Vim and Netrw as part of my workflow
@AruniRC / netrw.txt
Created Feb 27, 2019 — forked from danidiaz/netrw.txt
Vim's netrw commands.
---       -----------------                                    ----
Map       Quick Explanation                                    Link
---       -----------------                                    ----
<F1>      Causes Netrw to issue help
<cr>      Netrw will enter the directory or read the file      |netrw-cr|
<del>     Netrw will attempt to remove the file/directory      |netrw-del|
<c-h>     Edit file hiding list                                |netrw-ctrl-h|
<c-l>     Causes Netrw to refresh the directory listing        |netrw-ctrl-l|
<c-r>     Browse using a gvim server                           |netrw-ctrl-r|
<c-tab>   Shrink/expand a netrw/explore window                 |netrw-c-tab|
@AruniRC / install_env_gypsum.md
Last active Jul 4, 2019
Setup conda environment for Detectron with PyTorch on Gypsum

This walkthrough describes setting up the Detectron (third-party PyTorch implementation) and Graph Convolutional Network (GCN) repos on the UMass cluster Gypsum. Most commands are specific to that setting.

Gypsum environment

$ module list
Currently Loaded Modulefiles:
  1) slurm/16.05.8                         3) hdf5/1.6.10                           5) gcc5/5.4.0                            7) cudnn/5.1
  2) openmpi/gcc/64/1.10.1                 4) fftw2/openmpi/open64/64/float/2.1.5   6) cuda80/toolkit/8.0.61                 8) hdf5_18/1.8.17
@AruniRC / distill_loss.py
Created Sep 13, 2018
PyTorch distillation with soft targets
if self.distill:
    # Soft targets (teacher logits) arrive as the third element of the batch.
    soft_target = Variable(data[2].cuda())
    # Cross-entropy between temperature-softened teacher and student outputs.
    distill_loss = torch.mean(torch.sum(
        -nn.Softmax(dim=1)(soft_target / self.T)
        * nn.LogSoftmax(dim=1)(out_data / self.T), 1))
    loss += self.lbda * distill_loss
    self.writer.add_scalar('train/distill_loss', distill_loss, i_acc + i + 1)
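
For context, here is a self-contained version of the same soft-target distillation loss, assuming teacher and student logits as inputs. The T**2 factor from Hinton et al. is included here because it is the common convention for keeping gradient magnitudes comparable across temperatures; the snippet above omits it.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """Cross-entropy between temperature-softened teacher and student
    distributions, scaled by T**2 (Hinton et al. convention)."""
    soft_teacher = F.softmax(teacher_logits / T, dim=1)
    log_student = F.log_softmax(student_logits / T, dim=1)
    return -(soft_teacher * log_student).sum(dim=1).mean() * T * T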