Gaurav Menghani reddragon

🤖
Too much to do, too little time.
reddragon / libevent-demo.cpp
Last active November 24, 2023 10:32
A demo for libevent usage
#include <iostream>
#include <cstdio>
#include <vector>
#include <event2/event.h>
#include <glog/logging.h>
#include <cassert>
#include <string>
#include <cstring>
#include <sys/socket.h>
#include <netinet/in.h>
reddragon / Wordcount.java
Created January 29, 2012 01:22
Hadoop Word Count Example
import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
reddragon / logs.sh
Created March 6, 2012 06:41
Use it to find the total CPU time taken by tasks on Hadoop, after making the change.
#!/bin/bash
LOGS_PATH="logs/userlogs/"
JOB=$(ls "$LOGS_PATH")
JOB_PATH="$LOGS_PATH$JOB"
# List the task directories, skipping the job-level files.
RET=$(ls -1 --hide='job*' "$JOB_PATH"/)
rm -f cputime.out
touch cputime.out
for TASK in $RET
do
reddragon / transfer-learning.md
Created April 2, 2018 00:42
Transfer Learning Papers

How transferable are features in deep neural networks? - Yosinski et al.

  • Transfer Learning

    • Train a base network, then take that network and tweak it to work on a new target task.
    • Notes from CS231N.
  • Tries to figure out how much information can be transferred between networks trained on different datasets.

  • Quantifies transferability layer by layer.

  • Hypothesis:

    • The first few layers are general (Gabor-filter-like features) and adapt well to new tasks.

Results on MNIST

Feed-forward model with two hidden layers (300 and 60 units).

l2_lambda   Accuracy@1 after 80k iters (two runs)
0.00        98.15, 98.04
0.01        98.31, 98.19
0.02        98.19, 98.15
0.04        97.93, 97.92
import torch
import torchvision
import torch.nn as nn
import torch.nn.functional as F
import torchvision.transforms as transforms
import matplotlib.pyplot as plt
import numpy as np
import torch.optim as optim
from torch.autograd import Variable
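
A minimal PyTorch sketch of the layer-transfer idea from the notes above: copy the first layer of a trained base model into a fresh target model and freeze it. The MLP sizes follow the (300, 60) setup in the results; which layers to copy and freeze is illustrative, not taken from the gist.

import torch.nn as nn
import torch.nn.functional as F

class MLP(nn.Module):
    # Two hidden layers (300, 60), as in the MNIST experiment above.
    def __init__(self):
        super(MLP, self).__init__()
        self.fc1 = nn.Linear(784, 300)
        self.fc2 = nn.Linear(300, 60)
        self.fc3 = nn.Linear(60, 10)

    def forward(self, x):
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        return self.fc3(x)

base, target = MLP(), MLP()
# Transfer the first (most general) layer and freeze it, per the hypothesis.
target.fc1.load_state_dict(base.fc1.state_dict())
for p in target.fc1.parameters():
    p.requires_grad = False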
reddragon / struct.cpp
Last active September 12, 2017 18:30
Set struct members inline
#include <iostream>
using namespace std;
struct Foo {
  int a;
  double b;
};
int main() {
  // Designated initializers set the members inline (the values here are illustrative).
  const Foo f = { .a = 1, .b = 2.5 };
  cout << f.a << " " << f.b << endl;
  return 0;
}
reddragon / frozen-lake-nn.py
Created June 12, 2017 23:11
Frozen Lake NN Implementation
import gym
import logging
import sys
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import cPickle as pickle
import os
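
The preview is cut off at the imports. As a rough sketch of what the title describes, a network that approximates Q-values for FrozenLake (16 discrete states, 4 actions; the single linear layer is an assumption, not the gist's architecture):

import torch
import torch.nn as nn

class QNet(nn.Module):
    # Maps a one-hot encoded state to one Q-value per action.
    def __init__(self, n_states=16, n_actions=4):
        super(QNet, self).__init__()
        self.fc = nn.Linear(n_states, n_actions)

    def forward(self, state_one_hot):
        return self.fc(state_one_hot)

q_net = QNet()
state = torch.zeros(1, 16)
state[0, 3] = 1.0                           # one-hot encoding of state 3
action = q_net(state).argmax(dim=1).item()  # greedy action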
reddragon / frozen-lake-iterative.py
Created June 11, 2017 17:12
Frozen Lake solved using the Q-Learning algorithm with an actual Q-value table
import gym
import logging
import sys
import numpy as np
from gym import wrappers
SEED = 0
NUM_EPISODES = 3000
# Hyperparams
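
The preview stops at the hyperparameters. The core of tabular Q-learning that the description names is the Bellman update below (alpha, gamma, and epsilon are assumed values, not the gist's hyperparameters):

import numpy as np

n_states, n_actions = 16, 4             # FrozenLake sizes
Q = np.zeros((n_states, n_actions))     # the actual Q-value table
alpha, gamma, epsilon = 0.1, 0.99, 0.1  # assumed hyperparameters

def choose_action(state):
    # Epsilon-greedy: explore with probability epsilon, otherwise exploit.
    if np.random.rand() < epsilon:
        return np.random.randint(n_actions)
    return int(np.argmax(Q[state]))

def update(state, action, reward, next_state):
    # Standard Q-learning update toward the bootstrapped target.
    target = reward + gamma * np.max(Q[next_state])
    Q[state, action] += alpha * (target - Q[state, action])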
reddragon / script.py
Created May 29, 2017 07:19
Get all of those graduation pics
import os
import sys
import urllib2
def normalize_path(path):
    # Strip a trailing slash so path handling is consistent.
    if path[-1] == '/':
        path = path[:-1]
    return path
def get_dir_name(path):