
Vishal Goklani vgoklani

@vgoklani
vgoklani / cvdiff.py
Created August 23, 2012 14:49 — forked from tobigue/cvdiff.py
Sklearn GridSearchCV vs. CrossValidation
from sklearn.linear_model import SGDClassifier
from sklearn import cross_validation
from sklearn import metrics
from sklearn.grid_search import GridSearchCV
from sklearn.datasets import load_iris
data = load_iris()
sample_vector = data.data
targets = data.target
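The preview above uses the long-removed `sklearn.cross_validation` and `sklearn.grid_search` modules. A minimal modern sketch of the same comparison (assuming scikit-learn >= 0.20, where both live in `sklearn.model_selection`) — the grid search's best CV score should match a manual `cross_val_score` run with the winning parameter:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import GridSearchCV, cross_val_score

data = load_iris()
X, y = data.data, data.target

# Grid search over the regularization strength.
param_grid = {"alpha": [1e-4, 1e-3, 1e-2]}
grid = GridSearchCV(SGDClassifier(max_iter=1000, tol=1e-3, random_state=0),
                    param_grid, cv=5)
grid.fit(X, y)

# Same estimator with the winning alpha, scored by plain cross-validation
# on the same default StratifiedKFold splits.
best = SGDClassifier(max_iter=1000, tol=1e-3, random_state=0,
                     alpha=grid.best_params_["alpha"])
manual = cross_val_score(best, X, y, cv=5)
```

Because both paths use identical folds and hyperparameters, `grid.best_score_` and `manual.mean()` coincide.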
@vgoklani
vgoklani / download_remote_dir.py
Created September 18, 2012 18:51 — forked from jseabold/download_remote_dir.py
walk and download a remote http file server
import re
import os
import urllib2
import urlparse
import pycurl
from lxml import html
url = "http://some.remote.org/"
path = "path/to/dir/listing/"
local = '/path/to/local/parent/directory/'
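The fork above is Python 2 (`urllib2`, `urlparse`). The core of walking a directory index is extracting the usable links from the listing page; a self-contained Python 3 sketch using only the stdlib `html.parser` (the actual fetching, omitted here, would use `urllib.request`):

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect href targets from an Apache-style directory index page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href", "")
            # Skip column-sort links ("?C=N;O=D") and the parent-directory link.
            if href and not href.startswith(("?", "/")):
                self.links.append(href)

# Hypothetical listing, for illustration only.
listing = """<html><body><pre>
<a href="?C=N;O=D">Name</a>
<a href="/path/to/">Parent Directory</a>
<a href="data1.csv">data1.csv</a>
<a href="data2.csv">data2.csv</a>
<a href="subdir/">subdir/</a>
</pre></body></html>"""

parser = LinkExtractor()
parser.feed(listing)
# parser.links now holds entries to process: recurse into names ending
# in "/", download the rest.
```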
@vgoklani
vgoklani / plot_skies.R
Created November 20, 2012 13:28 — forked from cjbayesian/plot_skies.R
Observing Dark Worlds visualization
################## Plot training skies ###################
##
## corey.chivers@mail.mcgill.ca
##
##########################################################
## calculate a vector given
## x,y,e1,e2
gal_line<-function(g,scale=100)
{
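The R helper above is cut off before its body. Assuming `gal_line` computes a segment oriented along a galaxy's ellipticity — orientation angle `phi = 0.5 * atan2(e2, e1)`, length scaled by the ellipticity modulus, which is an assumed reading since the body is truncated — the same idea in Python:

```python
import numpy as np

def ellipticity_segment(x, y, e1, e2, scale=100):
    """Endpoints of a line segment oriented along a galaxy's ellipticity.

    Assumed reading of the truncated R helper: the position angle of an
    ellipse with components (e1, e2) is phi = 0.5 * arctan2(e2, e1), and
    the segment length scales with the ellipticity modulus |e|.
    """
    phi = 0.5 * np.arctan2(e2, e1)
    r = scale * np.hypot(e1, e2)
    dx, dy = r * np.cos(phi), r * np.sin(phi)
    return (x - dx, y - dy), (x + dx, y + dy)

# A galaxy stretched purely along e1 lies along the x-axis.
(x0, y0), (x1, y1) = ellipticity_segment(0.0, 0.0, 1.0, 0.0)
```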
# Simulator for the simple Boltzmann machine of Coursera NN Lecture 11e
# Christian Jauvin - cjauvin@gmail.com
from collections import defaultdict
import numpy as np
# weights
w_v1_h1 = 2
w_h1_h2 = -1
w_h2_v2 = 1
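Assuming the three weights connect a chain of binary units v1 - h1 - h2 - v2 (as the variable names suggest), this machine is small enough that its equilibrium distribution can be computed exactly by enumerating all 16 joint states instead of simulating:

```python
from itertools import product
import numpy as np

# Chain of four binary units: v1 - h1 - h2 - v2 (weights from the gist).
w_v1_h1, w_h1_h2, w_h2_v2 = 2, -1, 1

def energy(v1, h1, h2, v2):
    # Standard Boltzmann-machine energy (no biases): E = -sum_ij w_ij s_i s_j
    return -(w_v1_h1 * v1 * h1 + w_h1_h2 * h1 * h2 + w_h2_v2 * h2 * v2)

# Enumerate every joint configuration and normalize exp(-E).
states = list(product([0, 1], repeat=4))
unnorm = np.array([np.exp(-energy(*s)) for s in states])
probs = unnorm / unnorm.sum()
```

A simulation like the gist's should converge to these exact probabilities in the long run.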
@vgoklani
vgoklani / astrofrog.py
Created December 1, 2012 02:32 — forked from dfm/astrofrog.py
Stochastic gradient descent example
import numpy as np
N = 5000
I_true = np.random.randn(N)
D = I_true[:, None] - I_true[None, :]
I0 = np.zeros_like(I_true)
eta = 1.0 / N # Learning rate.
tol = 1.25e-11 # Error tolerance.
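The preview stops before the update loop. The gist makes per-pair stochastic updates; as an illustrative sketch of the same least-squares problem — recovering the values `I_true` (up to an additive constant) from the pairwise differences `D` — here is the full-batch gradient step, which is not the gist's exact method but converges on the same objective:

```python
import numpy as np

N = 500
rng = np.random.default_rng(42)
I_true = rng.standard_normal(N)
D = I_true[:, None] - I_true[None, :]   # observed pairwise differences

I = np.zeros(N)
eta = 1.0 / (2 * N)   # step size chosen for the batch gradient below

for _ in range(20):
    resid = D - (I[:, None] - I[None, :])
    # Gradient of 0.5 * sum_ij resid_ij^2 w.r.t. I_k is -2 * resid.sum(axis=1)[k]
    I -= eta * (-2.0 * resid.sum(axis=1))
```

Only differences are observed, so `I` is identifiable up to a constant offset; compare centered values.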
@vgoklani
vgoklani / mlp.py
Created December 1, 2012 06:27 — forked from amueller/mlp.py
Multi-Layer Perceptron for scikit-learn with SGD in Python
import numpy as np
import warnings
from itertools import cycle, izip
from sklearn.utils import gen_even_slices
from sklearn.utils import shuffle
from sklearn.base import BaseEstimator
from sklearn.base import ClassifierMixin
from sklearn.preprocessing import LabelBinarizer
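amueller's full `mlp.py` is not shown here (and its `itertools.izip` import is Python 2 only). As a self-contained illustration of the same ingredients — one hidden layer trained by gradient descent on squared error — a NumPy sketch on the XOR toy problem:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary problem: XOR.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0.], [1.], [1.], [0.]])

n_hidden = 8
W1 = rng.normal(scale=0.5, size=(2, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.5, size=(n_hidden, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = np.tanh(X @ W1 + b1)       # hidden activations
    out = sigmoid(h @ W2 + b2)     # output probabilities
    return h, out

def loss(out, y):
    return float(np.mean((out - y) ** 2))

_, out0 = forward(X)
initial = loss(out0, y)

lr = 0.5
for _ in range(2000):
    h, out = forward(X)
    # Backprop: squared loss, sigmoid output, tanh hidden layer.
    d_out = (out - y) * out * (1 - out)
    grad_W2 = h.T @ d_out / len(X)
    grad_b2 = d_out.mean(axis=0)
    d_h = (d_out @ W2.T) * (1 - h ** 2)
    grad_W1 = X.T @ d_h / len(X)
    grad_b1 = d_h.mean(axis=0)
    W2 -= lr * grad_W2; b2 -= lr * grad_b2
    W1 -= lr * grad_W1; b1 -= lr * grad_b1

_, out1 = forward(X)
final = loss(out1, y)
```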
@vgoklani
vgoklani / mnist_kernel_approximation.py
Created December 1, 2012 06:28 — forked from amueller/mnist_kernel_approximation.py
mnist kernel approximation
# Standard scientific Python imports
import pylab as pl
import numpy as np
from time import time
# Import datasets, classifiers and performance metrics
from sklearn import datasets, svm, pipeline
from sklearn.kernel_approximation import RBFSampler, Nystroem
from sklearn.utils import shuffle
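`RBFSampler` is scikit-learn's implementation of random Fourier features (Rahimi & Recht). To show what the approximation does without depending on scikit-learn, a NumPy-only sketch: with `W ~ N(0, 2*gamma)` and `b ~ U[0, 2*pi]`, the feature map `z(x) = sqrt(2/D) * cos(x @ W + b)` satisfies `k(x, y) ≈ z(x) . z(y)` for the RBF kernel `k(x, y) = exp(-gamma * ||x - y||^2)`:

```python
import numpy as np

rng = np.random.default_rng(0)
gamma = 0.5                      # RBF kernel: k(x, y) = exp(-gamma * ||x - y||^2)
d, n_components = 3, 5000

# Random Fourier features for the RBF kernel.
W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, n_components))
b = rng.uniform(0, 2 * np.pi, size=n_components)

def z(X):
    return np.sqrt(2.0 / n_components) * np.cos(X @ W + b)

X = rng.standard_normal((20, d))
K_exact = np.exp(-gamma * np.sum((X[:, None] - X[None, :]) ** 2, axis=-1))
K_approx = z(X) @ z(X).T
err = np.abs(K_exact - K_approx).max()   # shrinks like O(1/sqrt(n_components))
```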
@vgoklani
vgoklani / ml_with_sklearn_notebook.ipynb
Created December 1, 2012 06:29 — forked from amueller/ml_with_sklearn_notebook.ipynb
Teaser on machine learning with scikit-learn