Pramod Srinivasan (@domarps) / GitHub Gists
import java.util.*;

public class HelloWorld {

    static List<String> list = new ArrayList<>();

    // Collect every expression built from the digits of s that evaluates to target.
    static String[] generate_all_expressions(String s, long target) {
        expressionRecurse("", 0, 0, 0, target, s);
        String[] result = new String[list.size()];
        int j = 0;
        for (String expr : list) {
            result[j++] = expr;
        }
        return result;
    }
}
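The recursive helper expressionRecurse is not shown in the gist. A hedged sketch of what it might look like, inferred from the call expressionRecurse("", 0, 0, 0, target, s): the classic approach tries '+', '-', and '*' between digit groups while tracking the running value and the last operand. The parameter names are my own, and this method would live inside HelloWorld.

    static void expressionRecurse(String expr, int pos, long curVal, long prevOperand, long target, String s) {
        if (pos == s.length()) {
            if (curVal == target) list.add(expr); // a complete expression that hits the target
            return;
        }
        for (int end = pos + 1; end <= s.length(); end++) {
            String part = s.substring(pos, end);
            if (part.length() > 1 && part.charAt(0) == '0') break; // skip operands with leading zeros
            long num = Long.parseLong(part);
            if (pos == 0) {
                expressionRecurse(part, end, num, num, target, s); // first operand, no operator
            } else {
                expressionRecurse(expr + "+" + part, end, curVal + num, num, target, s);
                expressionRecurse(expr + "-" + part, end, curVal - num, -num, target, s);
                expressionRecurse(expr + "*" + part, end, curVal - prevOperand + prevOperand * num, prevOperand * num, target, s);
            }
        }
    }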
prev = None

def convert(node, head, entered):
    """In-order traversal that rewires a BST into a sorted doubly linked list."""
    global prev
    if node is not None:
        convert(node.left, head, entered)
        node.left = prev        # left pointer becomes the list's "previous"
        if prev is None:
            head[0] = node      # the leftmost node becomes the head of the list
        else:
            prev.right = node   # right pointer becomes the list's "next"
        prev = node
        convert(node.right, head, entered)
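A quick usage sketch, assuming a minimal Node class with left/right pointers (the gist does not show one, so the class below is hypothetical):

    class Node:
        def __init__(self, val):
            self.val, self.left, self.right = val, None, None

    # Build a three-node BST with 2 at the root, 1 and 3 as children.
    root = Node(2)
    root.left, root.right = Node(1), Node(3)

    head = [None]
    convert(root, head, False)

    # Walk the resulting list left to right: prints 1, 2, 3.
    node = head[0]
    while node:
        print(node.val)
        node = node.right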
import java.util.*;

public class GenerateSubsets {
    /*
     * Given an input array of n elements, find all of its subsets.
     */
    public static String getString(char[] subset, int subStIndex) {
        // Render the first subStIndex characters of the scratch buffer.
        return new String(subset, 0, subStIndex);
    }
}
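The gist cuts off before the generator itself. A hedged sketch of the usual include/exclude recursion that would pair with getString (the method name printSubsets and its parameters are my own, and it would live inside GenerateSubsets):

    static void printSubsets(char[] input, char[] subset, int inputIndex, int subStIndex) {
        if (inputIndex == input.length) {
            System.out.println(getString(subset, subStIndex)); // one of the 2^n subsets
            return;
        }
        // Exclude input[inputIndex] from the current subset...
        printSubsets(input, subset, inputIndex + 1, subStIndex);
        // ...or include it and recurse.
        subset[subStIndex] = input[inputIndex];
        printSubsets(input, subset, inputIndex + 1, subStIndex + 1);
    }

Called as printSubsets(arr, new char[arr.length], 0, 0), this prints all 2^n subsets of arr.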
@domarps
domarps / kmeans.ipynb
Created May 13, 2018 21:17 — forked from jakevdp/kmeans.ipynb
Performance Python: 7 Strategies for Optimizing Your Numerical Code (PyCon 2018)
def batchnorm_forward(x, gamma, beta, bn_param):
    """
    Forward pass for batch normalization.

    During training the sample mean and (uncorrected) sample variance are
    computed from minibatch statistics and used to normalize the incoming data.
    During training we also keep an exponentially decaying running mean of the
    mean and variance of each feature, and these averages are used to normalize
    data at test-time.

    At each timestep we update the running averages for mean and variance using
    an exponential decay based on the momentum parameter:

        running_mean = momentum * running_mean + (1 - momentum) * sample_mean
        running_var = momentum * running_var + (1 - momentum) * sample_var
    """
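The gist stops inside the docstring, so here is a hedged sketch of a training/test body, assuming x has shape (N, D) and bn_param carries 'mode', 'eps', 'momentum', and the running statistics (this follows the common CS231n-style layout, not necessarily the gist's exact solution):

    import numpy as np

    def batchnorm_forward_sketch(x, gamma, beta, bn_param):
        eps = bn_param.get('eps', 1e-5)
        momentum = bn_param.get('momentum', 0.9)
        N, D = x.shape
        running_mean = bn_param.get('running_mean', np.zeros(D))
        running_var = bn_param.get('running_var', np.zeros(D))

        if bn_param['mode'] == 'train':
            sample_mean = x.mean(axis=0)   # per-feature mean over the minibatch
            sample_var = x.var(axis=0)     # uncorrected per-feature variance
            x_hat = (x - sample_mean) / np.sqrt(sample_var + eps)
            running_mean = momentum * running_mean + (1 - momentum) * sample_mean
            running_var = momentum * running_var + (1 - momentum) * sample_var
        else:  # 'test': normalize with the stored running averages
            x_hat = (x - running_mean) / np.sqrt(running_var + eps)

        bn_param['running_mean'], bn_param['running_var'] = running_mean, running_var
        return gamma * x_hat + beta        # scale and shift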
def layernorm_forward(x, gamma, beta, ln_param):
    """
    Forward pass for layer normalization.

    During both training and test-time, the incoming data is normalized per
    data-point before being scaled by gamma and beta parameters identical to
    those of batch normalization.

    In contrast to batch normalization, layer normalization behaves identically
    at train and test time, so we do not need to keep track of running averages
    of any sort.
    """
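Since no running statistics are needed, the body is shorter than batch normalization's. A hedged sketch, normalizing each row across its features rather than across the batch:

    import numpy as np

    def layernorm_forward_sketch(x, gamma, beta, ln_param):
        eps = ln_param.get('eps', 1e-5)
        mean = x.mean(axis=1, keepdims=True)  # one mean per data point
        var = x.var(axis=1, keepdims=True)    # one variance per data point
        x_hat = (x - mean) / np.sqrt(var + eps)
        return gamma * x_hat + beta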
# FC1: affine layer
z1 = np.dot(X, W1) + b1   # N x H
# ReLU nonlinearity
h1 = z1.copy()            # N x H
h1[h1 < 0] = 0            # zero out negative activations
# FC2: affine layer producing class scores
z2 = np.dot(h1, W2) + b2  # N x C
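The snippet stops at the raw class scores; a hedged continuation showing how z2 would typically feed a softmax cross-entropy loss (the label vector y of correct class indices is an assumption, not shown in the gist):

    # Shift scores for numerical stability, then normalize each row.
    scores = z2 - z2.max(axis=1, keepdims=True)
    probs = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)  # N x C
    # Average cross-entropy over the batch; y is assumed to hold integer labels.
    N = X.shape[0]
    loss = -np.log(probs[np.arange(N), y]).mean()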
@domarps
domarps / knights_tour.py
Created April 7, 2018 22:26
knight's tour on a chessboard
import queue

class knightsTour:
    def __init__(self):
        pass

    def is_within_limits(self, i, j, M, N):
        '''
        :param i: row of the candidate square
        :param j: column of the candidate square
        :param M: number of rows on the board
        :param N: number of columns on the board
        :return: True if (i, j) lies on the M x N board
        '''
        return 0 <= i < M and 0 <= j < N
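The gist truncates here. Given the queue import, the class presumably runs a breadth-first search over knight moves; a hedged sketch of a minimum-moves method built on is_within_limits (the method name and signature are my own):

        def min_moves(self, start, goal, M, N):
            moves = [(2, 1), (1, 2), (-1, 2), (-2, 1),
                     (-2, -1), (-1, -2), (1, -2), (2, -1)]
            q = queue.Queue()
            q.put((start, 0))
            seen = {start}
            while not q.empty():
                (i, j), dist = q.get()
                if (i, j) == goal:
                    return dist
                for di, dj in moves:
                    nxt = (i + di, j + dj)
                    if self.is_within_limits(nxt[0], nxt[1], M, N) and nxt not in seen:
                        seen.add(nxt)
                        q.put((nxt, dist + 1))
            return -1  # goal is unreachable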
@domarps
domarps / ImportModulesNotebook.ipynb
Created March 7, 2018 17:19 — forked from Breta01/ImportModulesNotebook.ipynb
Example of importing multiple TensorFlow modules
@domarps
domarps / file-remover.py
Created August 10, 2017 22:33
snippet to remove files
def remove_blacklisted_lines():
    # Opening the same file for reading and writing would truncate it before
    # it could be read, so filter into a separate output file instead.
    with open('some.csv') as oldfile, open('some_filtered.csv', 'w') as newfile:
        for line in oldfile:
            ss = line.split('_')
            # remove_urls is assumed to be defined elsewhere in the gist: a
            # collection of blacklisted tokens matched against the field.
            if not any(t in ss[-2] for t in remove_urls):
                newfile.write(line)

import glob
import os
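The gist ends right after these imports; presumably they feed a glob-based cleanup along the lines of this hedged sketch (the function name and the '*.tmp' pattern are assumptions):

    def remove_files(pattern='*.tmp'):
        # Delete every file in the current directory matching the pattern.
        for path in glob.glob(pattern):
            os.remove(path)
            print('removed', path)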