Tejas Khot (tejaskhot)
tejaskhot / perceptron.py
Last active August 29, 2015 14:14
Perceptron Learning Algorithm
import numpy as np
import random

# step activation: 0 for negative input, 1 otherwise
unit_step = lambda x: 0 if x < 0 else 1

## create dummy data: (input with bias term, label) pairs for the OR function
training_data = [
    (np.array([0, 0, 1]), 0),
    (np.array([0, 1, 1]), 1),
    (np.array([1, 0, 1]), 1),
    (np.array([1, 1, 1]), 1),
]

w = np.random.rand(3)
errors = []
learning_rate = 0.2
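The preview stops before the update loop; a minimal completion of the classic perceptron update rule, consistent with the variables above (the iteration count is an assumption):

# train: nudge w toward misclassified examples (100 iterations is an assumption)
for _ in range(100):
    x, expected = random.choice(training_data)
    error = expected - unit_step(np.dot(w, x))
    errors.append(error)
    w += learning_rate * error * x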
import numpy as np
import theano
import theano.tensor as T
from theano import function, config, shared, sandbox
from theano import ProfileMode
import warnings
warnings.filterwarnings("ignore")
# Dummy Data
tejaskhot / install_ffmpeg.sh
Created July 6, 2015 03:50
Installing ffmpeg-2.4.2 on Ubuntu 14.04
sudo apt-add-repository ppa:samrog131/ppa
sudo apt-get update
sudo apt-get install ffmpeg-real
# Very important: create a symbolic link so the ffmpeg binary is found on the PATH
sudo ln -sf /opt/ffmpeg/bin/ffmpeg /usr/bin/ffmpeg
tejaskhot / ffmpeg_error_resolve
Created July 16, 2015 11:28
ffmpeg: error while loading shared libraries: libssl.so.10: cannot open shared object file: No such file or directory
To resolve this, we need to create a soft link between two files.
First, go to binstar.com and install the most recent ffmpeg conda package.
This will create a libssl1.1.0 file in /anaconda/lib/.
Navigate to /usr/lib/ and notice that the libssl file does not exist there.
We need to copy the file there:
sudo cp /anaconda/lib/libssl1.1.0 /usr/lib/
But the error says that libssl.so.10 is not found.
We need to create a soft link like this:
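The preview cuts off at this point; presumably the link maps the copied library to the name the loader expects, something like (the exact filenames are assumptions taken from the steps above):

sudo ln -s /usr/lib/libssl1.1.0 /usr/lib/libssl.so.10  # assumed target name, from the error message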
tejaskhot / git_extract_commits.sh
Last active September 14, 2015 07:13 — forked from xinan/git_extract_commits.sh
This is a simple shell script to extract commits by a specified author in a git repository and format them as numbered patches. It is useful for GSoC code submission.
# valid argument counts per the usage line: 1 (author), 3 (author + date range), 4 (author + date range + output_dir)
if ! [[ $# -eq 1 || $# -eq 3 || $# -eq 4 ]]; then
    echo "Usage: $0 <author> [<start_date> <end_date>] [output_dir]"
    echo "Example: $0 xinan@me.com 2015-05-25 2015-08-21 ./patches"
    exit 1
fi

author=$1
if [ $# -gt 3 ]; then
    output_dir=$4
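The preview ends mid-script; a minimal sketch of the core extraction step the description implies, looping over the author's commits and formatting each as a numbered patch (the default directory and loop structure are assumptions, not the forked script's code):

output_dir=${output_dir:-./patches}   # assumed default
mkdir -p "$output_dir"

n=0
for commit in $(git log --author="$author" --reverse --pretty=format:%H); do
    n=$((n + 1))
    # emit one numbered patch file per commit
    git format-patch -1 "$commit" --start-number "$n" -o "$output_dir"
done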
tejaskhot / interviewitems.MD
Created September 3, 2016 12:41 — forked from KWMalik/interviewitems.MD
My answers to over 100 Google interview questions

## Google Interview Questions: Product Marketing Manager

  • Why do you want to join Google? -- Because I want to create tools for others to learn, for free. I didn't have a lot of money growing up, so I didn't get access to the same books, computers, and resources that others had. I want to help ensure that others can learn on the same playing field regardless of their family's wealth or location.
  • What do you know about Google’s product and technology? -- A lot, actually; I am a beta tester for numerous products, and I use most of the Google tools, such as Search, Gmail, Drive, Reader, Calendar, G+, YouTube, Webmaster Tools, the Keyword tool, Analytics, etc.
  • If you are Product Manager for Google’s Adwords, how do you plan to market this?
  • What would you say during an AdWords or AdSense product seminar?
  • Who are Google’s competitors, and how does Google compete with them? -- Google competes in numerous fields. Search: Baidu, Bing, DuckDuckGo
tejaskhot / cem.md
Created November 18, 2017 19:03 — forked from kashif/cem.md
Cross Entropy Method

Cross Entropy Method

How do we solve the policy optimization problem, which is to maximize the total reward given some parametrized policy?

Discounted future reward

To begin with, for an episode the total reward is the sum of all the rewards. If our environment is stochastic, we can never be sure if we will get the same rewards the next time we perform the same actions. Thus the further we go into the future, the more the total future reward may diverge. For that reason it is common to use the discounted future reward, R_t = r_t + γ r_{t+1} + γ² r_{t+2} + …, where the parameter γ, called the discount factor, is between 0 and 1.

A good strategy for an agent would be to always choose an action that maximizes the (discounted) future reward. In other words we want to maximize the expected reward per episode.
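The preview ends before any implementation; as a concrete illustration of the cross-entropy method described above, here is a minimal sketch for a generic black-box reward function (the name `evaluate_policy` and all hyperparameters are assumptions, not the gist's code):

import numpy as np

def cross_entropy_method(evaluate_policy, dim, n_iter=50, batch_size=100, elite_frac=0.2):
    # maximize evaluate_policy(theta) over parameter vectors theta
    mean, std = np.zeros(dim), np.ones(dim)
    n_elite = int(batch_size * elite_frac)
    for _ in range(n_iter):
        # sample candidate parameter vectors from the current Gaussian
        thetas = mean + std * np.random.randn(batch_size, dim)
        rewards = np.array([evaluate_policy(t) for t in thetas])
        # refit the Gaussian to the top elite_frac of samples
        elite = thetas[np.argsort(rewards)[-n_elite:]]
        mean, std = elite.mean(axis=0), elite.std(axis=0)
    return mean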

tejaskhot / theano_shape.markdown
Created June 13, 2015 06:40
Getting shape and size information for a Theano SharedVariable

You can get the value of a shared variable like this:

w.get_value()

Then this would work:

w.get_value().shape

But this will copy the shared variable's contents. To avoid the copy, you can use the borrow parameter like this:
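The preview cuts off here; the call it is leading up to is presumably Theano's standard borrow idiom:

w.get_value(borrow=True).shape

With borrow=True, get_value avoids the deep copy where it can, so taking .shape is cheap; just don't mutate the returned array.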

from __future__ import print_function
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable
def sample_gumbel(shape, eps=1e-20):
    # sample Gumbel(0, 1) noise via the inverse transform -log(-log(U)); eps guards against log(0)
    U = torch.rand(shape).cuda()
    return -Variable(torch.log(-torch.log(U + eps) + eps))
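The preview cuts off after sample_gumbel; the usual next step in a Gumbel-Softmax implementation is a sketch like this (an assumption, not necessarily this gist's code):

def gumbel_softmax_sample(logits, temperature):
    # perturb logits with Gumbel noise, then take a temperature-controlled softmax
    y = logits + sample_gumbel(logits.size())
    return F.softmax(y / temperature, dim=-1)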
tejaskhot / image_montage
Created August 11, 2018 00:23
Generating a horizontal image montage
Source: https://stackoverflow.com/questions/2853334/glueing-tile-images-together-using-imagemagicks-montage-command-without-resizin

montage -mode concatenate -tile 12x *.jpg out.jpg

Here -mode concatenate glues the images together at their original sizes, and -tile 12x lays out up to 12 images per row.