Tejas Khot (tejaskhot)
@rreas
rreas / spkmeans.m
Created June 2, 2012 17:55
Spherical K-Means Clustering
function [U,V,idx] = spkmeans(X,k,tol,imax)
[d,n] = size(X);
U = zeros(d,k);
V = zeros(k,n);
% random clusters and normalize to unit sphere.
for j = 1:n
V(randi(k),j) = 1;
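
The preview above cuts off mid-loop. For reference, a minimal numpy sketch of the same idea (not the gist's MATLAB code, and with illustrative defaults): spherical k-means normalizes points to the unit sphere and clusters them by cosine similarity.

import numpy as np

def spherical_kmeans(X, k, iters=100, seed=0):
    """X: (d, n) data matrix; returns unit-norm centroids U (d, k) and assignments idx (n,)."""
    rng = np.random.default_rng(seed)
    X = X / np.linalg.norm(X, axis=0, keepdims=True)      # project points onto the unit sphere
    U = X[:, rng.choice(X.shape[1], k, replace=False)]    # initialize centroids from data points
    for _ in range(iters):
        idx = np.argmax(U.T @ X, axis=0)                  # assign each point by cosine similarity
        for j in range(k):
            members = X[:, idx == j]
            if members.size:                              # skip empty clusters
                m = members.sum(axis=1)
                U[:, j] = m / np.linalg.norm(m)           # mean direction, re-normalized to unit length
    return U, idx
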
@KWMalik
KWMalik / interviewitems.MD
Created September 16, 2012 22:04 — forked from amaxwell01/interviewitems.MD
My answers to over 100 Google interview questions

## Google Interview Questions: Product Marketing Manager

  • Why do you want to join Google? -- Because I want to create tools for others to learn, for free. I didn't have a lot of money growing up, so I didn't get access to the same books, computers, and resources that others had because they cost money. I want to help ensure that others can learn on a level playing field regardless of their family's wealth or location.
  • What do you know about Google’s product and technology? -- A lot, actually; I am a beta tester for numerous products and use most of the Google tools: Search, Gmail, Drive, Reader, Calendar, G+, YouTube, Webmaster Tools, the Keyword Tool, Analytics, etc.
  • If you are Product Manager for Google’s Adwords, how do you plan to market this?
  • What would you say during an AdWords or AdSense product seminar?
  • Who are Google’s competitors, and how does Google compete with them? -- Google competes in numerous fields: --- Search: Baidu, Bing, DuckDuckGo
@tsiege
tsiege / The Technical Interview Cheat Sheet.md
Last active June 12, 2024 03:08
This is my technical interview cheat sheet. Feel free to fork it or do whatever you want with it. PLEASE let me know if there are any errors or if anything crucial is missing. I will add more links soon.

ANNOUNCEMENT

I have moved this over to the Tech Interview Cheat Sheet Repo, which has been expanded and even has code challenges you can run and practice against!

@kevin-keraudren
kevin-keraudren / volume_rendering.py
Last active December 8, 2023 17:10
Volume rendering in Python using VTK-SimpleITK
#!/usr/bin/python
import SimpleITK as sitk
import vtk
import numpy as np
import sys
from vtk.util.vtkConstants import *
filename = sys.argv[1]
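
The preview stops at the imports. A minimal sketch of how such a script might continue (an assumption, not the gist's actual code), picking up the `filename`, `sitk`, and `vtk` names defined above: read the volume with SimpleITK and hand the voxel array to VTK through the numpy_support bridge.

from vtk.util import numpy_support
img = sitk.ReadImage(filename)                      # load the volume given on the command line
arr = sitk.GetArrayFromImage(img)                   # voxels as a (z, y, x) numpy array
vtk_scalars = numpy_support.numpy_to_vtk(arr.ravel(), deep=True)
vtk_img = vtk.vtkImageData()
vtk_img.SetDimensions(arr.shape[2], arr.shape[1], arr.shape[0])
vtk_img.SetSpacing(img.GetSpacing())
vtk_img.GetPointData().SetScalars(vtk_scalars)
# vtk_img can now feed a vtkSmartVolumeMapper / vtkVolume / vtkRenderer pipeline.
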
@debasishg
debasishg / gist:b4df1648d3f1776abdff
Last active January 20, 2021 12:15
another attempt to organize my ML readings ..
  1. Feature Learning
  2. Deep Learning
@karpathy
karpathy / gist:587454dc0146a6ae21fc
Last active June 7, 2024 05:09
An efficient, batched LSTM.
"""
This is a batched LSTM forward and backward pass
"""
import numpy as np
import code
class LSTM:

    @staticmethod
    def init(input_size, hidden_size, fancy_forget_bias_init = 3):
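
The gist preview ends at the init signature. As a rough illustration of what one batched LSTM forward step computes, here is a generic numpy sketch (not Karpathy's actual implementation; the stacked gate/weight layout is an assumption for illustration):

import numpy as np

def lstm_step(x, h_prev, c_prev, W):
    """x: (batch, input_size); h_prev, c_prev: (batch, hidden_size);
    W: (input_size + hidden_size + 1, 4 * hidden_size), gates stacked as [i, f, o, g]."""
    b = x.shape[0]
    h = h_prev.shape[1]
    z = np.hstack([x, h_prev, np.ones((b, 1))])          # inputs, recurrent state, and a bias column
    gates = z.dot(W)                                     # all four gates in one batched matrix multiply
    i = 1.0 / (1.0 + np.exp(-gates[:, :h]))              # input gate
    f = 1.0 / (1.0 + np.exp(-gates[:, h:2 * h]))         # forget gate
    o = 1.0 / (1.0 + np.exp(-gates[:, 2 * h:3 * h]))     # output gate
    g = np.tanh(gates[:, 3 * h:])                        # candidate cell state
    c = f * c_prev + i * g                               # new cell state
    h_new = o * np.tanh(c)                               # new hidden state
    return h_new, c
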
@myungsub
myungsub / iccv2015.md
Last active May 17, 2017 10:23
upload candidates to awesome-deep-vision

Vision & Language

  • Ask Your Neurons: A Neural-Based Approach to Answering Questions About Images

    • Mateusz Malinowski, Marcus Rohrbach, Mario Fritz
  • Aligning Books and Movies: Towards Story-Like Visual Explanations by Watching Movies and Reading Books

    • Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, Sanja Fidler
  • Learning Query and Image Similarities With Ranking Canonical Correlation Analysis

    • Ting Yao, Tao Mei, Chong-Wah Ngo

@saliksyed
saliksyed / autoencoder.py
Created November 18, 2015 03:30
Tensorflow Auto-Encoder Implementation
""" Deep Auto-Encoder implementation
An auto-encoder works as follows:
Data of dimension k is reduced to a lower dimension j using a matrix multiplication:
softmax(W*x + b) = x'
where W is a matrix from R^k --> R^j
A reconstruction matrix W' maps back from R^j --> R^k
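
A minimal numpy sketch of the encode/decode step described in that docstring (illustrative only; the gist itself is a TensorFlow implementation, and the linear decoder used here is an assumption):

import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

k, j = 8, 3                                   # original and reduced dimensions (illustrative)
rng = np.random.default_rng(0)
W = rng.standard_normal((j, k))               # encoder weights: R^k -> R^j
b = np.zeros(j)
W_prime = rng.standard_normal((k, j))         # reconstruction matrix W': R^j -> R^k
b_prime = np.zeros(k)

x = rng.standard_normal(k)
x_code = softmax(W @ x + b)                   # reduced representation x'
x_recon = W_prime @ x_code + b_prime          # reconstruction back in R^k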

Interactive Machine Learning

Taught by Brad Knox at the MIT Media Lab in 2014. Course website. Lecture and visiting speaker notes.

@kashif
kashif / cem.md
Last active November 7, 2023 12:56
Cross Entropy Method

Cross Entropy Method

How do we solve the policy optimization problem of maximizing the total reward given some parametrized policy?

Discounted future reward

To begin with, the total reward for an episode is the sum of all its rewards. If our environment is stochastic, we can never be sure that we will get the same rewards the next time we perform the same actions, so the further we look into the future, the more the total future reward may diverge. For that reason it is common to use the discounted future reward, where each future reward is weighted by a discount factor between 0 and 1.
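
A small illustrative snippet (not from the gist) computing the discounted return R = r_0 + gamma * r_1 + gamma^2 * r_2 + ..., with gamma standing in for the discount factor:

def discounted_return(rewards, gamma=0.99):
    R = 0.0
    for r in reversed(rewards):               # accumulate from the end of the episode backwards
        R = r + gamma * R
    return R

print(discounted_return([1.0, 0.0, 2.0], gamma=0.9))   # 1 + 0.9*0 + 0.81*2 = 2.62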

A good strategy for an agent would be to always choose an action that maximizes the (discounted) future reward. In other words, we want to maximize the expected reward per episode.
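
A minimal numpy sketch of the cross-entropy method itself (a generic illustration of the usual Gaussian-sampling formulation, not the gist's code): sample policy parameters from a Gaussian, evaluate each candidate's episode reward, keep an elite fraction, and refit the Gaussian to the elites.

import numpy as np

def cross_entropy_method(evaluate, dim, iters=50, pop_size=50, elite_frac=0.2, seed=0):
    """evaluate(theta) -> expected episode reward for policy parameters theta (user-supplied)."""
    rng = np.random.default_rng(seed)
    mu, sigma = np.zeros(dim), np.ones(dim)
    n_elite = int(pop_size * elite_frac)
    for _ in range(iters):
        thetas = rng.normal(mu, sigma, size=(pop_size, dim))          # sample candidate parameter vectors
        rewards = np.array([evaluate(t) for t in thetas])
        elites = thetas[np.argsort(rewards)[-n_elite:]]               # best-performing candidates
        mu, sigma = elites.mean(axis=0), elites.std(axis=0) + 1e-6    # refit the sampling distribution
    return mu

# Toy usage: maximize -||theta - 3||^2, whose optimum is theta = 3.
best_theta = cross_entropy_method(lambda t: -np.sum((t - 3.0) ** 2), dim=2)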