Tejas Khot (tejaskhot)
import faiss
import numpy as np

# example data
xb = np.random.rand(200000, 32).astype('float32')  # database vectors
xq = np.random.rand(500, 32).astype('float32')     # query vectors

# get reference result with a flat (exact) L2 index
index = faiss.IndexFlatL2(xb.shape[1])
index.add(xb)
lims, D, I = index.range_search(xq, 1.5)  # all matches within squared L2 distance 1.5
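range_search returns flat arrays indexed through lims: the matches for query i occupy the half-open slice lims[i]:lims[i+1]. A minimal sketch of unpacking the results per query:

# neighbours of query i sit in the slice [lims[i], lims[i+1])
for i in range(xq.shape[0]):
    neighbors = I[lims[i]:lims[i+1]]
    distances = D[lims[i]:lims[i+1]]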
@tejaskhot
tejaskhot / README-Template.md
Created September 6, 2018 13:30 — forked from PurpleBooth/README-Template.md
A template to make a good README.md

Project Title

One Paragraph of project description goes here

Getting Started

These instructions will get you a copy of the project up and running on your local machine for development and testing purposes. See deployment for notes on how to deploy the project on a live system.

Prerequisites

@tejaskhot
tejaskhot / Chamfer_Distance_Pytorch.py
Created August 10, 2018 19:30 — forked from WangZixuan/Chamfer_Distance_Pytorch.py
Use PyTorch to calculate Chamfer distance
import torch

def chamfer_distance_without_batch(p1, p2, debug=False):
    '''
    Calculate Chamfer Distance between two point sets
    :param p1: size[1, N, D]
    :param p2: size[1, M, D]
    :param debug: whether to output debug info
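The preview cuts off before the function body. As a rough illustration only — not necessarily the gist's original implementation — one common way to compute the Chamfer distance in PyTorch uses torch.cdist:

import torch

def chamfer_distance(p1, p2):
    # p1: [1, N, D], p2: [1, M, D]
    dist = torch.cdist(p1, p2) ** 2  # pairwise squared distances, [1, N, M]
    d1 = dist.min(dim=2)[0].mean()   # each point in p1 -> nearest point in p2
    d2 = dist.min(dim=1)[0].mean()   # each point in p2 -> nearest point in p1
    return d1 + d2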
@tejaskhot
tejaskhot / cem.md
Created November 18, 2017 19:03 — forked from kashif/cem.md
Cross Entropy Method

Cross Entropy Method

How do we solve the policy optimization problem, i.e. maximize the total reward given some parametrized policy?

Discounted future reward

To begin with, the total reward for an episode is the sum of all its rewards. If our environment is stochastic, we can never be sure we will get the same rewards the next time we perform the same actions, so the further we look into the future, the more the total future reward may diverge. For that reason it is common to use the discounted future reward R_t = r_t + γ r_{t+1} + γ² r_{t+2} + ⋯, where the discount factor γ is between 0 and 1.
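As a concrete illustration (the reward values below are made up), the discounted return of an episode can be computed as:

# discounted return of one episode; gamma is the discount factor
gamma = 0.99
rewards = [1.0, 0.0, 2.0, 1.0]  # illustrative per-step rewards
R = sum(gamma**t * r for t, r in enumerate(rewards))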

A good strategy for an agent is to always choose the action that maximizes the (discounted) future reward. In other words, we want to maximize the expected reward per episode.
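The preview ends here, but since the gist is about the cross-entropy method, a rough numpy sketch of the idea may help (all names and hyperparameters below are illustrative, not taken from the gist): sample candidate policy parameters from a Gaussian, evaluate each, keep an elite fraction, and refit the Gaussian to the elites.

import numpy as np

def cem(evaluate, mu, sigma, n_samples=50, elite_frac=0.2, n_iters=20):
    # evaluate: maps a parameter vector to the total episode reward (black box)
    n_elite = int(n_samples * elite_frac)
    for _ in range(n_iters):
        # sample candidate parameter vectors from the current Gaussian
        thetas = np.random.randn(n_samples, mu.size) * sigma + mu
        rewards = np.array([evaluate(theta) for theta in thetas])
        # keep the best-performing samples and refit the Gaussian to them
        elite = thetas[rewards.argsort()[-n_elite:]]
        mu, sigma = elite.mean(axis=0), elite.std(axis=0)
    return mu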

@tejaskhot
tejaskhot / interviewitems.MD
Created September 3, 2016 12:41 — forked from KWMalik/interviewitems.MD
My answers to over 100 Google interview questions

## Google Interview Questions: Product Marketing Manager

  • Why do you want to join Google? -- Because I want to create tools for others to learn, for free. I didn't have a lot of money growing up, so I didn't get access to the same books, computers, and resources that others had because they cost money. I want to help ensure that others can learn on a level playing field regardless of their family's wealth or location.
  • What do you know about Google's products and technology? -- A lot, actually; I am a beta tester for numerous products, and I use most of the Google tools, such as Search, Gmail, Drive, Reader, Calendar, G+, YouTube, Webmaster Tools, the Keyword Tool, Analytics, etc.
  • If you are Product Manager for Google’s Adwords, how do you plan to market this?
  • What would you say during an AdWords or AdSense product seminar?
  • Who are Google's competitors, and how does Google compete with them? -- Google competes in numerous fields:
    • Search: Baidu, Bing, DuckDuckGo
@tejaskhot
tejaskhot / git_extract_commits.sh
Last active September 14, 2015 07:13 — forked from xinan/git_extract_commits.sh
This is a simple shell script to extract commits by a specified author in a git repository and format them as numbered patches. It is useful for GSoC code submission.
# allow: author only, author + date range, or author + date range + output dir
if ! [[ $# -eq 1 || $# -eq 3 || $# -eq 4 ]]; then
  echo "Usage: $0 <author> [<start_date> <end_date>] [output_dir]"
  echo "Example: $0 xinan@me.com 2015-05-25 2015-08-21 ./patches"
  exit 1
fi

author=$1
if [ $# -gt 3 ]; then
  output_dir=$4