Tejas Khot (tejaskhot)
@WangZixuan
WangZixuan / Chamfer_Distance_Pytorch.py
Created May 18, 2018 14:08
Use PyTorch to calculate Chamfer distance
import torch
def chamfer_distance_without_batch(p1, p2, debug=False):
    '''
    Calculate Chamfer Distance between two point sets
    :param p1: size[1, N, D]
    :param p2: size[1, M, D]
    :param debug: whether need to output debug info
@synapticarbors
synapticarbors / tsp-portrait2.py
Last active April 30, 2018 15:00
Traveling Salesman Portrait
'''
This script is based on the original work of Randal S. Olson (randalolson.com) for the Traveling Salesman Portrait project.
http://www.randalolson.com/2018/04/11/traveling-salesman-portrait-in-python/
Please check out the original project repository for information:
https://github.com/rhiever/Data-Analysis-and-Machine-Learning-Projects
The script was updated by Joshua L. Adelman, adapting the work of Antonio S. Chinchón described in the following blog post:
https://fronkonstin.com/2018/04/17/pencil-scribbles/
@mikigom
mikigom / tf_bilinear_additive_upsampling.py
Created July 24, 2017 10:41
Tensorflow Implementation of Bilinear Additive Upsampling
import tensorflow as tf
"""
Author : @MikiBear_
Tensorflow Implementation of Bilinear Additive Upsampling.
Reference : https://arxiv.org/abs/1707.05847
"""
def bilinear_additive_upsampling(x, to_channel_num, name):
    from_channel_num = x.get_shape().as_list()[3]
    assert from_channel_num % to_channel_num == 0
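    # --- sketch continuation below; assumed for illustration, not the gist's verbatim body ---
    # bilinearly upsample each channel, then sum groups of channels down to to_channel_num
    channel_split = from_channel_num // to_channel_num
    shape = x.get_shape().as_list()                            # [batch, H, W, C]; static shapes assumed
    new_size = [shape[1] * 2, shape[2] * 2]                    # assumed upsampling factor of 2
    upsampled = tf.image.resize_bilinear(x, new_size, name=name)
    grouped = tf.reshape(upsampled,
                         [-1, new_size[0], new_size[1], to_channel_num, channel_split])
    return tf.reduce_sum(grouped, axis=4)                      # [batch, 2H, 2W, to_channel_num]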
@j-min
j-min / exp_lr_scheduler.py
Created June 25, 2017 14:07
learning rate decay in pytorch
# http://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html
def exp_lr_scheduler(optimizer, epoch, init_lr=0.001, lr_decay_epoch=7):
    """Decay learning rate by a factor of 0.1 every lr_decay_epoch epochs."""
    lr = init_lr * (0.1 ** (epoch // lr_decay_epoch))
    if epoch % lr_decay_epoch == 0:
        print('LR is set to {}'.format(lr))
    for param_group in optimizer.param_groups:
        param_group['lr'] = lr
    return optimizer
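A hedged usage sketch: the scheduler is typically called once per epoch before the training step; the model, data, and training loop below are placeholders, not part of the gist.
import torch.optim as optim
optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.9)   # model assumed to exist
for epoch in range(25):
    optimizer = exp_lr_scheduler(optimizer, epoch, init_lr=0.001, lr_decay_epoch=7)
    # ... run one epoch of training with this optimizer ...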
@kashif
kashif / cem.md
Last active November 7, 2023 12:56
Cross Entropy Method

Cross Entropy Method

How do we solve the policy optimization problem, that is, how do we maximize the total reward given some parametrized policy?

Discounted future reward

To begin with, the total reward for an episode is the sum of all the rewards. If our environment is stochastic, we can never be sure we will get the same rewards the next time we perform the same actions, so the further we look into the future, the more the total future reward may diverge. For that reason it is common to use the discounted future reward, where the discount factor is a parameter between 0 and 1.
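Written out (a standard formulation, added here for concreteness), with discount factor \gamma and reward r_t at time step t, the discounted future reward is

R_t = r_t + \gamma r_{t+1} + \gamma^2 r_{t+2} + \dots = \sum_{k=0}^{\infty} \gamma^k r_{t+k}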

A good strategy for an agent would be to always choose an action that maximizes the (discounted) future reward. In other words we want to maximize the expected reward per episode.
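The gist's own code is not shown in this preview. As a rough sketch of the cross entropy method under these definitions (episode_reward and every name below are illustrative assumptions, not the gist's API): sample candidate policy parameters from a Gaussian, score each by its episode reward, keep the elite fraction, and refit the Gaussian to the elites.

import numpy as np

def cem(episode_reward, dim, n_iters=50, batch_size=50, elite_frac=0.2):
    """Cross entropy method sketch: sample, evaluate, keep elites, refit."""
    mu, sigma = np.zeros(dim), np.ones(dim)
    n_elite = int(batch_size * elite_frac)
    for _ in range(n_iters):
        thetas = mu + sigma * np.random.randn(batch_size, dim)       # candidate parameters
        rewards = np.array([episode_reward(theta) for theta in thetas])
        elites = thetas[rewards.argsort()[-n_elite:]]                # best-performing candidates
        mu = elites.mean(axis=0)                                     # refit the sampling distribution
        sigma = elites.std(axis=0) + 1e-3                            # small extra noise avoids premature collapse
    return mu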

Interactive Machine Learning

Taught by Brad Knox at the MIT Media Lab in 2014. Course website. Lecture and visiting speaker notes.

@saliksyed
saliksyed / autoencoder.py
Created November 18, 2015 03:30
Tensorflow Auto-Encoder Implementation
""" Deep Auto-Encoder implementation
An auto-encoder works as follows:
Data of dimension k is reduced to a lower dimension j using a matrix multiplication:
softmax(W*x + b) = x'
where W is a matrix from R^k --> R^j
A reconstruction matrix W' maps back from R^j --> R^k
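The preview stops inside the docstring. As an illustration of the mapping it describes (not the gist's actual network; the dimensions and names below are assumed), a minimal single-layer sketch in TF1-style graph code:

import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

k, j = 784, 64                                                # assumed input / code dimensions
x = tf.placeholder(tf.float32, [None, k])

W = tf.Variable(tf.truncated_normal([k, j], stddev=0.1))      # encoder weights, R^k -> R^j
b = tf.Variable(tf.zeros([j]))
W_rec = tf.Variable(tf.truncated_normal([j, k], stddev=0.1))  # reconstruction weights, R^j -> R^k
b_rec = tf.Variable(tf.zeros([k]))

h = tf.nn.softmax(tf.matmul(x, W) + b)                        # encoding, as in the docstring above
x_rec = tf.matmul(h, W_rec) + b_rec                           # reconstruction back to R^k
loss = tf.reduce_mean(tf.square(x - x_rec))                   # mean squared reconstruction error
train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)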
@myungsub
myungsub / iccv2015.md
Last active May 17, 2017 10:23
upload candidates to awesome-deep-vision

Vision & Language

  • Ask Your Neurons: A Neural-Based Approach to Answering Questions About Images

    • Mateusz Malinowski, Marcus Rohrbach, Mario Fritz
  • Aligning Books and Movies: Towards Story-Like Visual Explanations by Watching Movies and Reading Books

    • Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, Sanja Fidler
  • Learning Query and Image Similarities With Ranking Canonical Correlation Analysis

    • Chong-Wah Ngo

@karpathy
karpathy / gist:587454dc0146a6ae21fc
Last active May 16, 2024 19:55
An efficient, batched LSTM.
"""
This is a batched LSTM forward and backward pass
"""
import numpy as np
import code
class LSTM:

    @staticmethod
    def init(input_size, hidden_size, fancy_forget_bias_init = 3):
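        # --- sketch continuation below; assumed for illustration, not the gist's verbatim body ---
        # pack input weights, hidden weights, and a bias row for all four gates into one matrix
        WLSTM = np.random.randn(input_size + hidden_size + 1, 4 * hidden_size) \
                / np.sqrt(input_size + hidden_size)
        WLSTM[0, :] = 0                                   # biases start at zero
        if fancy_forget_bias_init != 0:
            # a positive forget-gate bias keeps the forget gates open early in training
            WLSTM[0, hidden_size:2 * hidden_size] = fancy_forget_bias_init
        return WLSTM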
@debasishg
debasishg / gist:b4df1648d3f1776abdff
Last active January 20, 2021 12:15
another attempt to organize my ML readings ..
  1. Feature Learning
  1. Deep Learning