Keith Gould (keithmgould)

  • New York
@keithmgould
keithmgould / transport.core.js
Created May 20, 2012 15:11
RequireJS module for Socket.IO
define(["constants"], function (constants) {
  var socket = io.connect(constants.socketURL),
      sendables = ["new-message"];
  return {
    listen : function (callback) {
      socket.on("new-message", function (msg) {
        callback({ type : "receive-message", data : msg });
      });
    },
    // preview truncated here; a send() gated by `sendables` is a natural completion (sketch):
    send : function (type, data) {
      if (sendables.indexOf(type) !== -1) { socket.emit(type, data); }
    }
  };
});
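For context, a consumer would pull the module in through RequireJS roughly like this (the "transport.core" module id is a guess from the filename):

require(["transport.core"], function (transport) {
  transport.listen(function (evt) {
    console.log(evt.type, evt.data); // "receive-message" plus the payload
  });
});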
@keithmgould
keithmgould / core.transport.socket.io
Created May 20, 2012 15:23
transport core extension
CHAT.namespace("CHAT.CORE");
CHAT.CORE.transport = (function () {
  var socket = io.connect("http://chat.local:3000"),
      sendables = ["new-message"];
  // not happy with the location of this functionality
  socket.on("new-message", function (msg) {
    CHAT.CORE.modules.emit({ type : "receive-message", data : msg });
  });
  // preview truncated here; presumably the IIFE returns the public API, sketched as:
  return {
    send : function (type, data) {
      if (sendables.indexOf(type) !== -1) { socket.emit(type, data); }
    }
  };
}());
alias undeployed="heroku releases -a APP_NAME | sed -n 2p | cut -d' ' -f4 | xargs -J % git log --oneline --decorate --color --graph master --not %"
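Reading the pipe left to right: `heroku releases -a APP_NAME` lists releases newest first, `sed -n 2p` grabs the first data row (the current release), `cut -d' ' -f4` pulls out what should be the deployed commit SHA (the fourth space-delimited field), and `xargs -J %` (BSD xargs) splices that SHA into the git command, so `git log master --not <deployed-sha>` shows every commit on master that has not shipped yet.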

OK. In an effort to learn this stuff (on a Saturday night, of course), I also built the tables and indexes and created fake data.

Here is the big realization: the join is expensive and pointless. It is not needed for the heavy lifting (the counting and sorting). Split the work into two queries (or use a subquery).

First, let's do it the original way:

# EXPLAIN ANALYZE SELECT races.id, races.title, count(participants.id) AS participant_count
FROM races
   INNER JOIN participants ON races.id = participants.race_id
-- preview truncated here; presumably it continues along the lines of:
GROUP BY races.id, races.title
ORDER BY participant_count DESC;
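And a sketch of the split-up version the realization points at, using only the columns shown above (the LIMIT is an assumption about the use case):

-- step 1: do the heavy lifting (count/sort) against participants alone
SELECT race_id, count(id) AS participant_count
FROM participants
GROUP BY race_id
ORDER BY participant_count DESC;

-- or fold it into a subquery, so the join only touches the rows that survive:
SELECT races.id, races.title, counts.participant_count
FROM races
   INNER JOIN (
     SELECT race_id, count(id) AS participant_count
     FROM participants
     GROUP BY race_id
     ORDER BY participant_count DESC
     LIMIT 10
   ) AS counts ON counts.race_id = races.id
ORDER BY counts.participant_count DESC;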
@keithmgould
keithmgould / myinit.m
Created May 14, 2017 23:11
MATLAB init for a Simulink biped robot
% Taken from the robot
mp  = 1.793;       % mass of the pendulum (kg)
mw  = 0.156;       % mass of the wheel (kg)
Len = 0.209;       % length to center of gravity (m)
r   = 0.042;       % radius of the wheel (m)
ip  = 0.07832;     % inertia of the pendulum around the wheel axis (kg*m^2)
iw  = 0.000172125; % inertia of the wheel around the wheel axis (kg*m^2)
% Taken from Pololu
Rs  = 0.0024;      % resistance of the DC motor (ohms)
@keithmgould
keithmgould / balancer.m
Last active May 28, 2017 19:10
MATLAB code for setting up Simulink control; observes only x and theta
% Declare constant values
% Taken from Beaker2 (the robot)
mp  = 1.793;       % mass of the pendulum (kg)
mw  = 0.156;       % mass of the wheel (kg)
Len = 0.209;       % length to center of gravity (m)
r   = 0.042;       % radius of the wheel (m)
ip  = 0.07832;     % inertia of the pendulum around the wheel axis (kg*m^2)
iw  = 0.000172125; % inertia of the wheel around the wheel axis (kg*m^2)
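The description says only x and theta are observed, which suggests a state-feedback-plus-observer setup. A minimal sketch of that shape, assuming Control System Toolbox and placeholder A/B matrices (the real ones come from linearizing the pendulum dynamics with the constants above, which the preview does not show):

% sketch only: A and B are placeholders, not the linearized biped model
A = [0 1 0 0; 0 0 1 0; 0 0 0 1; 0 0 0 0];
B = [0; 1; 0; 1];
C = [1 0 0 0;   % measure x (state 1) ...
     0 0 1 0];  % ... and theta (state 3) only
K = lqr(A, B, eye(4), 1);               % state-feedback gain
L = place(A', C', [-10 -11 -12 -13])';  % observer gain for the two outputs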
@keithmgould
keithmgould / pg-pong.py
Created October 26, 2017 20:58 — forked from karpathy/pg-pong.py
Training a Neural Network ATARI Pong agent with Policy Gradients from raw pixels
""" Trains an agent with (stochastic) Policy Gradients on Pong. Uses OpenAI Gym. """
import numpy as np
import cPickle as pickle
import gym
# hyperparameters
H = 200 # number of hidden layer neurons
batch_size = 10 # every how many episodes to do a param update?
learning_rate = 1e-4
gamma = 0.99 # discount factor for reward
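The preview stops at the hyperparameters; the "from raw pixels" part of the description is handled by a frame-preprocessing step further down in the full gist. A sketch of that kind of step (illustrative, not quoted; the magic numbers are Pong-specific guesses):

def prepro(I):
  """ Sketch: crop a 210x160x3 Atari frame to an 80x80 binary vector. """
  I = I[35:195]       # crop to the play area
  I = I[::2, ::2, 0]  # downsample by 2 and drop color
  I[I == 144] = 0     # erase one background shade
  I[I == 109] = 0     # erase the other background shade
  I[I != 0] = 1       # paddles and ball become 1
  return I.astype(float).ravel()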
@keithmgould
keithmgould / discount.py
Created December 18, 2017 20:25
REINFORCE discounting
import numpy as np

gamma = 0.99

def discount_rewards(r):
  """ Compute discounted returns for a 1D array of per-step rewards. """
  discounted_r = np.zeros_like(r)
  running_add = 0
  for t in reversed(range(0, r.size)):
    running_add = running_add * gamma + r[t]
    # preview truncated here; storing the running sum and returning it is the natural completion:
    discounted_r[t] = running_add
  return discounted_r
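A quick sanity check of the completed function, with a single reward at the end of a three-step episode:

r = np.array([0.0, 0.0, 1.0])
print(discount_rewards(r))  # [0.9801 0.99   1.    ]  i.e. 1*0.99^2, 1*0.99, 1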
@keithmgould
keithmgould / cartpole_pg.py
Last active December 22, 2017 22:49 — forked from shanest/cartpole_pg.py
Policy gradients for reinforcement learning in TensorFlow (OpenAI gym CartPole environment)
#!/usr/bin/env python
import gym
import numpy as np
import tensorflow as tf
from tensorflow.python.ops import random_ops

def _initializer(shape, dtype=tf.float32, partition_info=None):
  return random_ops.random_normal(shape)
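For context, an initializer with this (shape, dtype, partition_info) signature is what the TF1-era tf.get_variable expects; a hedged sketch of how it might be wired up (the scope and layer shape are invented):

# hypothetical usage; TF1-style variable scoping assumed
with tf.variable_scope("policy"):
  w1 = tf.get_variable("w1", shape=[4, 16], initializer=_initializer)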