Tony Poerio (adpoe)
makin' the codes.
@adpoe
adpoe / treeTraversals.hs
Created June 16, 2016 05:30
Haskell Binary Tree Traversals
-- Do Tree Traversals and Build a Visitation List for each
preorder :: BinaryTree a -> [a]
preorder Leaf = []
preorder (Node left root right) = root : preorder left ++ preorder right
-- NOTE: Need to use the ++ so each list gets built separately and then concatenated
-- after it hits bottom
inorder :: BinaryTree a -> [a]
inorder Leaf = []
inorder (Node left root right) = inorder left ++ [root] ++ inorder right
@adpoe
adpoe / svm_and_sift.ipynb
Last active December 24, 2016 22:56
SVM and SIFT Nature Conservatory Kaggle Entry
(Notebook preview not available.)
@adpoe
adpoe / keras_for_nature_conservatory_kaggle.ipynb
Last active December 24, 2016 01:47
A Quick Start Guide for Using Keras in the Nature Conservatory Kaggle
(Notebook preview not available.)
@adpoe
adpoe / mv_grad_desc.py
Created December 16, 2016 01:31
Multivariate Gradient Descent in Python
def multivariate_gradient_descent(training_examples, alpha=0.01):
    """
    Apply gradient descent on the training examples to learn a line that fits through the examples
    :param training_examples: set of all examples in (x, y) format
    :param alpha: learning rate
    :return:
    """
    # initialize the weight vector, one weight per feature in x
    W = [0 for index in range(0, len(training_examples[0][0]))]
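The preview ends at the weight initialization. A minimal sketch of how the remaining loop could look, assuming each example is an (x_vector, y) pair and a squared-error loss; this is an illustration, not the rest of the gist:

def multivariate_gd_sketch(training_examples, alpha=0.01, passes=1000):
    # one weight per feature in x_vector
    W = [0.0] * len(training_examples[0][0])
    for _ in range(passes):
        for x_vector, y in training_examples:
            # prediction of the current linear model
            prediction = sum(w * x for w, x in zip(W, x_vector))
            error = prediction - y
            # move each weight against its partial derivative of the squared error
            W = [w - alpha * error * x for w, x in zip(W, x_vector)]
    return W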
@adpoe
adpoe / uni_gd.py
Created December 16, 2016 01:12
Univariate Gradient Descent in Python
def gradient_descent(training_examples, alpha=0.01):
    """
    Apply gradient descent on the training examples to learn a line that fits through the examples
    :param training_examples: set of all examples in (x, y) format
    :param alpha: learning rate
    :return:
    """
    # initialize w0 and w1 to some small value, here just using 0 for simplicity
    w0 = 0
    w1 = 0
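The loop itself is cut off in the preview. A minimal sketch of a batch update for the line hypothesis w0 + w1 * x under squared error, again an illustration rather than the gist's own loop:

def univariate_gd_sketch(training_examples, alpha=0.01, passes=1000):
    w0, w1 = 0.0, 0.0
    n = len(training_examples)
    for _ in range(passes):
        # average gradients of the squared error over all examples
        grad_w0 = sum((w0 + w1 * x - y) for x, y in training_examples) / n
        grad_w1 = sum((w0 + w1 * x - y) * x for x, y in training_examples) / n
        w0 -= alpha * grad_w0
        w1 -= alpha * grad_w1
    return w0, w1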
@adpoe
adpoe / flappy_states.py
Created December 15, 2016 01:27
State Representations for a Flappy Bird AI
# first value in state tuple
height_category = 0
dist_to_pipe_bottom = pipe_bottom - bird.y
if dist_to_pipe_bottom < 8:  # very close
    height_category = 0
elif dist_to_pipe_bottom < 20:  # close
    height_category = 1
elif dist_to_pipe_bottom < 125:  # mid
    height_category = 2
elif dist_to_pipe_bottom < 250:  # far
    height_category = 3
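The comment marks this as only the first value in the state tuple. A hedged illustration of how a second component (say, horizontal distance to the next pipe) might be bucketed the same way and combined into a state key; the thresholds and attribute names here (pipe.x, bird.x) are hypothetical, not taken from the gist:

# second value in state tuple (illustrative thresholds only)
dist_category = 0
dist_to_pipe = pipe.x - bird.x
if dist_to_pipe < 40:  # very close
    dist_category = 0
elif dist_to_pipe < 140:  # mid
    dist_category = 1
else:  # far
    dist_category = 2

state = (height_category, dist_category)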
@adpoe
adpoe / energy.m
Created September 27, 2016 16:33
Energy Function - Matlab
function [ energy_matrix ] = energy_image( image_matrix_input )
%ENERGY_IMAGE Computes the energy at each pixel of an n-by-m-by-3 image matrix
%   Outputs a 2D matrix containing energy equation outputs, of datatype DBL
    % convert image to grayscale first
    G = rgb2gray(image_matrix_input);
    % convert to double
    G2 = im2double(G);
    % (assumed completion, the preview stops above) gradient-magnitude energy
    [dx, dy] = gradient(G2);
    energy_matrix = abs(dx) + abs(dy);
end
@adpoe
adpoe / binaryTree.hs
Created June 16, 2016 05:27
Binary Tree in Haskell
data BinaryTree a =
Leaf
| Node (BinaryTree a) a (BinaryTree a)
deriving (Eq, Ord, Show)
@adpoe
adpoe / quicksort.hs
Created June 16, 2016 04:45
Very simple Quicksort Implementation from "Learn You a Haskell For Great Good!"
quicksort :: (Ord a) => [a] -> [a]
quicksort [] = []
quicksort (x:xs) =
let smallerSorted = quicksort [a | a <- xs, a <= x]
biggerSorted = quicksort [a | a <- xs, a > x]
in smallerSorted ++ [x] ++ biggerSorted
@adpoe
adpoe / std_normal.py
Created March 25, 2016 14:50
Standard Normal Random Variates
# PROCEDURE, From ROSS: Simulation (5th Edition) Page 78
# Step 1: Generate Y1, an exponential random variable with rate 1
Y1 = gen_exponential_distro_rand_variable()
# Step 2: Generate Y2, an exponential random variable with rate 1
Y2 = gen_exponential_distro_rand_variable()
# Step 3: If Y2 - (Y1 - 1)^2/2 > 0, set Y = Y2 - (Y1 - 1)^2/2, and go to Step 4 (accept)
#         Otherwise, go to Step 1 (reject)
subtraction_value = math.pow(Y1 - 1, 2) / 2
critical_value = Y2 - subtraction_value
if critical_value > 0:
    Y = critical_value  # accept and continue to Step 4
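The preview cuts off at the accept test. A self-contained sketch of the full procedure, using random.expovariate in place of the gist's gen_exponential_distro_rand_variable helper (which is not shown) and attaching a random sign to turn |Z| into a standard normal:

import random

def standard_normal_rejection():
    """One standard normal variate via Ross's exponential rejection method."""
    while True:
        y1 = random.expovariate(1.0)  # Step 1: Exp(1) proposal
        y2 = random.expovariate(1.0)  # Step 2: Exp(1) for the accept test
        if y2 - (y1 - 1) ** 2 / 2 > 0:  # Step 3: accept or reject
            # accepted y1 is distributed as |Z|; Step 4: attach a random sign
            return y1 if random.random() < 0.5 else -y1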