import torch
import torch.nn as nn
import torch.nn.functional as F
class model(nn.Module):
    def __init__(self):
        super(model, self).__init__()
        self.conv1 = nn.Conv1d(9, 18, kernel_size=3)   # 9 input channels, 18 output channels
        self.conv2 = nn.Conv1d(18, 36, kernel_size=3)  # 18 input channels from previous Conv. layer, 36 out
        self.conv2_drop = nn.Dropout2d()               # dropout
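A quick shape check for the two Conv1d layers above -- a sketch only, since the preview shows just the start of the class; the batch size and sequence length are hypothetical.
# Sketch: pass a dummy (batch, channels, length) tensor through the layers defined above.
x = torch.randn(4, 9, 32)         # 9 input channels, hypothetical length 32
m = model()
h = F.relu(m.conv1(x))            # kernel_size=3 shrinks length by 2 -> (4, 18, 30)
h = F.relu(m.conv2(h))            # -> (4, 36, 28)
print(h.shape)                    # torch.Size([4, 36, 28])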
@vanangamudi
vanangamudi / tf.cond.py
Created May 23, 2017 13:19
Tensorflow conditional statement
import tensorflow as tf
training = tf.placeholder(dtype=tf.bool, name='is_training')
a = tf.placeholder(dtype=tf.float32, name='a')
b = tf.placeholder(dtype=tf.float32, name='b')
c = tf.cond(training, lambda : a+b, lambda : a*b)
sess = tf.Session()
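A minimal usage sketch for the conditional above, assuming the TF 1.x feed_dict API shown: with training=True the a+b branch runs, otherwise the a*b branch.
# Sketch: evaluate both branches of tf.cond.
print(sess.run(c, feed_dict={training: True, a: 2.0, b: 3.0}))   # 5.0 (a + b)
print(sess.run(c, feed_dict={training: False, a: 2.0, b: 3.0}))  # 6.0 (a * b)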
@greydanus
greydanus / rl_pong.py
Last active November 24, 2018 12:11
Solves Pong with Policy Gradients in Tensorflow.
'''Solves Pong with Policy Gradients in Tensorflow.'''
# written October 2016 by Sam Greydanus
# inspired by karpathy's gist.github.com/karpathy/a4166c7fe253700972fcbc77e4ea32c5
import numpy as np
import gym
import tensorflow as tf
# hyperparameters
n_obs = 80 * 80 # dimensionality of observations
h = 200 # number of hidden layer neurons
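A sketch of how a two-layer policy network with these dimensions (n_obs inputs, h hidden units) might be built in TF 1.x; the variable names are illustrative, not necessarily the gist's exact graph.
# Sketch: observations -> ReLU hidden layer of size h -> probability of the UP action.
tf_x = tf.placeholder(tf.float32, [None, n_obs], name="x")
W1 = tf.Variable(tf.random_normal([n_obs, h], stddev=1.0 / np.sqrt(n_obs)))
W2 = tf.Variable(tf.random_normal([h, 1], stddev=1.0 / np.sqrt(h)))
hidden = tf.nn.relu(tf.matmul(tf_x, W1))
up_prob = tf.nn.sigmoid(tf.matmul(hidden, W2))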
@shanest
shanest / cartpole_pg.py
Last active February 17, 2020 03:26
Policy gradients for reinforcement learning in TensorFlow (OpenAI gym CartPole environment)
#!/usr/bin/env python
import gym
import numpy as np
import tensorflow as tf
class PolicyGradientAgent(object):
    def __init__(self, hparams, sess):
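A sketch of the surrounding rollout loop such an agent needs; it assumes the classic (pre-0.26) Gym step API, and agent.act is a hypothetical method standing in for whatever the class exposes for sampling actions.
# Sketch: collect one episode of (observation, action, reward) tuples for a policy-gradient update.
env = gym.make('CartPole-v0')
obs, done, trajectory = env.reset(), False, []
while not done:
    action = agent.act(obs)                       # hypothetical: sample from the current policy
    next_obs, reward, done, _ = env.step(action)  # classic Gym API returns a 4-tuple
    trajectory.append((obs, action, reward))
    obs = next_obs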
@karpathy
karpathy / pg-pong.py
Created May 30, 2016 22:50
Training a Neural Network ATARI Pong agent with Policy Gradients from raw pixels
""" Trains an agent with (stochastic) Policy Gradients on Pong. Uses OpenAI Gym. """
import numpy as np
import cPickle as pickle
import gym
# hyperparameters
H = 200 # number of hidden layer neurons
batch_size = 10 # every how many episodes to do a param update?
learning_rate = 1e-4
gamma = 0.99 # discount factor for reward
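The gamma above feeds the usual discounted-return computation; a generic numpy sketch of that step (not necessarily the gist's exact function, which handles Pong game boundaries as well):
# Sketch: discounted returns G_t = sum_k gamma^k * r_{t+k}, computed backwards in time.
def discount_rewards(r, gamma=0.99):
    discounted = np.zeros_like(r, dtype=np.float64)
    running_add = 0.0
    for t in reversed(range(len(r))):
        running_add = running_add * gamma + r[t]
        discounted[t] = running_add
    return discounted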
@paulirish
paulirish / what-forces-layout.md
Last active May 12, 2024 13:20
What forces layout/reflow. The comprehensive list.

What forces layout / reflow

All of the below properties or methods, when requested/called in JavaScript, will trigger the browser to synchronously calculate the style and layout*. This is also called reflow or layout thrashing, and is a common performance bottleneck.

Generally, all APIs that synchronously provide layout metrics will trigger forced reflow / layout. Read on for additional cases and details.

Element APIs

Getting box metrics
  • elem.offsetLeft, elem.offsetTop, elem.offsetWidth, elem.offsetHeight, elem.offsetParent
@karpathy
karpathy / min-char-rnn.py
Last active May 11, 2024 20:19
Minimal character-level language model with a Vanilla Recurrent Neural Network, in Python/numpy
"""
Minimal character-level Vanilla RNN model. Written by Andrej Karpathy (@karpathy)
BSD License
"""
import numpy as np
# data I/O
data = open('input.txt', 'r').read() # should be simple plain text file
chars = list(set(data))
data_size, vocab_size = len(data), len(chars)
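The natural next step after the data I/O above is mapping characters to integer indices and back, and one-hot encoding inputs for the RNN; a minimal sketch (the dictionary names are illustrative):
# Sketch: index <-> character lookups and a one-hot input vector.
char_to_ix = {ch: i for i, ch in enumerate(chars)}
ix_to_char = {i: ch for i, ch in enumerate(chars)}
x = np.zeros((vocab_size, 1))
x[char_to_ix[data[0]]] = 1        # one-hot encoding of the first character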
@ChadFulton
ChadFulton / estimate_rbc.ipynb
Last active March 8, 2023 08:35
Estimating a Real Business Cycle DSGE Model by Maximum Likelihood in Python
@tsiege
tsiege / The Technical Interview Cheat Sheet.md
Last active May 14, 2024 04:49
This is my technical interview cheat sheet. Feel free to fork it or do whatever you want with it. PLEASE let me know if there are any errors or if anything crucial is missing. I will add more links soon.

ANNOUNCEMENT

I have moved this over to the Tech Interview Cheat Sheet Repo, where it has been expanded and even has code challenges you can run and practice against!