@jeasinema
jeasinema / weight_init.py
Last active Oct 21, 2019
A simple script for parameter initialization for PyTorch
#!/usr/bin/env python
# -*- coding:UTF-8 -*-
import torch
import torch.nn as nn
import torch.nn.init as init
def weight_init(m):
    '''
    Initialize a module's parameters; apply recursively with model.apply(weight_init).
    The Linear branch below is a minimal excerpt; the full gist covers many layer types.
    '''
    if isinstance(m, nn.Linear):
        init.xavier_normal_(m.weight)
        if m.bias is not None:
            init.zeros_(m.bias)
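The mechanism this gist relies on is `nn.Module.apply`, which calls a function on every submodule (children first, then the container). A minimal sketch of that traversal, using a made-up toy model rather than anything from the gist:

```python
import torch.nn as nn

# record which submodules Module.apply visits (toy model, not from the gist)
visited = []

def record(m):
    visited.append(type(m).__name__)

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
model.apply(record)
print(visited)  # each child module, then the container itself
```

Because `apply` visits every layer, a single `weight_init` function with `isinstance` branches can re-initialize an arbitrary model in one call.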
@Tushar-N
Tushar-N / pad_packed_demo.py
Last active Sep 28, 2019
How to use pad_packed_sequence in pytorch
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

seqs = ['gigantic_string', 'tiny_str', 'medium_str']

# make <pad> idx 0
vocab = ['<pad>'] + sorted(set(''.join(seqs)))

# make model (illustrative sizes)
embed = nn.Embedding(len(vocab), 10)
lstm = nn.LSTM(10, 5, batch_first=True)
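The demo's core idea can be sketched end to end as follows (embedding and hidden sizes are illustrative, not necessarily the gist's exact values): pad the variable-length sequences into one tensor, call `pack_padded_sequence` before the LSTM so it skips the padding, and call `pad_packed_sequence` afterwards to recover a padded output tensor:

```python
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

seqs = ['gigantic_string', 'tiny_str', 'medium_str']
vocab = ['<pad>'] + sorted(set(''.join(seqs)))  # <pad> is index 0
vectorized = [[vocab.index(ch) for ch in s] for s in seqs]
lengths = torch.tensor([len(v) for v in vectorized])

# pad into a (batch, max_len) tensor of vocab indices; 0 is <pad>
padded = torch.zeros(len(seqs), int(lengths.max()), dtype=torch.long)
for i, v in enumerate(vectorized):
    padded[i, :len(v)] = torch.tensor(v)

embed = nn.Embedding(len(vocab), 10)     # illustrative sizes
lstm = nn.LSTM(10, 5, batch_first=True)

# pack (enforce_sorted=False lets lengths stay in batch order), run, unpack
packed = pack_padded_sequence(embed(padded), lengths,
                              batch_first=True, enforce_sorted=False)
out_packed, _ = lstm(packed)
out, out_lengths = pad_packed_sequence(out_packed, batch_first=True)
# out: (batch, max_len, hidden); out_lengths matches the input lengths
```

Packing matters because the LSTM then only processes real timesteps, and the final hidden state for each sequence corresponds to its last real token rather than trailing padding.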
@karpathy
karpathy / pg-pong.py
Created May 30, 2016
Training a Neural Network ATARI Pong agent with Policy Gradients from raw pixels
""" Trains an agent with (stochastic) Policy Gradients on Pong. Uses OpenAI Gym. """
import numpy as np
import pickle  # the original 2016 script is Python 2 and uses cPickle
import gym
# hyperparameters
H = 200 # number of hidden layer neurons
batch_size = 10 # every how many episodes to do a param update?
learning_rate = 1e-4
gamma = 0.99 # discount factor for reward
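The `gamma` hyperparameter feeds the discounted-return step that the full script performs before the policy-gradient update. A minimal numpy sketch of that recurrence, G_t = r_t + gamma * G_{t+1} (the function body here is our illustration of the standard discounting, not the script's exact code, which also resets the running sum at Pong game boundaries):

```python
import numpy as np

gamma = 0.99  # discount factor for reward

def discount_rewards(r):
    """Compute discounted returns: G_t = r_t + gamma * G_{t+1}."""
    discounted = np.zeros_like(r, dtype=np.float64)
    running = 0.0
    for t in reversed(range(len(r))):
        running = running * gamma + r[t]
        discounted[t] = running
    return discounted

print(discount_rewards(np.array([0.0, 0.0, 1.0])))  # [0.9801 0.99 1.0]
```

Rewards earned late in an episode are thus propagated backwards, so actions taken earlier receive credit discounted by how far they are from the reward.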