Soumith Chintala (soumith)

@soumith
soumith / pytorch_api_categorization.md
Last active November 16, 2018 06:17 — forked from ailzhang/pytorch_api_level.md
Pytorch API categorization.md

Torch level 1

| function | Symbolic_implemented |
| --- | --- |
| gather | |
| equal | |
| `__and__`, `__iand__`, `__or__`, `__ior__`, `__xor__`, `__ixor__`, `__lshift__`, `__ilshift__`, `__rshift__`, `__irshift__` | |
| min, max | |
| all | |
| any | |
| frac | yes |
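For context, a few of the level-1 ops above as plain PyTorch calls (illustration only, not part of the gist):

```python
import torch

x = torch.tensor([[1.5, -2.25], [3.75, 4.0]])
idx = torch.tensor([[0, 0], [1, 0]])

torch.gather(x, 1, idx)  # pick elements along dim 1 by index
torch.equal(x, x)        # True: same size and same elements
x.min(), x.max()         # tensor(-2.2500), tensor(4.)
x.frac()                 # fractional part of each element
```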
@soumith
soumith / out.log
Created February 12, 2018 21:57 — forked from anonymous/out.log
[WARNING]: No mapping options supplied. 'Naive' options will be used which might fail compilation
[WARNING]: Autotuning results won't be cached. 'cache' option is not specified
[WARNING]: Using naive options for autotuning
// Integer floor division: corrects C's truncating '/' for negative n.
template<typename T> inline __device__ T floord(T n, T d) {
  return n < 0 ? - (-n + d - 1)/d : n / d;
}
// Halide type handling
typedef int int32;
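The floord helper exists because C's integer `/` truncates toward zero rather than flooring; a quick Python check (a sketch, not part of the log) shows the corrected form matches true floor division:

```python
def floord(n, d):
    # emulate C's truncating integer '/', then apply the same correction
    trunc = lambda a, b: int(a / b)
    return -trunc(-n + d - 1, d) if n < 0 else trunc(n, d)

for n in (-5, -1, 0, 1, 5):
    assert floord(n, 4) == n // 4  # Python's // is already floor division
```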
import torch
import torch.nn as nn
import torch.nn.parallel

# nn.Container is the early-PyTorch precursor of today's nn.Module.
class DCGAN_D(nn.Container):
    def __init__(self, isize, nz, nc, ndf, ngpu, n_extra_layers=0):
        super(DCGAN_D, self).__init__()
        self.ngpu = ngpu
        assert isize % 16 == 0, "isize has to be a multiple of 16"
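        # A sketch of the rest, following the standard DCGAN discriminator
        # recipe; the n_extra_layers blocks are omitted and the details here
        # are assumptions, not the gist's exact code.
        main = nn.Sequential()
        # input: nc x isize x isize -> ndf x isize/2 x isize/2
        main.add_module('initial_conv', nn.Conv2d(nc, ndf, 4, 2, 1, bias=False))
        main.add_module('initial_relu', nn.LeakyReLU(0.2, inplace=True))
        csize, cndf = isize // 2, ndf
        # halve the spatial size (doubling channels) until the map is 4x4
        while csize > 4:
            main.add_module('pyramid_%d_conv' % cndf,
                            nn.Conv2d(cndf, cndf * 2, 4, 2, 1, bias=False))
            main.add_module('pyramid_%d_bn' % (cndf * 2),
                            nn.BatchNorm2d(cndf * 2))
            main.add_module('pyramid_%d_relu' % (cndf * 2),
                            nn.LeakyReLU(0.2, inplace=True))
            cndf, csize = cndf * 2, csize // 2
        # final 4x4 -> 1x1 conv yields a single score per image
        main.add_module('final_conv', nn.Conv2d(cndf, 1, 4, 1, 0, bias=False))
        self.main = main

    def forward(self, x):
        return self.main(x).view(-1, 1)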
import torch.multiprocessing as mp
from torch.multiprocessing import Semaphore
import sys

if sys.version_info[0] == 3:
    Barrier = mp.Barrier
else:  # version 2
    # from http://stackoverflow.com/a/26703365/117844
    class Barrier:
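        # A sketch of the rest, following the semaphore barrier from the
        # linked Stack Overflow answer (single use; mp.Value for the shared
        # counter is an assumption, not the gist's exact code).
        def __init__(self, n):
            self.n = n
            self.count = mp.Value('i', 0)  # arrival counter shared across processes
            self.mutex = Semaphore(1)      # guards the counter
            self.turnstile = Semaphore(0)  # opens once all n processes arrive

        def wait(self):
            self.mutex.acquire()
            self.count.value += 1
            last = (self.count.value == self.n)
            self.mutex.release()
            if last:
                self.turnstile.release()   # open the turnstile
            self.turnstile.acquire()       # block until opened
            self.turnstile.release()       # cascade: wake the next waiter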
@soumith
soumith / multiple_learning_rates.lua
Created May 26, 2016 21:35 — forked from farrajota/multiple_learning_rates.lua
Example code showing how to set a different learning rate per layer. Note that :parameters() returns a layer's weight and bias as separate, consecutive tensors, so for a network with N parameterized layers it outputs a table of N*2 tensors, where the i-th and (i+1)-th tensors belong to the same layer.
-- multiple learning rates per network. Optimizes two copies of a model network and checks if the optimization steps (2) and (3) produce the same weights/parameters.
require 'torch'
require 'nn'
require 'optim'
torch.setdefaulttensortype('torch.FloatTensor')
-- (1) Define a model for this example.
local model = nn.Sequential()
model:add(nn.Linear(10,20))
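The PyTorch analogue of this per-layer scheme (a sketch in today's API, not the Lua/optim mechanism used above) passes one parameter group per layer to the optimizer:

```python
import torch
import torch.nn as nn
import torch.optim as optim

model = nn.Sequential(nn.Linear(10, 20), nn.ReLU(), nn.Linear(20, 1))

# One param group per layer; each layer's weight and bias travel together,
# mirroring the 2-tensors-per-layer layout described above.
optimizer = optim.SGD([
    {'params': model[0].parameters(), 'lr': 0.1},
    {'params': model[2].parameters(), 'lr': 0.01},
], momentum=0.9)

x, y = torch.randn(4, 10), torch.randn(4, 1)
loss = nn.functional.mse_loss(model(x), y)
loss.backward()
optimizer.step()
```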
@soumith
soumith / gist:6011923
Last active December 19, 2015 20:09 — forked from culurciello/gist:5189137
#!/usr/bin/env torch
require 'nn'
require 'image'
require 'xlua'
require 'pl'
opt = lapp[[
   -t,--threads  (default 8)      number of threads
   -p,--type     (default float)  float or cuda
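A rough Python analogue of that lapp option block (argparse stand-in; flag names copied from the Lua script, defaults assumed to match):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('-t', '--threads', type=int, default=8,
                    help='number of threads')
parser.add_argument('-p', '--type', default='float', choices=['float', 'cuda'],
                    help='float or cuda')
opt = parser.parse_args()
```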