William Falcon (williamFalcon)
@williamFalcon
williamFalcon / readme.md
Created December 15, 2016 19:27 — forked from baraldilorenzo/readme.md
VGG-19 pre-trained model for Keras

## VGG19 model for Keras

This is the Keras model of the 19-layer network used by the VGG team in the ILSVRC-2014 competition.

It has been obtained by directly converting the Caffe model provided by the authors.

Details about the network architecture can be found in the following arXiv paper:

Very Deep Convolutional Networks for Large-Scale Image Recognition

K. Simonyan, A. Zisserman
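
The gist itself loads the converted weights into a hand-built Sequential model. As a quick way to try the same architecture today, here is a minimal sketch using the `keras.applications` API that ships with TensorFlow; note these are Keras's own ImageNet weights, not this gist's Caffe-converted file, and the image filename is just a placeholder.

```python
# Minimal sketch: classify one image with the stock VGG19 from keras.applications.
import numpy as np
from tensorflow.keras.applications.vgg19 import VGG19, preprocess_input, decode_predictions
from tensorflow.keras.preprocessing import image

model = VGG19(weights="imagenet")  # Keras's ImageNet port, not the converted Caffe file

img = image.load_img("elephant.jpg", target_size=(224, 224))  # any RGB image
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

preds = model.predict(x)
print(decode_predictions(preds, top=3)[0])  # [(class_id, name, probability), ...]
```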

@williamFalcon
williamFalcon / Pytorch_LSTM_variable_mini_batches.py
Last active April 24, 2024 17:53
Simple batched PyTorch LSTM
import torch
import torch.nn as nn
from torch.nn import functional as F
"""
Blog post:
Taming LSTMs: Variable-sized mini-batches and why PyTorch is good for your health:
https://medium.com/@_willfalcon/taming-lstms-variable-sized-mini-batches-and-why-pytorch-is-good-for-your-health-61d35642972e
"""
def forward(self, X, X_lengths):
    # Reset the LSTM hidden state. This must be done before running a new batch;
    # otherwise the LSTM would treat the new batch as a continuation of the previous sequence.
    self.hidden = self.init_hidden()
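
The rest of `forward` is truncated in this capture. The technique the blog post describes is to embed the padded batch, pack it so the LSTM skips padded timesteps, then unpack afterwards. A minimal sketch follows; the layer names `self.embedding` and `self.lstm` and the `batch_first` layout are assumptions.

```python
# Hypothetical full forward(), assuming X is (batch, max_seq_len) of token ids
# and X_lengths holds the true (unpadded) length of each sequence.
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

def forward(self, X, X_lengths):
    self.hidden = self.init_hidden()

    X = self.embedding(X)  # (batch, max_seq_len, embed_dim)

    # Pack so the LSTM never runs over the padded timesteps.
    # X_lengths can be a Python list or a CPU tensor of lengths.
    X = pack_padded_sequence(X, X_lengths, batch_first=True, enforce_sorted=False)
    X, self.hidden = self.lstm(X, self.hidden)

    # Unpack back to a padded (batch, max_seq_len, hidden_dim) tensor.
    X, _ = pad_packed_sequence(X, batch_first=True)
    return X
```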
"""
Blog post:
Taming LSTMs: Variable-sized mini-batches and why PyTorch is good for your health:
https://medium.com/@_willfalcon/taming-lstms-variable-sized-mini-batches-and-why-pytorch-is-good-for-your-health-61d35642972e
"""
def loss(self, Y_hat, Y, X_lengths):
    # TRICK 3 ********************************
    # Before computing the negative log likelihood, mask out the activations
    # so that padded items in the output vector do not count toward the loss.
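
The body of `loss` is also truncated here. A sketch of the masking trick, assuming `Y_hat` is `(batch, max_seq_len, n_tags)` of log-probabilities and that tag id 0 is the padding token (both assumptions):

```python
from torch.nn import functional as F

# Hypothetical masked-NLL implementation; the pad token id (0) is an assumption.
def loss(self, Y_hat, Y, X_lengths):
    # Flatten everything so each row is one token.
    Y = Y.view(-1)                           # (batch * max_seq_len,)
    Y_hat = Y_hat.view(-1, Y_hat.size(-1))   # (batch * max_seq_len, n_tags)

    # Keep only the positions that are not padding.
    mask = Y > 0
    Y = Y[mask]
    Y_hat = Y_hat[mask]

    # Negative log likelihood of the correct tag, averaged over real tokens only.
    return F.nll_loss(Y_hat, Y)
```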
@williamFalcon
williamFalcon / template.py
Created July 29, 2019 01:45
PTL template
import os
import torch
from torch.nn import functional as F
from torch.utils.data import DataLoader
from torchvision.datasets import MNIST
import torchvision.transforms as transforms
import pytorch_lightning as ptl
class CoolModel(ptl.LightningModule):
    ...  # the module body is elided in this capture
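
As a reference, here is a minimal sketch of what such a module typically defines, written against the modern `pytorch_lightning` API (method names like `training_step(self, batch, batch_idx)` postdate this gist's 0.x-era code below):

```python
# Hypothetical minimal body; the 0.x API this gist targets named some hooks differently.
class CoolModel(ptl.LightningModule):
    def __init__(self):
        super().__init__()
        self.l1 = torch.nn.Linear(28 * 28, 10)

    def forward(self, x):
        return self.l1(x.view(x.size(0), -1))

    def training_step(self, batch, batch_idx):
        x, y = batch
        return F.cross_entropy(self(x), y)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)

    def train_dataloader(self):
        dataset = MNIST(os.getcwd(), train=True, download=True,
                        transform=transforms.ToTensor())
        return DataLoader(dataset, batch_size=32, shuffle=True)
```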
from pytorch_lightning import Trainer
from test_tube import Experiment

model = CoolModel()
exp = Experiment(save_dir=os.getcwd())

# Train on CPU using only 10% of the data (for demo purposes).
# max_nb_epochs and train_percent_check are 0.x-era Trainer arguments.
trainer = Trainer(experiment=exp, max_nb_epochs=1, train_percent_check=0.1)
trainer.fit(model)

# Train on 4 GPUs. The 0.x Trainer took a gpus argument; the exact form here is an assumption.
# trainer = Trainer(experiment=exp, max_nb_epochs=1, gpus=[0, 1, 2, 3])

# The general pattern is always the same (pseudocode: substitute your own subclass):
model = LightningModule(…)
trainer = Trainer()
trainer.fit(model)
# Inside the module, the dataloader is plain PyTorch:
dataset = MNIST(root=self.hparams.data_root, train=train, download=True)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

# Lightning then drives the training loop for you, roughly:
for batch in loader:
    x, y = batch
    model.training_step(x, y)
    ...
# slow
loader = DataLoader(dataset, batch_size=32, shuffle=True)
# fast (use 10 workers)
loader = DataLoader(dataset, batch_size=32, shuffle=True, num_workers=10)
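
The `num_workers` argument spins up that many worker processes to prepare batches in parallel, so the GPU is not left waiting on data loading; the right count depends on the machine, and 10 here is just an example.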