Joost van Amersfoort (y0ast)

y0ast / Faster MNIST.md
Last active Jul 23, 2019
Train 2-3x faster on MNIST, with much less CPU usage, by making a few simple changes to the PyTorch-provided dataset.

The PyTorch MNIST dataset is SLOW by default, because it conforms to the usual interface of returning a PIL image for every sample. This is unnecessary if you just want normalized MNIST and don't need image transforms (such as rotation or cropping). By folding the normalization into the dataset initialization, you can save your CPU and speed up training by 2-3x.

The bottleneck when training on MNIST with a GPU and a small-ish model is the CPU. In fact, even with six dataloader workers on a six-core i7, GPU utilization is only ~5-10%. Using FastMNIST increases GPU utilization to ~20-25% and reduces CPU utilization to near zero. On my particular model, steps per second with batch size 64 went from ~150 to ~500.

Instead of the default MNIST dataset, use this:

import torch
from torchvision.datasets import MNIST
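
The snippet is truncated in this capture; what follows is a minimal sketch of the FastMNIST idea, not the gist's verbatim code. It assumes a device variable and the standard MNIST normalization constants (mean 0.1307, std 0.3081).

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

class FastMNIST(MNIST):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)

        # Scale the raw uint8 images to [0, 1] and add a channel dimension
        self.data = self.data.unsqueeze(1).float().div(255)
        # Normalize with the standard MNIST mean and std (assumed constants)
        self.data = self.data.sub_(0.1307).div_(0.3081)

        # Move data and targets to the GPU once, up front
        self.data, self.targets = self.data.to(device), self.targets.to(device)

    def __getitem__(self, index):
        # Return tensors directly, skipping the PIL round-trip and transforms
        return self.data[index], self.targets[index]

Because the tensors already live on the GPU, create the DataLoader with num_workers=0 and pin_memory=False; extra workers and pinned memory only help when batches must be copied from the CPU. For example:

train_dataset = FastMNIST("data", train=True, download=True)
train_loader = torch.utils.data.DataLoader(
    train_dataset, batch_size=64, shuffle=True,
    num_workers=0, pin_memory=False
)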
y0ast / keybase.md

Keybase proof

I hereby claim:

  • I am y0ast on github.
  • I am joostvamersfoort (https://keybase.io/joostvamersfoort) on keybase.
  • I have a public key whose fingerprint is 27A6 D6DC 0B9C CCE9 2AC4 25AB 2132 0579 C58E 9A16

To claim this, I am signing this object:

y0ast / Tutorial.md
Last active Nov 23, 2015
Tutorial for using Torch7 on Amazon EC2 GPUs

There used to be a tutorial here for using Torch7 on EC2, but it is now outdated. It's best to use an EC2 image that already has Torch7 and CUDA preinstalled.