A guide on Colab TPU training using PyTorch XLA (Part 2)
# download and install PyTorch XLA
# (in Colab, first select Runtime > Change runtime type > Hardware accelerator: TPU;
#  this wheel is pinned to torch 1.8.1 and Colab's Python 3.7 runtime)
!pip install cloud-tpu-client==0.10 https://storage.googleapis.com/tpu-pytorch/wheels/torch_xla-1.8.1-cp37-cp37m-linux_x86_64.whl
# basic torch sub-modules (feel free to add more, e.g. einops, time, random)
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
# TPU-specific libraries (must-haves)
import torch_xla
import torch_xla.core.xla_model as xm                    # core TPU device and optimizer-step API
import torch_xla.debug.metrics as met                    # compilation/execution metrics reports
import torch_xla.distributed.parallel_loader as pl       # preloads batches onto TPU cores
import torch_xla.distributed.xla_multiprocessing as xmp  # spawns one process per TPU core
import torch_xla.utils.utils as xu                       # misc utilities (e.g. SampleGenerator)
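
# Below is a minimal sketch of how these modules fit together for training
# on a single TPU core. The toy data, model, and hyperparameters are
# placeholders (not part of the original guide); swap in your own dataset
# and nn.Module.
import torch.utils.data as data

xs = torch.randn(512, 10)             # hypothetical toy inputs
ys = torch.randint(0, 2, (512,))      # hypothetical toy labels
train_loader = data.DataLoader(data.TensorDataset(xs, ys), batch_size=64)

device = xm.xla_device()              # acquire a TPU core as a torch device
model = nn.Linear(10, 2).to(device)   # placeholder model
optimizer = optim.SGD(model.parameters(), lr=1e-3)

# ParallelLoader preloads batches onto the TPU in the background
para_loader = pl.ParallelLoader(train_loader, [device])

for x, y in para_loader.per_device_loader(device):
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # replaces optimizer.step(): steps the optimizer and flushes the lazy XLA graph
    xm.optimizer_step(optimizer, barrier=True)

# To use all 8 cores, wrap the loop above in a function and launch it with
# xmp.spawn(train_fn, args=(), nprocs=8, start_method='fork').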