A guide on Colab TPU training using PyTorch XLA (Part 9)
'''
Configure some pipeline hyperparameters. You can set them to
whatever you like, either here in the flags dictionary or as
variables inside the map_fn function. I do both for
demonstration purposes.
'''
import torch_xla.distributed.xla_multiprocessing as xmp

flags = {}
flags['batch_size'] = 32
flags['num_workers'] = 8   # we want to train on all 8 TPU cores
flags['num_epochs'] = 10   # the EPOCHS variable already exists inside map_fn
flags['seed'] = 42

# start the 8-core TPU and run map_fn on every worker
xmp.spawn(map_fn, args=(flags,), nprocs=8, start_method='fork')
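
For context, xmp.spawn launches eight processes and calls map_fn(index, flags) in each one, passing the process index as the first argument. The real map_fn is defined in the earlier parts of this guide; the skeleton below is only a minimal sketch of how a worker might unpack the flags, with a placeholder body rather than the guide's actual training loop.

import torch
import torch_xla.core.xla_model as xm

def map_fn(index, flags):
    # 'index' is the ordinal (0-7) of the TPU core this process runs on
    torch.manual_seed(flags['seed'])

    # each worker acquires its own XLA device
    device = xm.xla_device()

    for epoch in range(flags['num_epochs']):
        # hypothetical placeholder: build the data loader using
        # flags['batch_size'] and flags['num_workers'], move batches
        # to 'device', and run the training step here
        pass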