A guide on Colab TPU training using PyTorch XLA (Part 9)
'''
Configures some pipeline hyper-parameters. You
can set them to whatever you please.
You have the option of either defining them here
or creating variables inside the map_fn function.
This is entirely up to you. I do both for demonstration purposes.
'''
import torch_xla.distributed.xla_multiprocessing as xmp

flags = {}
flags['batch_size'] = 32
flags['num_workers'] = 8   # we want to train on all 8 cores
flags['num_epochs'] = 10   # I already had the EPOCHS variable in map_fn
flags['seed'] = 42

# start the 8-core TPU and run map_fn on all 8 workers
xmp.spawn(map_fn, args=(flags,), nprocs=8, start_method='fork')
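
For context, map_fn is the per-core training function built up over the earlier parts of this guide. The sketch below only illustrates its expected signature: xmp.spawn passes each spawned worker its ordinal index as the first argument, followed by everything in args, so flags arrives as the second parameter. The body shown (seeding and device acquisition) is an assumed placeholder, not the guide's actual training loop.

import torch
import torch_xla.core.xla_model as xm

def map_fn(index, flags):
    # index: ordinal of this worker process (0-7 on an 8-core TPU)
    # flags: the hyper-parameter dict passed through xmp.spawn's `args`
    torch.manual_seed(flags['seed'])   # same seed on every core
    device = xm.xla_device()           # each process binds to its own TPU core
    # ... build the dataloaders, model, and training loop here
    # (placeholder; see the earlier parts of this guide for the full body) ...

Note that with start_method='fork', each worker inherits the parent process's memory, which is why map_fn and any module-level variables defined before the spawn call are visible inside the workers.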