Last active Nov 8, 2022
Getting TensorFlow to run on TPUs in Google Colab
# don't forget to first switch to TPU (Runtime > Change runtime type)
import os
import tensorflow as tf

# create the model (which is called later within the right scope)
# make sure that the input_shape or input_dim is given in the first layer
def create_model():
    return tf.keras.models.Sequential([
        tf.keras.layers.Conv2D(128, ..., input_shape=input_shape),
        # ...
    ])

# set up the resolver for multi-TPU training (usually a cluster of 8 TPU cores)
resolver = tf.contrib.cluster_resolver.TPUClusterResolver(tpu='grpc://' + os.environ['COLAB_TPU_ADDR'])
strategy = tf.contrib.distribute.TPUStrategy(resolver)

# create and compile the model within the TPU scope
with strategy.scope():
    model = create_model()
    model.compile(
        optimizer=tf.train.AdamOptimizer(),  # note that Keras optimizers do not yet work on TPUs
        loss=...,     # e.g. 'sparse_categorical_crossentropy'
        metrics=...,  # e.g. ['accuracy']
    )

# check that it's looking ok
model.summary()

# train the model (make sure that steps_per_epoch is an exact divisor of the
# total number of training samples, so that all TPU cores are utilized)
model.fit(x_train, y_train, steps_per_epoch=..., epochs=...)
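The divisor requirement above (steps_per_epoch must divide the number of training samples exactly) can be checked before calling fit with a small plain-Python helper. This is an illustrative sketch, not part of the original gist; the function name `pick_steps_per_epoch` is made up for this example.

```python
# Illustrative helper (not from the original gist): pick steps_per_epoch such
# that steps_per_epoch * batch_size == num_samples, i.e. the training set is
# split exactly across all steps and no TPU cores sit idle on a partial batch.
def pick_steps_per_epoch(num_samples, batch_size):
    """Return steps_per_epoch if batch_size divides num_samples, else raise."""
    if num_samples % batch_size != 0:
        raise ValueError(
            f"batch_size {batch_size} does not divide {num_samples} samples")
    return num_samples // batch_size

# Example: 60000 samples with a per-step batch of 100 split evenly.
print(pick_steps_per_epoch(60000, 100))  # 600
```

A batch size that does not divide the sample count (e.g. 1024 for 60000 samples) raises immediately, which is easier to debug than a silent partial final step on the TPU.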