Getting Tensorflow to run on TPUs in Google Colab
# don't forget to first switch to TPU (Runtime > Change runtime type)
import os
import tensorflow as tf

# create the model (it is called later within the right scope);
# make sure that input_shape or input_dim is given in the first layer
def createmodel():
    return tf.keras.models.Sequential([
        tf.keras.layers.Conv2D(128, ..., input_shape=input_shape),
        # ...
    ])
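# For concreteness, a minimal working version of the template above, assuming
# 28x28 grayscale inputs and 10 classes (the kernel size, layer sizes and input
# shape here are illustrative assumptions, not part of the original snippet):
def createmodel():
    return tf.keras.models.Sequential([
        tf.keras.layers.Conv2D(128, kernel_size=3, activation='relu',
                               input_shape=(28, 28, 1)),  # input_shape set in the first layer
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(10, activation='softmax'),
    ])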
# set up the resolver for the TPU cluster (a Colab TPU provides 8 cores)
# note: the tf.contrib APIs below require TensorFlow 1.x
resolver = tf.contrib.cluster_resolver.TPUClusterResolver(tpu='grpc://' + os.environ['COLAB_TPU_ADDR'])
tf.contrib.distribute.initialize_tpu_system(resolver)
strategy = tf.contrib.distribute.TPUStrategy(resolver)
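# optional sanity check (an addition to the original snippet, not required):
# COLAB_TPU_ADDR is only set on TPU runtimes, and the resolver's cluster spec
# should list the TPU worker address once the resolver is up
assert 'COLAB_TPU_ADDR' in os.environ, 'switch to a TPU runtime first (Runtime > Change runtime type)'
print(resolver.cluster_spec().as_dict())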
# create and compile the model within the TPU scope
with strategy.scope():
    model = createmodel()
    model.compile(loss=tf.keras.losses.categorical_crossentropy,
                  optimizer=tf.train.AdamOptimizer(),  # note: Keras optimizers do not yet work on TPU in TF 1.x
                  metrics=['accuracy'])
# check that the architecture looks OK
model.summary()

# train the model (make sure steps_per_epoch is an exact divisor of the total
# number of training samples, so that all TPU cores are utilized)
model.fit(x_train, y_train,  # x_train/y_train must be loaded beforehand
          epochs=100,
          steps_per_epoch=50)
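# Note: tf.contrib was removed in TensorFlow 2.x, so the setup above no longer
# runs on a current Colab runtime. A rough TF 2.x equivalent, sketched from the
# public tf.distribute API (TPUStrategy left experimental in TF 2.4):
#
# resolver = tf.distribute.cluster_resolver.TPUClusterResolver()  # auto-detects the Colab TPU
# tf.config.experimental_connect_to_cluster(resolver)
# tf.tpu.experimental.initialize_tpu_system(resolver)
# strategy = tf.distribute.TPUStrategy(resolver)
# with strategy.scope():
#     model = createmodel()
#     model.compile(loss='categorical_crossentropy',
#                   optimizer=tf.keras.optimizers.Adam(),  # Keras optimizers do work on TPU in TF 2.x
#                   metrics=['accuracy'])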