
@MITsVision
Last active June 23, 2020 18:01
import tensorflow as tf

# Detect hardware (TPU or GPU) and return the appropriate distribution strategy.
try:
    # TPU detection. No parameters are necessary if the TPU_NAME environment
    # variable is set; this is always the case on Kaggle.
    tpu = tf.distribute.cluster_resolver.TPUClusterResolver()
    print('Running on TPU ', tpu.master())
except ValueError:
    tpu = None

if tpu:
    tf.config.experimental_connect_to_cluster(tpu)
    tf.tpu.experimental.initialize_tpu_system(tpu)
    strategy = tf.distribute.experimental.TPUStrategy(tpu)
else:
    # Default distribution strategy in TensorFlow. Works on CPU and single GPU.
    strategy = tf.distribute.get_strategy()

print("REPLICAS: ", strategy.num_replicas_in_sync)
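The `strategy` object returned above is typically used to build the model inside `strategy.scope()`, so that variables are created and mirrored according to the detected hardware. A minimal sketch, using the default strategy for illustration (the model shape and layer sizes here are placeholders, not part of the original gist):

```python
import tensorflow as tf

# For illustration, fall back to the default strategy (CPU / single GPU);
# in the snippet above, `strategy` may instead be a TPUStrategy.
strategy = tf.distribute.get_strategy()

# Variables created inside the scope are placed according to the strategy,
# so the same model code works on CPU, GPU, or TPU.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(16, activation='relu', input_shape=(8,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer='adam', loss='mse')

print("Replicas:", strategy.num_replicas_in_sync)
```

On a single CPU or GPU the default strategy reports one replica; on a TPU v3-8 pod slice, `num_replicas_in_sync` would be 8 and gradients are aggregated across cores automatically.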