@bamford
Created April 12, 2022 14:55
This function initialises TensorFlow to work in a friendly way in a shared, multi-GPU environment. It identifies the GPU with the most free RAM, selects it for use and sets TF to only claim the amount of RAM actually required to run the model, so other jobs can run on the same GPU. Uses `nvsmi`, which can be installed using pip.
import numpy as np
import tensorflow as tf
from tensorflow.config import list_logical_devices, list_physical_devices, set_visible_devices
from tensorflow.config.experimental import set_memory_growth


def setup_tensorflow(seed=None):
    """Select the GPU with the most free memory and enable memory growth.

    Falls back to the CPU if nvsmi is unavailable or no GPU has free memory.
    """
    try:
        import nvsmi
        # Free memory reported by nvidia-smi for each GPU
        gpu_mem_free = np.array([gpu.mem_free for gpu in nvsmi.get_gpus()])
        if len(gpu_mem_free) > 0 and (gpu_mem_free > 0).any():
            # Pick the GPU with the most free memory
            idx = np.argmax(gpu_mem_free)
            gpu = list_physical_devices("GPU")[idx]
            # Only allocate memory as the model needs it, rather than claiming the whole GPU
            set_memory_growth(gpu, True)
            # Hide all other GPUs from TensorFlow
            set_visible_devices(gpu, "GPU")
            gpus = list_logical_devices("GPU")
            print(f"Using GPU: {gpus}")
        else:
            raise RuntimeError("No GPU with free memory found")
    except Exception:
        print("TensorFlow using CPU")
    tf.random.set_seed(seed)
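
A minimal usage sketch: call the function once, before any TensorFlow code allocates GPU memory, then build and train the model as usual. The module name (setup_tensorflow.py) and the toy model below are illustrative assumptions, not part of the gist.

import tensorflow as tf
from setup_tensorflow import setup_tensorflow  # assumed file/module name for this gist

# Select the least-busy GPU (or fall back to CPU) and seed the RNG
setup_tensorflow(seed=42)

# From here on, TensorFlow only sees the chosen GPU and grows memory as needed
model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
model.compile(optimizer="adam", loss="mse")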