Check how much general RAM and GPU memory are allocated on a Google Colab hosted runtime
import torch

print(torch.cuda.current_device())    # id of the currently selected CUDA device
print(torch.cuda.device_count())      # number of CUDA devices visible to PyTorch
print(torch.cuda.get_device_name(0))  # name of device 0
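# The calls above fail when no GPU runtime is attached; a minimal guard,
# using only stock PyTorch (nothing Colab-specific is assumed):
if not torch.cuda.is_available():
    print("No CUDA device visible -- switch the Colab runtime type to GPU")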
!ln -sf /opt/bin/nvidia-smi /usr/bin/nvidia-smi
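# With the symlink in place, the standard CLI works directly (GPUtil below
# reads its numbers by shelling out to nvidia-smi):
!nvidia-smi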
!pip install gputil psutil humanize
import os

import psutil
import humanize
import GPUtil as GPU

GPUs = GPU.getGPUs()
# XXX: Colab exposes at most one GPU, and even that isn't guaranteed
gpu = GPUs[0]
# Print free general RAM and this process's size, then the GPU memory stats.
def printm():
    process = psutil.Process(os.getpid())
    print("Gen RAM Free: " + humanize.naturalsize(psutil.virtual_memory().available),
          " | Proc size: " + humanize.naturalsize(process.memory_info().rss))
    print("GPU RAM Free: {0:.0f}MB | Used: {1:.0f}MB | Util {2:3.0f}% | Total {3:.0f}MB".format(
        gpu.memoryFree, gpu.memoryUsed, gpu.memoryUtil * 100, gpu.memoryTotal))

printm()
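# Alternatively, PyTorch keeps its own GPU memory counters; a minimal sketch,
# assuming a CUDA runtime is attached. Note these only cover memory managed by
# PyTorch's caching allocator, not everything on the device, so they can read
# lower than the nvidia-smi/GPUtil figures above.
if torch.cuda.is_available():
    print("Torch allocated: " + humanize.naturalsize(torch.cuda.memory_allocated()))
    # memory_reserved() is called memory_cached() in PyTorch versions before 1.4
    print("Torch reserved:  " + humanize.naturalsize(torch.cuda.memory_reserved()))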