
@telegraphic
Created September 23, 2016 00:37
Parse nvidia-smi from python
"""
Parse output of nvidia-smi into a python dictionary.
This is very basic!
"""
import subprocess
import pprint
sp = subprocess.Popen(['nvidia-smi', '-q'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out_str = sp.communicate()
out_list = out_str[0].split('\n')
out_dict = {}
for item in out_list:
try:
key, val = item.split(':')
key, val = key.strip(), val.strip()
out_dict[key] = val
except:
pass
pprint.pprint(out_dict)
@SrivastavaKshitij

When I run your code, I get the following error. Could you help me with this?

TypeError                                 Traceback (most recent call last)
in <module>()
      5
      6 out_str = sp.communicate()
----> 7 out_list = out_str[0].split('\n')
      8
      9 out_dict = {}

TypeError: a bytes-like object is required, not 'str'

@yssybyl

yssybyl commented May 23, 2017

out_str is being read as bytes, not a string - you need to decode it prior to the split. Change line 12 (your line 7) to:
out_list = out_str[0].decode("utf-8").split('\n')
and it should work for you.
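
For reference, a minimal Python 3 sketch of the gist with that decode applied (assuming nvidia-smi emits UTF-8) would look something like this:

import subprocess
import pprint

# Run nvidia-smi in query mode and capture its output as bytes.
sp = subprocess.Popen(['nvidia-smi', '-q'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out_bytes, err_bytes = sp.communicate()

# Decode bytes to str before splitting (on Python 3, PIPE output is bytes).
out_list = out_bytes.decode('utf-8').split('\n')

out_dict = {}
for item in out_list:
    try:
        key, val = item.split(':')
        out_dict[key.strip()] = val.strip()
    except ValueError:
        # Lines without exactly one ':' (section headers, blanks) are skipped.
        pass

pprint.pprint(out_dict)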

@samidarko

Just in case you didn't know... http://pypi.python.org/pypi/nvidia-ml-py/

@telegraphic
Author

Thanks @yssybyl -- indeed the code needs that mod to run on Py3. I think @samidarko's link to nvidia-ml-py is a better general solution (my gist doesn't parse everything in a useful way, e.g. it doesn't treat multiple GPUs separately).
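
For anyone curious, a rough sketch of querying the driver through those NVML bindings (the module is typically imported as pynvml; the exact package/import name varies between the forks listed below):

import pynvml  # provided by nvidia-ml-py; some forks ship it under a different name

pynvml.nvmlInit()
try:
    # Loop over every attached GPU, one handle per device.
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)        # .total / .used / .free in bytes
        util = pynvml.nvmlDeviceGetUtilizationRates(handle)  # .gpu / .memory in percent
        print(i, name, mem.used, mem.total, util.gpu)
finally:
    pynvml.nvmlShutdown()

This gives structured per-GPU values directly, so there is no text parsing at all.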

@anuj-kh-quantela

anuj-kh-quantela commented Feb 26, 2018

Great effort. Just want to point out something I found.
Your code reports the GPU memory usage of the last process only, whereas running nvidia-smi -q shows that there can be several processes consuming different amounts of GPU memory.

In the screenshot below there are 3 different processes.

[screenshot: actual-gpu-usage]

But your code only reports the last one; you probably need to collect nested lists or something similar. A screenshot of your code's output is below.

[screenshot: reported-gpu-usage]
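
A rough sketch of one way around that (not the gist author's code, just a minimal variant on it): accumulate repeated keys such as "Process ID" and "Used GPU Memory" into lists instead of letting later lines overwrite earlier ones.

import subprocess
import pprint

sp = subprocess.Popen(['nvidia-smi', '-q'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out_bytes, _ = sp.communicate()

out_dict = {}
for line in out_bytes.decode('utf-8').split('\n'):
    if ':' not in line:
        continue
    # partition() splits on the first ':' only, so values containing ':' survive.
    key, _, val = line.partition(':')
    key, val = key.strip(), val.strip()
    # Keep every occurrence of a repeated key (one entry per process / per GPU).
    out_dict.setdefault(key, []).append(val)

pprint.pprint(out_dict)

Every value becomes a list, so out_dict['Process ID'] holds all process IDs rather than just the last one.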

@petronny

Hi, I created a repo to do this.
It parses and maps the output of nvidia-smi -q into a Python object.
You can get the attributes like this:

log = NVLog()
print(log['Attached GPUs']['GPU 00000000:04:00.0']['Processes'][0]['Used GPU Memory'])

I have also re-implemented the table shown when running nvidia-smi:

print(log.as_table())

Please see https://github.com/petronny/nvsmi

@denny4nl

denny4nl commented Feb 8, 2021

One point needs attention: if there are two or more processes on the GPUs, the dict will overwrite the value of the "Process ID" key, as well as other repeated keys.

@amanuel1995

Just in case you didn't know... http://pypi.python.org/pypi/nvidia-ml-py/

The page has no description or documentation on how to use it. Did you find any resources on how to use the pip package?

@telegraphic
Author

@amanuel1995 -- that package is looking a bit old now, but here are some alternative links:
https://github.com/jonsafari/nvidia-ml-py
https://github.com/nicolargo/nvidia-ml-py3
https://github.com/petronny/nvsmi
https://github.com/fbcotter/py3nvml

I haven't been keeping up to date on this front though, so there may be a better approach!
