NVIDIA GPU monitoring
1. nvidia-smi -q -g 0 -d TEMPERATURE,POWER,CLOCK,MEMORY -l   # flags can also be UTILIZATION, PERFORMANCE (on Tesla), ...
2. nvidia-smi dmon
3. nvidia-smi -l 1
Docs: http://developer.download.nvidia.com/compute/cuda/6_0/rel/gdk/nvidia-smi.331.38.pdf
Python-bindings: https://pypi.python.org/pypi/nvidia-ml-py/
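A minimal monitoring loop using the Python bindings above (the pynvml module shipped by nvidia-ml-py); the GPU index, 1-second interval, and printed fields are illustrative choices, not part of the gist:

    # Minimal NVML polling loop, assuming the pynvml module from nvidia-ml-py.
    # GPU index 0 and the 1 s interval are arbitrary; Ctrl-C to stop.
    import time
    from pynvml import (nvmlInit, nvmlShutdown, nvmlDeviceGetHandleByIndex,
                        nvmlDeviceGetTemperature, nvmlDeviceGetPowerUsage,
                        nvmlDeviceGetUtilizationRates, nvmlDeviceGetClockInfo,
                        NVML_TEMPERATURE_GPU, NVML_CLOCK_SM, NVML_CLOCK_MEM)

    nvmlInit()
    try:
        handle = nvmlDeviceGetHandleByIndex(0)
        while True:
            temp = nvmlDeviceGetTemperature(handle, NVML_TEMPERATURE_GPU)  # degrees C
            power = nvmlDeviceGetPowerUsage(handle) / 1000.0               # reported in mW
            util = nvmlDeviceGetUtilizationRates(handle)                   # .gpu / .memory in %
            sm_clk = nvmlDeviceGetClockInfo(handle, NVML_CLOCK_SM)         # MHz
            mem_clk = nvmlDeviceGetClockInfo(handle, NVML_CLOCK_MEM)       # MHz
            print("{} C  {:.2f} W  sm {}%  mem {}%  sm_clk {} MHz  mclk {} MHz".format(
                temp, power, util.gpu, util.memory, sm_clk, mem_clk))
            time.sleep(1)
    finally:
        nvmlShutdown()

This reports roughly the same fields that the three nvidia-smi commands print, but without shelling out to the CLI.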
1. A looping detailed report, e.g.:
==============NVSMI LOG==============
Timestamp                           : Fri Jan 27 23:53:27 2017
Driver Version                      : 375.26
Attached GPUs                       : 1
GPU 0000:01:00.0
    Performance State               : P2
    Clocks Throttle Reasons
        Idle                        : Not Active
        Applications Clocks Setting : Not Active
        SW Power Cap                : Not Active
        HW Slowdown                 : Not Active
        Sync Boost                  : Not Active
        Unknown                     : Active
    Temperature
        GPU Current Temp            : 47 C
        GPU Shutdown Temp           : 99 C
        GPU Slowdown Temp           : 96 C
    Power Readings
        Power Management            : Supported
        Power Draw                  : 39.43 W
        Power Limit                 : 180.00 W
        Default Power Limit         : 180.00 W
        Enforced Power Limit        : 180.00 W
        Min Power Limit             : 90.00 W
        Max Power Limit             : 215.00 W
        Power Samples
            Duration                : 2.36 sec
            Number of Samples       : 119
            Max                     : 39.46 W
            Min                     : 39.32 W
            Avg                     : 39.41 W
2. A stream of lines, one per sampling interval, with the columns:
gpu pwr temp sm mem enc dec mclk pclk
3. Polling of the full nvidia-smi report every second; a scripted variant is sketched below.
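For item 3, a hedged sketch of scripted polling that shells out to nvidia-smi in CSV mode; the --query-gpu field names follow `nvidia-smi --help-query-gpu` and may vary with the driver version:

    # Poll nvidia-smi once per second and print selected fields as CSV.
    # The chosen fields are an illustrative subset, not an exhaustive list.
    import subprocess
    import time

    FIELDS = "timestamp,temperature.gpu,power.draw,clocks.sm,clocks.mem,utilization.gpu"

    while True:
        line = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=" + FIELDS, "--format=csv,noheader"],
            universal_newlines=True)
        print(line.strip())  # one CSV line per attached GPU
        time.sleep(1)

Redirecting this output to a file gives a simple CSV log that can be plotted later.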