@andrew128
Created December 23, 2024 01:22
Google Colab environment check: a short script that reports the CPU, GPU, memory, disk, and PyTorch device details of a Colab runtime, followed by sample output.
# Analyze the Colab environment: CPU, GPU, memory, disk, and PyTorch devices
import os
import psutil
import subprocess
import torch

# CPU info: first processor entry from /proc/cpuinfo
def cpu_info():
    cpu_details = subprocess.run(['cat', '/proc/cpuinfo'], stdout=subprocess.PIPE).stdout.decode()
    return cpu_details.split("\n\n")[0]

# GPU info: full nvidia-smi report, or a note if the binary is missing
def gpu_info():
    try:
        gpu_details = subprocess.run(['nvidia-smi'], stdout=subprocess.PIPE).stdout.decode()
        return gpu_details
    except FileNotFoundError:
        return "No GPU detected"

# Memory info: total and available RAM in GB
def memory_info():
    mem = psutil.virtual_memory()
    return f"Total: {mem.total / (1024**3):.2f} GB, Available: {mem.available / (1024**3):.2f} GB"

# Disk space info: total and free space on the root filesystem in GB
def disk_info():
    disk = psutil.disk_usage('/')
    return f"Total: {disk.total / (1024**3):.2f} GB, Free: {disk.free / (1024**3):.2f} GB"

# PyTorch check: CUDA availability and the name of each visible device
def pytorch_info():
    gpu_available = torch.cuda.is_available()
    if gpu_available:
        gpu_count = torch.cuda.device_count()
        device_info = [
            f"Device {i}: {torch.cuda.get_device_name(i)}" for i in range(gpu_count)
        ]
        return f"GPUs Available: {gpu_count}\n" + "\n".join(device_info)
    else:
        return "No CUDA-compatible GPU available"

# System info summary
print("----- System Specifications -----")
print("CPU Info:\n", cpu_info())
print("\nGPU Info:\n", gpu_info())
print("\nMemory Info:\n", memory_info())
print("\nDisk Space Info:\n", disk_info())
print("\nPyTorch Devices:\n", pytorch_info())
Sample output from the original script, run on a Colab runtime with a Tesla T4 GPU:

----- System Specifications -----
CPU Info:
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 85
model name : Intel(R) Xeon(R) CPU @ 2.00GHz
stepping : 3
microcode : 0xffffffff
cpu MHz : 2000.194
cache size : 39424 KB
physical id : 0
siblings : 2
core id : 0
cpu cores : 1
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat md_clear arch_capabilities
bugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs taa mmio_stale_data retbleed bhi
bogomips : 4000.38
clflush size : 64
cache_alignment : 64
address sizes : 46 bits physical, 48 bits virtual
power management:

GPU Info:
Sat Dec 21 23:01:25 2024
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.104.05             Driver Version: 535.104.05   CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  Tesla T4                       Off | 00000000:00:04.0 Off |                    0 |
| N/A   43C    P8               9W /  70W |      3MiB / 15360MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes:                                                                             |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|  No running processes found                                                            |
+---------------------------------------------------------------------------------------+

Memory Info:
Total: 12.67 GB, Available: 10.58 GB

Disk Space Info:
Total: 112.64 GB, Free: 79.96 GB

PyTorch Devices:
GPUs Available: 1
Device 0: Tesla T4
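
The nvidia-smi table above is a point-in-time snapshot; the same numbers can also be read from inside Python. A hedged sketch (not part of the original gist), assuming a reasonably recent PyTorch build that provides torch.cuda.mem_get_info:

import psutil
import torch

# CPU side: logical core count and total RAM, mirroring the psutil figures above.
print(f"Logical CPUs: {psutil.cpu_count()}")
print(f"RAM total: {psutil.virtual_memory().total / (1024**3):.2f} GB")

# GPU side: free/total device memory (in bytes) for device 0, if a GPU is visible.
if torch.cuda.is_available():
    free_bytes, total_bytes = torch.cuda.mem_get_info(0)
    print(f"GPU 0 ({torch.cuda.get_device_name(0)}): "
          f"{free_bytes / (1024**3):.2f} GB free of {total_bytes / (1024**3):.2f} GB")
else:
    print("No CUDA device visible")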