He Cao CiaoHe

# route git traffic through a local proxy listening on port 1080
git config --global http.proxy http://127.0.0.1:1080
git config --global https.proxy https://127.0.0.1:1080
# undo: clear the proxy settings again
git config --global --unset http.proxy
git config --global --unset https.proxy
npm config delete proxy
CiaoHe / map_clsloc.txt
Created February 11, 2023 09:37 — forked from aaronpolhamus/map_clsloc.txt
ImageNet classes + labels
n02119789 1 kit_fox
n02100735 2 English_setter
n02110185 3 Siberian_husky
n02096294 4 Australian_terrier
n02102040 5 English_springer
n02066245 6 grey_whale
n02509815 7 lesser_panda
n02124075 8 Egyptian_cat
n02417914 9 ibex
n02123394 10 Persian_cat
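
Each row of the file is a WordNet synset ID, a 1-based class index, and a human-readable label. A minimal parsing sketch (the file name map_clsloc.txt is taken from the gist title; the helper name is my own):

# build {wnid: (class_index, label)} from the mapping file
def load_clsloc(path="map_clsloc.txt"):
    mapping = {}
    with open(path) as f:
        for line in f:
            wnid, idx, label = line.split()
            mapping[wnid] = (int(idx), label)
    return mapping

# e.g. load_clsloc()["n02119789"] == (1, "kit_fox")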
CiaoHe / ddp_notes.md
Created September 15, 2022 02:46 — forked from TengdaHan/ddp_notes.md
Multi-node-training on slurm with PyTorch

What's this?

  • A simple note on how to start multi-node training on the slurm scheduler with PyTorch.
  • Useful especially when the scheduler is so busy that you cannot get multiple GPUs allocated on one node, or when you need more than 4 GPUs for a single job.
  • Requirement: you have to use PyTorch DistributedDataParallel (DDP) for this purpose; see the sketch after this list.
  • Warning: you might need to refactor your own code.
  • Warning: you might be secretly condemned by your colleagues for using too many GPUs.
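
The core of the recipe is that srun launches one process per GPU, and slurm tells each process who it is through environment variables; PyTorch only needs those values to set up the process group. A minimal sketch of that pattern, not the gist's exact code (the rendezvous port 12345 and the toy Linear model are arbitrary placeholders):

import os
import subprocess

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def init_from_slurm():
    rank = int(os.environ["SLURM_PROCID"])         # global rank across all nodes
    world_size = int(os.environ["SLURM_NTASKS"])   # total number of processes
    local_rank = int(os.environ["SLURM_LOCALID"])  # rank within this node

    # the first host in the allocation acts as the rendezvous master
    first_host = subprocess.getoutput(
        "scontrol show hostnames " + os.environ["SLURM_NODELIST"]
    ).split()[0]
    os.environ["MASTER_ADDR"] = first_host
    os.environ.setdefault("MASTER_PORT", "12345")

    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(local_rank)
    return local_rank

if __name__ == "__main__":
    local_rank = init_from_slurm()
    model = torch.nn.Linear(10, 10).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])  # gradients now sync across all nodes

Every task runs this same script: an sbatch script whose last line is srun python train.py, with --ntasks-per-node set to the number of GPUs per node, fans it out, and SLURM_PROCID is what differentiates the copies.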