Riccardo Zuliani (zuliani99)
@mkocabas
mkocabas / coco.sh
Created April 9, 2018 09:41
Download the COCO dataset. Run from inside the 'datasets' directory.
# create the expected layout: datasets/coco/images
mkdir coco
cd coco
mkdir images
cd images
# fetch the 2017 train/val/test/unlabeled image archives
wget http://images.cocodataset.org/zips/train2017.zip
wget http://images.cocodataset.org/zips/val2017.zip
wget http://images.cocodataset.org/zips/test2017.zip
wget http://images.cocodataset.org/zips/unlabeled2017.zip
@HaydenFaulkner
HaydenFaulkner / video_to_frames.py
Last active March 19, 2024 16:37
Fast frame extraction from videos using Python and OpenCV
from concurrent.futures import ProcessPoolExecutor, as_completed
import cv2
import multiprocessing
import os
import sys

def print_progress(iteration, total, prefix='', suffix='', decimals=3, bar_length=100):
    """
    Call in a loop to create a standard-out progress bar.
    """
    # Body reconstructed from the usual progress-bar recipe; the gist's original may differ.
    percent = 100 * iteration / float(total)
    filled = int(round(bar_length * iteration / float(total)))
    bar = '#' * filled + '-' * (bar_length - filled)
    sys.stdout.write('\r%s |%s| %.*f%% %s' % (prefix, bar, decimals, percent, suffix))
    sys.stdout.flush()
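
The remainder of the gist is cut off above. As a rough illustration of what the title describes, parallel frame extraction with OpenCV, here is a minimal sketch reusing the imports above; extract_frames and video_to_frames are hypothetical names, not the gist's actual functions:

def extract_frames(video_path, frames_dir, every=1):
    # Hypothetical helper (not the gist's code): save every `every`-th frame of one video as a JPEG.
    out_dir = os.path.join(frames_dir, os.path.splitext(os.path.basename(video_path))[0])
    os.makedirs(out_dir, exist_ok=True)
    capture = cv2.VideoCapture(video_path)
    saved = 0
    frame_idx = 0
    while True:
        ret, frame = capture.read()
        if not ret:
            break
        if frame_idx % every == 0:
            cv2.imwrite(os.path.join(out_dir, '{:010d}.jpg'.format(frame_idx)), frame)
            saved += 1
        frame_idx += 1
    capture.release()
    return saved

def video_to_frames(video_paths, frames_dir, every=1, workers=None):
    # Hypothetical driver: one worker process per video via ProcessPoolExecutor.
    workers = workers or multiprocessing.cpu_count()
    with ProcessPoolExecutor(max_workers=workers) as executor:
        futures = {executor.submit(extract_frames, path, frames_dir, every): path for path in video_paths}
        for done, future in enumerate(as_completed(futures), 1):
            future.result()
            print_progress(done, len(futures), prefix='Videos')

Because ProcessPoolExecutor pickles its tasks, both helpers must live at module level; each video is decoded in its own process, which is what makes the extraction fast on multi-core machines.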
@TengdaHan
TengdaHan / ddp_notes.md
Last active October 1, 2024 09:06
Multi-node-training on slurm with PyTorch

What's this?

  • A short note on how to start multi-node training on the slurm scheduler with PyTorch.
  • Especially useful when the scheduler is so busy that you cannot get multiple GPUs allocated on one node, or when you need more than 4 GPUs for a single job.
  • Requirement: you have to use PyTorch DistributedDataParallel (DDP) for this purpose; a minimal setup sketch follows this list.
  • Warning: you might need to refactor your own code.
  • Warning: your colleagues may secretly resent you for using so many GPUs.
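
A minimal sketch (not the note's exact code) of how the DDP process group can be initialised from SLURM environment variables, assuming srun launches one task per GPU and that MASTER_ADDR and MASTER_PORT are exported in the sbatch script; setup_distributed is a hypothetical helper name:

import os
import torch
import torch.distributed as dist

def setup_distributed(backend='nccl'):
    # srun starts one task per GPU, so SLURM's task id / task count map to global rank / world size.
    rank = int(os.environ['SLURM_PROCID'])
    world_size = int(os.environ['SLURM_NTASKS'])
    local_rank = rank % torch.cuda.device_count()
    # The default env:// rendezvous also needs MASTER_ADDR and MASTER_PORT, assumed to be
    # exported in the sbatch script (e.g. the first hostname in $SLURM_JOB_NODELIST).
    dist.init_process_group(backend=backend, rank=rank, world_size=world_size)
    torch.cuda.set_device(local_rank)
    return rank, local_rank, world_size

After setup, wrap the model with torch.nn.parallel.DistributedDataParallel(model.cuda(local_rank), device_ids=[local_rank]) and give each DataLoader a DistributedSampler so every rank trains on a distinct shard of the data.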