@suhrmann
Last active December 4, 2020 20:44
Run script for "Inference in the wild" of https://github.com/facebookresearch/VideoPose3D. Make sure the requirements described in the comment below are met.
#!/bin/sh
# Directory containing the input videos to process - the script runs on every video in it. Use an absolute path with a trailing slash!
INPUT_PATH="/path/to/your/videos/"
# Path were detected annotations and rendered videos are stored - use absolute path!
OUTPUT_DIR="$HOME/Development/VideoPose3D/output_directory"
# Custom name for the generated dataset - here: using name of directory $INPUT_PATH
# Used to generate data/data_2d_custom_<DATASET_NAME>.npz
DATASET_NAME=$(basename "$INPUT_PATH")
# Step 1: setup
#########################################################
echo 'STEP 1: setup...'
# Download the pretrained model for generating 3D predictions.
mkdir -p checkpoint # The checkpoint/ directory may not exist yet in a fresh clone
if [ ! -f checkpoint/pretrained_h36m_detectron_coco.bin ]; then
    wget -O checkpoint/pretrained_h36m_detectron_coco.bin https://dl.fbaipublicfiles.com/video-pose-3d/pretrained_h36m_detectron_coco.bin
fi
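# Optional sanity check (an addition to this gist, not part of the original
# steps): a failed wget can leave a zero-byte file behind, so warn if the
# checkpoint is missing or empty before continuing.
checkpoint_ok() {
    # POSIX `test -s`: true if the file exists and has a size greater than zero
    [ -s "$1" ]
}
if ! checkpoint_ok checkpoint/pretrained_h36m_detectron_coco.bin; then
    echo ' > WARNING: checkpoint missing or empty - re-run the download' >&2
fi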
echo ' > STEP 1: ✅'
# Step 2 (optional): video preprocessing
#########################################################
echo 'STEP 2: Skipping optional video preprocessing...'
# Cut start and end
#ffmpeg -i input.mp4 -ss 1:00 -to 1:30 -c copy output.mp4
# Optionally adapt the frame rate of the video - our Human3.6M model was trained on 50-FPS videos
#ffmpeg -i input.mp4 -filter "minterpolate='fps=50'" -crf 0 output.mp4
echo ' > STEP 2: ✅'
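# A sketch of what this step could look like if enabled (assumptions: every
# input is an .mp4, and the resampled copy goes next to the original with a
# `__50fps` suffix - both naming choices are mine, not from the gist):
#for video in "${INPUT_PATH}"*.mp4; do
#    ffmpeg -i "$video" -filter "minterpolate='fps=50'" -crf 0 "${video%.mp4}__50fps.mp4"
#done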
# Step 3: inferring 2D keypoints with Detectron
# Using Detectron2 (new)
#########################################################
echo 'STEP 3: inferring 2D keypoints with Detectron...'
mkdir -p "$OUTPUT_DIR" # Create output dir if not exists
cd inference || exit 1
python infer_video_d2.py \
    --cfg COCO-Keypoints/keypoint_rcnn_R_101_FPN_3x.yaml \
    --output-dir "$OUTPUT_DIR" \
    --image-ext mp4 \
    "$INPUT_PATH"
cd ..
echo ' > STEP 3: ✅'
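# Sanity check (added here, not part of the original gist): infer_video_d2.py
# writes one .npz archive per input video, so report how many were produced
# and catch an empty output directory before Step 4.
count_npz() {
    # Count the *.npz files directly inside the given directory (0 if none)
    ls "$1"/*.npz 2>/dev/null | wc -l
}
echo " > Detectron wrote $(count_npz "$OUTPUT_DIR") keypoint archive(s) to $OUTPUT_DIR"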
# Step 4: creating a custom dataset
#########################################################
echo 'STEP 4: creating a custom dataset...'
cd data || exit 1
python prepare_data_2d_custom.py -i "$OUTPUT_DIR" -o "$DATASET_NAME"
cd ..
echo ' > STEP 4: ✅'
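# The step above should have produced data/data_2d_custom_<DATASET_NAME>.npz;
# this check is an addition to the gist, the path pattern comes from the
# comment at the top of the script.
dataset_file() {
    # Archive name that prepare_data_2d_custom.py generates for a dataset name
    printf 'data/data_2d_custom_%s.npz' "$1"
}
if [ ! -f "$(dataset_file "$DATASET_NAME")" ]; then
    echo " > WARNING: $(dataset_file "$DATASET_NAME") not found" >&2
fi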
# Step 5: rendering a custom video and exporting coordinates
#########################################################
echo 'STEP 5: rendering a custom video and exporting coordinates...'
# Render ALL videos with the pose information baked in
for video in "${INPUT_PATH}"*.mp4
do
    name=$(basename "$video")
    echo " Rendering $name"
    python run.py \
        -d custom \
        -k "$DATASET_NAME" \
        -arc 3,3,3,3,3 \
        -c checkpoint \
        --evaluate pretrained_h36m_detectron_coco.bin \
        --render \
        --viz-subject "$name" \
        --viz-action custom \
        --viz-camera 0 \
        --viz-video "$video" \
        --viz-output "${OUTPUT_DIR}/${name}__out.mp4" \
        --viz-size 6
done
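# If you only need the coordinates, run.py also accepts --viz-export to dump
# the predicted 3D joint positions to a NumPy archive (per the VideoPose3D
# inference docs; the output file name below is my assumption):
#python run.py -d custom -k "$DATASET_NAME" -arc 3,3,3,3,3 -c checkpoint \
#    --evaluate pretrained_h36m_detectron_coco.bin --render \
#    --viz-subject "$name" --viz-action custom --viz-camera 0 \
#    --viz-export "${OUTPUT_DIR}/${name}__coords"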
echo ' > STEP 5: ✅'
suhrmann commented Dec 4, 2020

Run script for facebookresearch / VideoPose3D
❗ Make sure you have installed the dependencies: ❗
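As a quick pre-flight check, something like this can verify the command-line tools are on `PATH` (a sketch of my own; the tool list is inferred from what the script calls - see the VideoPose3D inference docs for the Python-side requirements such as Detectron2):

```shell
#!/bin/sh
# check_deps: report every given command-line tool that is missing from PATH,
# returning non-zero if any is missing.
check_deps() {
    status=0
    for tool in "$@"; do
        if ! command -v "$tool" >/dev/null 2>&1; then
            echo "missing: $tool" >&2
            status=1
        fi
    done
    return $status
}

# The run script shells out to these three:
check_deps python wget ffmpeg || echo "install the missing tools first" >&2
```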
