Last active July 9, 2024 12:55

FFmpeg Cheat Sheet for 360º video

Brought to you by Headjack

FFmpeg is one of the most powerful tools for video transcoding and manipulation, but it's fairly complex and confusing to use. That's why I decided to create this cheat sheet, which shows some of the most frequently used commands.

Let's start with some basics:

  • ffmpeg calls the FFmpeg application from the command line; this can also be the full path to the FFmpeg binary or .exe file
  • -i is followed by the path to the input video
  • -c:v sets the video codec you want to use
    Options include libx264 for H.264, libx265 for H.265/HEVC, libvpx-vp9 for VP9, and copy if you want to preserve the video codec of the input video
  • -b:v sets the video bitrate, use a number followed by M to set value in Mbit/s, or K to set value in Kbit/s
  • -c:a sets the audio codec you want to use
    Options include aac for use in combination with H.264 and H.265/HEVC, libvorbis for VP9, and copy if you want to preserve the audio codec of the input video
  • -b:a sets the audio bitrate of the output video
  • -vf sets so-called video filters, which allow you to apply transformations to a video, like scale for changing the resolution and setdar for setting an aspect ratio
  • -r sets the frame rate of the output video
  • -pix_fmt sets the pixel format of the output video; it is required for some input files, so it is recommended to always set it to yuv420p for playback
  • -map allows you to specify streams inside a file
  • -ss seeks to the given timestamp in the format HH:MM:SS
  • -t sets the time or duration of the output
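
A quick sketch that combines several of these flags in one command (the filenames, resolution and bitrates are illustrative, not from the cheat sheet):

```shell
# Transcode to H.264/AAC, scale to 1920x1080 at 30 fps,
# seek 10 seconds in and keep 30 seconds of output
ffmpeg -ss 00:00:10 -i input.mp4 \
  -t 00:00:30 \
  -vf scale=1920:1080 -r 30 \
  -c:v libx264 -b:v 8M -pix_fmt yuv420p \
  -c:a aac -b:a 192K \
  output.mp4
```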

Get video info

ffmpeg -i input.mp4
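
FFmpeg prints the stream info to stderr and then exits with an "at least one output file must be specified" error; that's expected. For cleaner, script-friendly output you can also use ffprobe, which ships with FFmpeg (the field selection below is just one example):

```shell
# Print codec, resolution, frame rate and duration for each stream
ffprobe -v error -show_entries stream=codec_name,width,height,r_frame_rate,duration input.mp4
```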

Transcode video

The simplest example to transcode an input video to H.264:

ffmpeg -i input.mp4 -c:v libx264 output.mp4

However, a more reasonable example, which also sets an audio codec, the pixel format, and both a video and an audio bitrate, would be:

ffmpeg -i input.mp4 -c:v libx264 -b:v 30M -pix_fmt yuv420p -c:a aac -b:a 192K output.mp4

To transcode to H.265/HEVC instead, all we have to do is change libx264 to libx265:

ffmpeg -i input.mp4 -c:v libx265 -b:v 15M -pix_fmt yuv420p -c:a aac -b:a 192K output.mp4

iOS 11 and macOS High Sierra (10.13) now support HEVC playback, but you have to make sure you use FFmpeg 3.4 or higher, and then add -tag:v hvc1 to your encode, or else you won't be able to play the video on your Apple device.
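
Applied to the HEVC command above, that looks like this:

```shell
# -tag:v hvc1 makes the HEVC stream recognizable to Apple players
ffmpeg -i input.mp4 -c:v libx265 -tag:v hvc1 -b:v 15M -pix_fmt yuv420p \
  -c:a aac -b:a 192K output.mp4
```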

For VP9 we have to change both the video and the audio codec, as well as the file extension of the output video. We also add -threads 16 to make sure FFmpeg uses multi-threaded encoding, which speeds things up significantly:

ffmpeg -i input.mp4 -threads 16 -c:v libvpx-vp9 -b:v 15M -pix_fmt yuv420p -c:a libvorbis -b:a 192K output.webm

You may have noticed we also halved the video bitrate, from 30M for H.264 to 15M for H.265/HEVC and VP9. That's because these newer codecs deliver roughly the same visual quality at about half the bitrate of H.264. Sweet, huh! They do take much longer to encode, though, and are not yet as widely supported as H.264.
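
If you care more about consistent quality than about hitting an exact bitrate, libx264 and libx265 also support constant-quality encoding via -crf instead of -b:v (lower values mean higher quality; values around 18–28 are typical, and 23 is the libx264 default):

```shell
# Constant-quality H.264 encode instead of a fixed bitrate
ffmpeg -i input.mp4 -c:v libx264 -crf 23 -pix_fmt yuv420p -c:a aac -b:a 192K output.mp4
```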

Hardware accelerated encoding

We just saw how to encode to H.264 using the libx264 codec, but the latest Zeranoe FFmpeg builds for Windows now support hardware accelerated encoding on machines with Nvidia GPUs (even older ones), which significantly speeds up the encoding process. You use this powerful feature by changing the libx264 codec to h264_nvenc:

ffmpeg -i input.mp4 -c:v h264_nvenc output.mp4

To use hardware acceleration for H.265/HEVC, use hevc_nvenc instead:

ffmpeg -i input.mp4 -c:v hevc_nvenc output.mp4

If you get any error messages, either your FFmpeg version or your GPU does not support hardware acceleration, or you are using an unsupported -pix_fmt. There is unfortunately no hardware acceleration support in FFmpeg for the VP9 codec.
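
You can check whether your FFmpeg build was compiled with the NVENC encoders before trying them:

```shell
# List all available encoders and filter for NVENC
ffmpeg -hide_banner -encoders | grep nvenc
```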

We noticed one strange artefact when using h264_nvenc and hevc_nvenc in combination with scaling. For example, when we scaled a 4096x4096 video down to 3840x2160 pixels, the height of the output video showed correctly as 2160 pixels, but the stored_height was 2176 pixels for some reason, which causes issues when trying to play it back on Android 360º video players.

Resize video to UHD@30fps

At the moment, the most common playback resolution for 360º video is the UHD resolution of 3840x2160 at 30 frames per second. The commands we have to add for this are:

-vf scale=3840:2160,setdar=16:9 -r 30

Which results in something like this:

ffmpeg -i input.mp4 -vf scale=3840:2160,setdar=16:9 -r 30 -c:v libx265 -b:v 15M -pix_fmt yuv420p -c:a aac -b:a 192K output.mp4

Add, remove, extract or replace audio

Add an audio stream to a video without re-encoding:

ffmpeg -i input.mp4 -i audio.aac -c copy output.mp4

However, in most cases you will have to re-encode the audio to fit your video container:

ffmpeg -i input.mp4 -i audio.wav -c:v copy -c:a aac output.mp4

Remove an audio stream from the input video using the -an command:

ffmpeg -i input.mp4 -c:v copy -an output.mp4

Extract an audio stream from the input video using the -vn command:

ffmpeg -i input.mp4 -vn -c:a copy output.aac

Replace an audio stream in a video using the -map command:

ffmpeg -i input.mp4 -i audio.wav -map 0:0 -map 1:0 -c:v copy -c:a aac output.mp4

You can add the -shortest flag to force the output video to take the length of the shortest input file if the input audio file and the input video file are not exactly the same length.
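
With -shortest added, the replace-audio command above becomes:

```shell
# Output ends when the shorter of the two inputs runs out
ffmpeg -i input.mp4 -i audio.wav -map 0:0 -map 1:0 -c:v copy -c:a aac -shortest output.mp4
```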

Sequence to video

Many high-end video pipelines work with DPX, EXR or TIFF sequences. To turn such a sequence into a video file, pass a filename pattern like input_%04d.dpx as the input, use -framerate (placed before -i, since it is an input option) to set the input frame rate, and -r to set the output frame rate:

ffmpeg -framerate 59.94 -i input_%04d.dpx -c:v libx264 -b:v 30M -r 29.97 -an output.mp4
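
Going the other way, i.e. dumping a video out to an image sequence, uses a numbered filename pattern on the output side (the output name here is illustrative):

```shell
# Extract every frame as a numbered PNG: frame_0001.png, frame_0002.png, ...
ffmpeg -i input.mp4 frame_%04d.png
```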

Stereo to mono

We can use video filters to cut the bottom half of a stereoscopic top-bottom video to turn it into a monoscopic video:

ffmpeg -i input.mp4 -vf crop=h=in_h/2:y=0 -c:a copy output.mp4
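
For side-by-side stereoscopic footage, the equivalent sketch crops the left half instead of the top half (this variant is my assumption, not from the original cheat sheet):

```shell
# Keep only the left eye of a side-by-side stereo video
ffmpeg -i input.mp4 -vf crop=w=in_w/2:x=0 -c:a copy output.mp4
```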

Cut a piece out of a video

Use -ss to set the start time in the video and -t to set the duration of the segment you want to cut:

ffmpeg -ss 00:01:32 -i input.mp4 -c:v copy -c:a copy -t 00:00:10 output.mp4

The above command seeks to 1 minute and 32 seconds into the video, and then outputs the next 10 seconds. As you can see, -ss is placed before the -i option, which results in much faster (but slightly less accurate) seeking.
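
If you need a frame-accurate cut and can afford re-encoding, place -ss after -i and drop the stream copy:

```shell
# Slower but frame-accurate: decode, seek precisely, re-encode
ffmpeg -i input.mp4 -ss 00:01:32 -t 00:00:10 -c:v libx264 -c:a aac output.mp4
```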

Concatenate two videos

Concatenation is not possible with all video formats, but it works fine for MP4 files, for example. There are a couple of ways to concatenate video files; I will only describe the one that worked for me, which requires you to create a txt file with the paths to the files you want to concatenate.

You can only concatenate without re-encoding if the files you want to concatenate have the exact same encoding settings:

ffmpeg -f concat -i files.txt -c copy output.mp4

In the files.txt file, list the paths to the files you want to concatenate:

file '/path/to/video1.mp4'
file '/path/to/video2.mp4'
file '/path/to/video3.mp4'

You can add -safe 0 if you are using absolute paths. If you miss some frames after concatenation, keep in mind that the concatenation happens on I-frames, so if you don't cut at exactly the right frame, FFmpeg will discard all frames up to the nearest I-frame before concatenating.
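
On Linux or macOS you can generate files.txt automatically; this sketch assumes every .mp4 in the current directory should be joined, in name order:

```shell
# Write one "file '...'" line per .mp4 in the current directory
for f in *.mp4; do
  echo "file '$f'"
done > files.txt
```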


Thanks a lot! I did my own cheat sheet but this is way better and much cleaner!
Just out of curiosity, what are the possibilities of the -map option? Pick a specific channel and modify it sort of thing?


Hi Etienne,

Thanks for your comment! The -map function indeed allows you to "map" a stream in an input file to a stream in the output file. This allows you to add or replace streams in a file.



jindig commented Apr 11, 2017

Thanks for sharing this! Super helpful.



When running

ffmpeg -i input.mp4 -vf crop=h=in_h/2:y=0 -c:v copy -c:a copy output.mp4

I get:

Filtergraph 'crop=h=in_h/2:y=0' was defined for video output stream 0:0 but codec copy was selected.

This works:

ffmpeg -i input.mp4 -vf crop=h=in_h/2:y=0 -c:a copy output.mp4

cheers - James


Ah thanks @jbroberg! I will change it.


etienne3k commented Oct 2, 2017

Here's the best gathering of info I found on FFmpeg so far (besides this cheat sheet of course ;) )

And one real nice generator here:


Hello Nick, is there a way to rotate the point of view say 90 degrees?

With exiftool I use InitialViewHeadingDegrees


Not sure how to do that with FFmpeg, Detlef. We use Premiere for this.


jacobmartinez3d commented Jun 6, 2018

Thank you for this Guide! This one might also be helpful:

ffmpeg -i <left_input> -i <right-input> -filter_complex "[0:v]scale=iw:ih/2[top]; [1:v]scale=iw:ih/2[bottom]; [top][bottom]vstack[ou]" -map "[ou]" -c:v libx264 -b:v 30M -pix_fmt yuv420p <squished over-under.mp4>

This will create a single squished over-under video from 2 separate input streams (left and right-eye renders).


zcream commented Jul 14, 2018

I have 2x 1080p streams and I want to upscale them to 1920*2160 and get SBS Left-right. However, my encode ends up at 7680x2160. Any ideas why (should be 3840x2160) ?

ffmpeg -I -I -filter_complex "[0:v]scale=1920:2160[left]; [1:v]scale=1920:2160[right]; [left][right]hstack[sbs]" -map [sbs] -c:v libx264 -pix_fmt yuv420p -an sbs.mp4


vorken commented Dec 5, 2018

I would like to add to Concatenate files scripts.
This bat creates list.txt with mp4 files sorted by name from the folder and concatenates them.

:: Edit the line below to match your path to the ffmpeg executable.
set path2exe=C:\Users\.....\Downloads\ffmpeg\bin\ffmpeg.exe
(for %%i in (*.mp4) do @echo file '%%i') > list.txt
%path2exe% -f concat -i list.txt -c copy camera.mp4

This one searches for concatenate.bat files placed in subfolders and launches them in order.
@echo on
For /R .\ %%a IN (*concatenate.bat) do (
  cd "%%~da%%~pa"
  %%a
)


urish commented Oct 10, 2019

Add the -strict unofficial flag to the ffmpeg command line in order to copy the video projection metadata as well. See this stackoverflow answer for more info.


Kunkles commented Nov 12, 2019

is there a way to use FFMPEG with the new GoPro MAX? their app blows


Thats a great cheat sheet thank you!
I am looking for a command which would allow me to take a .360 video (2 camera sensors - GoPro Max) and split it into 2 - to get raw data (video frames or even videos) - to have data from each sensor separately. I have been looking around, tried using qscale, but so far unsuccessful.
Any advice would be highly appreciated, thanks!


tanwwg commented Feb 1, 2021

Great cheat sheet, thanks!

I'm wondering, does anyone know how we can create a non-VR (i.e. flattened) thumbnail from a VR video?


Great cheat sheet, thanks!

I'm wondering, does anyone know how we can create a non-VR (i.e. flattened) thumbnail from a VR video?

Use jacobmartinez' solution to cut the video

ffmpeg -i <left_input> -i <right-input> -filter_complex "[0:v]scale=iw:ih/2[top]; [1:v]scale=iw:ih/2[bottom]; [top][bottom]vstack[ou]" -map "[ou]" -c:v libx264 -b:v 30M -pix_fmt yuv420p <squished over-under.mp4>

Then take a screenshot of it, and use fisheye (this is for diagonal):

ffmpeg -i [input] -vf v360=fisheye:equirect:ih_fov=360:iv_fov=360 [output]

Just an idea, didn't actually test it.


tanwwg commented Feb 15, 2021

Figured it out, @cooperdk's comments were a great help.

I used a -vf filter of

to flatten out a sbs 180 video.


Hey @nickkraakman (or anyone), do you know how to convert an EAC video from youtube to regular rectangular? I keep seeing people posting about using ffmpeg to do it, but no one says how. I work with students, so I'm really new to the world of ffmpeg. Any help would be much appreciated.


oldclock commented Oct 5, 2021

Hi @sultanreflects, I assume that you want a regular (no-3D no-VR) video, here is the command:
ffmpeg -i input.mp4 -vf v360=eac:rectilinear output.mp4


Hey @oldclock, thanks for the response. When you say no-3D no-VR, you mean it won't be stereoscopic like TB or LR? And is there a way to retain the stereoscopy in the conversion? I'm trying to load 3D videos on our headsets, but the school's wifi doesn't let you get online through the Quest, so I'm trying to put together a library of history videos that are already downloaded to the device and not streamed from youtube.


oldclock commented Oct 7, 2021

Hi @sultanreflects, it seems that you just want to watch YT 360 videos offline, right?
The easiest way is to download them with the correct command, so that you get the equirectangular format (which is supported by almost any VR player) instead of the EAC format.
For example, by passing --user-agent "" to youtube-dl.
For more information, please see the discussion here.


gmat commented Mar 23, 2022

And one real nice generator here:

The tool is now here:


gmat commented Mar 23, 2022

is there a way to use FFMPEG with the new GoPro MAX? their app blows

You can use one of these forks,
based on this work and this (link updated).
The use of the filter should look like this:
./ffmpeg -hwaccel opencl -v verbose -filter_complex '[0:0]format=yuv420p,hwupload[a] , [0:5]format=yuv420p,hwupload[b], [a][b]gopromax_opencl, hwdownload,format=yuv420p' -i IN.360 -c:v libx264 -map_metadata 0 -map 0:3 OUT.mp4


I am using FFmpeg to convert dual fisheye 360 video recorded on a Rylo camera into equirectangular format. My goal is to stitch videos in bulk on the PC instead of using the Android app and dealing with large video files on my phone. I’ve got it stitching at an acceptable quality level. What I’m losing in the conversion is the "always upright" characteristic that is preserved when the Rylo Android app processes the same raw dual fisheye source file. I’m pretty sure the dual fisheye source .mp4 file must contain some metadata that the Android app is able to read. How can I make sure that data is processed when FFmpeg stitches that same source file?

Here's a video of the issue:

I posted this question in StackExchange:


Youtube - half sideways (EAC) to equirectangular

If you come across the videos where the top is right-side up and the bottom is rotated clockwise 90 degrees, and you need equirectangular, you are probably after:

ffmpeg -i input.mp4 -vf "v360=eac:e" output.mp4

I found the answer on this awesome wiki


Is there a way to change the point of view using ffmpeg within a 360 video?


Is there a way to change the point of view using ffmpeg within a 360 video?

You can do it in Premiere Pro using the built in 360 editor.


Thanks, but I need to change the POV at certain times in the video using ffmpeg, programmatically.


rae commented Mar 15, 2024

Replace hevc_nvenc with hevc_videotoolbox for Apple Silicon Macs


stalker314314 commented Apr 2, 2024

I am trying to split a large file (to upload Insta360 footage to OpenStreetMap Mapillary) and I am doing ffmpeg -i input.mp4 -c copy -map 0:0 -segment_time 00:05:00 -f segment -reset_timestamps 1 out%03d.mp4, but I cannot get the side data to be added (I tried -strict experimental). Does anyone have an idea how to simply split a 360 video into multiple 360 videos (with the same projection) with side data?

edit: managed to add it without ffmpeg, in later step using exiftool -api LargeFileSupport=1 -tagsFromFile input.mp4 -all:all out000.mp4
