
@mrdoob
Created October 30, 2011 02:48
ffmpeg: recording the screen.
ffmpeg -f x11grab -b 1M -bt 2M -r 30 -s 512x512 -i :0.0+1,53 -an kinect.webm
@chriskilding

Hi, could you confirm whether the [OpenNI viewer -> ffmpeg's x11grab -> WebM/VP8 codec] combination captures all 16 bits/channel necessary to encode the Kinect depth map fully, or is it downsampled / truncated to 8 bits?

@remmel

remmel commented Apr 14, 2021

@chriskilding:
I think you are referring to the 16 bits required when working in grayscale, as is usual for the Kinect: 1 unit <=> 1 mm, so 16 bits gives a maximum range of 2^16-1 = 65535 mm, i.e. about 65.5 meters. The Kinect's maximum distance is under 10 meters.
But the video captured here is in color (https://threejs.org/examples/textures/kinect.webm), and images displayed on a computer are usually 8 bits per channel, so RGB gives 2^(8*3) = 16777216 values; no depth information need be lost (though some frames will probably be dropped or duplicated).
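The point that 24 bits of RGB comfortably hold a 16-bit depth value can be sketched as a round trip. The packing scheme below (high byte in red, low byte in green, blue unused) is a hypothetical illustration, not the encoding the three.js demo actually uses:

```python
def encode_depth_rgb(depth_mm: int) -> tuple[int, int, int]:
    """Pack a 16-bit depth value (in mm) into two 8-bit color channels.

    Hypothetical scheme: high byte -> R, low byte -> G, B unused.
    """
    assert 0 <= depth_mm <= 0xFFFF
    return (depth_mm >> 8) & 0xFF, depth_mm & 0xFF, 0

def decode_depth_rgb(r: int, g: int, b: int) -> int:
    """Recover the 16-bit depth from the two color channels."""
    return (r << 8) | g

# Round trip: no precision is lost when 16 bits ride in 24 bits of RGB.
for d in (0, 1234, 9999, 0xFFFF):
    assert decode_depth_rgb(*encode_depth_rgb(d)) == d
```

(In practice, lossy video codecs like VP8 can perturb channel values, so real depth-in-color encodings spread bits more defensively; the sketch only shows the capacity argument.)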
Also, when working in a browser, we cannot rely on the browser's PNG/JPG decoder to view a 16-bit grayscale image, as it only handles 8 bits per channel RGB, and thus 8-bit grayscale. In that case you will have to read the image as binary and decode it yourself to get the value of each pixel.
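Decoding the raw pixels by hand, as suggested above, looks roughly like this in Python (in a browser the equivalent would use fetch plus a DataView); the little-endian two-bytes-per-pixel layout here is an assumption about the file format:

```python
import struct

# Simulate a raw dump of four 16-bit little-endian depth pixels (in mm).
raw = struct.pack('<4H', 500, 1000, 0, 65535)

# Decode: two bytes per pixel, unsigned, little-endian.
count = len(raw) // 2
depths = struct.unpack('<%dH' % count, raw)
print(depths)  # (500, 1000, 0, 65535)
```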
