10-Bit H.264 explanation
10-Bit H.264
For all those who haven’t heard of it already, here’s a quick rundown about the
newest trend in making our encodes unplayable on even more systems: So-called
high-bit-depth H.264. So, why another format, and what makes this stuff
different from what you know already?
First off: What is bit depth?
In short, bit depth is the level of precision that’s available for storing color
information. The encodes you’re used to have a precision of 8 bits (256 levels)
per color channel. There are usually three color channels, so that makes a bit
depth of 24 bits per pixel, which is also the most commonly used bit depth of
modern desktop PCs. Now, you can use a higher bit depth for video encoding, and
x264 currently allows up to 10 bits per channel (1024 levels and 30
bits per pixel), and of course that allows for much higher precision.
But: Most graphics cards and display devices don’t allow more than 24 bits per
pixel.
This makes higher bit depth sound pretty pointless, so why are we doing this?
Here’s a bit of side info: Most LCD displays (TN panels to be precise) can only
represent a bit depth of 6 bits per channel (a mere 64 levels). This would look
pretty awful under normal circumstances, so these displays use a little trick
called “dithering” to simulate a bit depth of 8 bits per channel. In simplified
terms, this means that the panel’s controller quickly alternates between the
nearest colors in a dynamic pattern. When done correctly, this creates the
illusion of a higher color accuracy than what the panel is actually capable of
displaying.
The exact same trick can be used to display high-bit-depth encodes.
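As a toy illustration of that trick (my own sketch, not from the original post -- real panel controllers use spatial as well as temporal patterns), here is how alternating between the two nearest 6-bit levels recovers an in-between 8-bit value on average:

```python
def truncate_to_6bit(value_8bit):
    """Plain truncation: drop the two least significant bits."""
    return (value_8bit >> 2) << 2

def dither_to_6bit(value_8bit, frame):
    """Alternate between the two nearest 6-bit levels over time.

    `frame` stands in for the temporal pattern a panel controller uses.
    """
    base = (value_8bit >> 2) << 2          # lower neighbouring 6-bit level
    remainder = value_8bit & 0b11          # 0..3, the precision that was lost
    # Spend `remainder` out of every 4 frames on the higher level:
    return base + (4 if frame % 4 < remainder else 0)

value = 130                                # an 8-bit level between 6-bit steps
print(truncate_to_6bit(value))             # always 128: the extra detail is gone
frames = [dither_to_6bit(value, f) for f in range(4)]
print(frames, sum(frames) / 4)             # mix of 132s and 128s averaging 130.0
```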
But by that logic, couldn’t we just encode with 8 bits and hardcode that
dithering?
Of course that’s possible, and in fact we’re already doing this to prevent
so-called banding (http://en.wikipedia.org/wiki/Colour_banding).
But this also has a big drawback: The bitrate required to keep the dithering
intact is disproportionately high.
This brings us to the real advantage of higher bit depths: We can save bandwidth
even if the source only uses 8 bits per channel.
That’s right: Not only do we no longer need to hardcode any dithering, but higher
bit depth also means higher error tolerance. Losing one bit of information in
an 8-bit color space is equivalent to losing three bits in a 10-bit color space,
and thus the same quality can be achieved with less bitrate. Want an example?
One of my first tests was encoding episode 13 of Shakugan no Shana from a DVD
source, with dithering added to prevent banding. I used the exact same input and
settings for both encodes.
The video track of the 8-bit encode is 275 MiB, while the 10-bit encode is no
more than 152 MiB and doesn't look any worse -- in fact, it even looks better
than the much larger 8-bit encode.
Now, if I hadn’t hardcoded the dithering for the 10-bit encode and instead
passed a high-bit-depth picture to x264, it would’ve resulted in even better
perceived quality and an even smaller file size!
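The error-tolerance claim can be sanity-checked with a quick back-of-the-envelope Python sketch (my own illustration, not from the post): quantising the same smooth gradient at 8 and 10 bits shows the worst-case rounding error shrinking by roughly a factor of four, which is the extra headroom the encoder gets to work with:

```python
def quantise_error(x, bits):
    """Worst-case distance from x in [0, 1] to the nearest of 2**bits levels."""
    levels = 2 ** bits - 1
    return abs(x - round(x * levels) / levels)

ramp = [i / 10000 for i in range(10001)]        # a smooth 0..1 gradient
err8 = max(quantise_error(x, 8) for x in ramp)
err10 = max(quantise_error(x, 10) for x in ramp)
print(err8 / err10)                             # roughly 4: 10 bits is ~4x finer
```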
That’s terrific, but there has to be a catch to this, right?
Unfortunately, yes. Software support is currently lacking in a lot of places,
but it’s being worked on. Decoders that don’t support higher bit depths don’t
simply fail to decode anything, but decode wrong information, which leads to
really annoying artifacts: http://screenshots.srsfckn.biz/10bit-decodefail.png
Note that also none of the available hardware accelerated decoders (VDPAU, DXVA,
CUVID, etc.) support this.
Currently, you have the following options for playing such content:
1. MPlayer2 (cross-platform, Windows builds at http://mplayer2.srsfckn.biz)
You might want to use SMPlayer as GUI (http://smplayer.srsfckn.biz)
2. VLC (cross-platform, use the nightly builds at
http://nightlies.videolan.org/build/win32/last/)
It’s not as bad as it used to be, seriously.
3. CCCP Beta (http://www.cccp-project.net/beta/)
Note that this is currently a CCCP exclusive feature, so you will not get
this by simply installing the most recent ffdshow-tryouts.
And what does this all mean for my precious fagsubs?
It means that we’re doing dual encodes until compatible software is more readily
available (i.e. CCCP supports it in a release build), but it also implies the
following:
1. much smaller encodes with the same or better perceived quality
2. slightly smaller but better looking encodes
3. same file size but much better quality, right up to transparency
(http://en.wikipedia.org/wiki/Transparency_(data_compression))
So, things can only get better! I’ll keep you posted.
============================JEEB’s Rant=================================
Just a quickie on current 10bit H.264 support:
- ffmpeg/libav have now had it for ~months (made by irock, they now have asm
optimizations by Jumpyshoes)
- mplayer(2) has had support for some time now (these builds recommended; can be
used with SMPlayer if you need a front-end)
- VLC will have it in their next release (you can test with nightlies from here)
- Lord patched it into FFDShow-tryouts (and I undumbed its swscale usage flags
so that RGB output wouldn't look like crap). It should work reasonably well,
although we are still scratching off some rough edges. For instance, it seems
we've stumbled onto a bug where VSFilter's internal color conversions aren't as
accurate as they could be. Whether the effects of this bug are actually visible
to people is a whole separate affair. Regardless, we're working on it.
What is this whole “10bit” affair?
Higher-than-8bit colorspaces are part of the H.264 standard, until now usually
only used in the "professional" zone. It's not really anything new, and there
actually was at least one DirectShow decoder for it available on the internet
before libavcodec got one (trivia: MainConcept's broadcast decoder). It just
wasn't picked up by the media companies for the masses; Blu-ray instead went
with hitting the source with immense amounts of bitrate paired with 8bit, and
thus no open source entrepreneur had taken it onto his or her TODO list until
irock developed 10bit encoding routines for x264 during last year's GSoC
program.
Unlike what would probably come to your mind first when thinking about “higher bit
depth in color”, its biggest merit for most of the people is not in the capability
of actually having a way to keep 10bit things 10bit (as most people pretty much have
no way of getting such content originally), or in the fact that you could use hyper
special rendering straight onto a 30bit display or whatever. It’s compression.
Even if your source is originally 8bit, encoding it in 10bit will make the
output suffer less from various compression artifacts. (This applies to lossy
compression, of course -- with lossless compression the "redundant" data would
actually start biting us, and the output wouldn't be identical to the 8bit
source in that case anyway.) In layman's terms, this means that lossy
compression will be more efficient at leaving things pretty, leading to smaller
files that look better in the end (Ateme's PDF on this).
Not to mention that even if one converts the 10bit picture into an 8bit one to
make it easier to deal with (for things like playback), the difference is
usually minuscule (after all, we are in the same 4:2:0 colorspace), and it
might even look better, as some conversion methods apply dithering in the
process.
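That down-conversion point can be illustrated with a small Python sketch of my own (not JEEB's code): simple error-diffusion dithering carries the rounding error forward, so the average 10-bit level survives the trip to 8 bits, where plain truncation loses it:

```python
def to_8bit_plain(samples_10bit):
    """Drop the two extra bits outright: sub-step detail disappears."""
    return [s >> 2 for s in samples_10bit]

def to_8bit_dithered(samples_10bit):
    """Quantise to 8 bits, diffusing each sample's rounding error forward."""
    out, err = [], 0
    for s in samples_10bit:
        v = s + err                 # add the error carried over so far
        q = v >> 2                  # quantise to 8 bits
        err = v - (q << 2)          # remember what this sample lost
        out.append(q)
    return out

flat = [514] * 8                    # a flat 10-bit level between two 8-bit steps
print(to_8bit_plain(flat))          # all 128s: the half-step of detail is gone
dith = to_8bit_dithered(flat)       # alternating 128s and 129s
print(sum(dith) / len(dith) * 4)    # 514.0: the mean survives the conversion
```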
@Ragazzo

commented Jan 5, 2016

@l4n9th4n9 thanks for the explanation 👍

@Lakis01

commented Jun 20, 2016

Thanks as well! Very good explanation!
Does kind of the same apply to 12-bit transcodes?
Calculating effort aside is it better to encode a x264 8-bit high-bitrate source to x265 10-bit or 12-bit; purely on grounds of subjective quality / Filesize?

@rpgdev

commented Jul 15, 2016

This seems to be a repost of an old post but let me just say this here once again because it still makes my blood boil: DISK SPACE IS CHEAP. It is cheap now and it certainly was cheap years ago even before the hi10p fad was imposed on the entire community. The amount of pain and inconvenience this whole hi10p "decision" has brought is completely unjustified when you consider the supposed benefits. If after all this the best thing you can say for yourself is that you helped people save some space then this is not even funny: it's just a fucking tragedy. To top it off most people won't ever see the difference on the actual image being reproduced. So let's see where this leaves us: people STILL can't play their hi10p anime on their iPhones, Android phones or tablets, iPads, Apple TV, Fire TV, chromecast or any Android box. Not even the most powerful Android TV device yet (the Nvidia Shield) has a perfect hi10p playback. So essentially you still need a computer to play it. In this day and age when everybody is cord-cutting and we can consume our multimedia seamlessly on any device we STILL would need to power on our computer to watch this stuff. The icing on the cake is that if 10bit is what they needed they could easily move to x265 10bit, a standard that it's actually being supported by almost every hardware manufacturer worth naming but then I remember that the purpose is to inconvenience the most people possible so it makes sense that they haven't moved to that standard yet. I hope you're all happy.

@joegood

commented Jul 29, 2016

By 'perfect playback', what do you mean? I'm asking honestly and not jabbing you. I'm trying to gain some insight into h.265.

I'm quite the noob with h.265/HEVC but recently purchased the Shield and have been busy doing the grunt work transcoding my discs and using Plex as the server. I heard the Shield had HEVC decoding in hardware, so I've been experimenting to see what benefits or limitations there are, trying every encoding profile and option. On low-end PCs, I can't even view an HEVC video directly in VLC. But those same PCs can stream it just fine thru Plex without transcoding, even at 10-bit. I've had numerous Android phones and Win10 PCs streaming at the same time, all with varying levels of CPU, and they can all play these videos. I have an older phone that always triggers transcoding, but the server did the heavy lifting on that. I'm not going to try them directly, though. Copying a 12GB encoding of Avatar to a phone doesn't make sense.

Visually, the h.265 files mostly look like the h.264. I have to use 'mostly' because there is one issue I can't seem to overcome, yet. (the noob blues) The original X-Men has a lot of dimly lit scenes and they look horrible in h.265. In the end, it doesn't make sense to throw HEVC at that one anyway just to save a GB.

I need to counter, though, about the disk space being cheap. That's only for certain systems. If I was going the dedicated PC or NAS route, yes, it would be quite easy to get into the double-digit TB range inexpensively. But, not everybody is going to do that. Most devices are going to have a relatively small (<500GB) storage footprint, ranging from phones and tablets (<=128GB) to laptops (<500GB). Newer laptops have extra space if you specifically add it but most people don't. I have a 1TB secondary drive in mine and completely filled that up during this ripping project. So I started adding some 2TB USB drives. To get ones that look good next to the Shield, I'm somewhat limited as to the max space I can have there. Even when you look at enterprise decisions, small savings in space yield tremendous gains. Wasn't it Dropbox that recently announced they're re-encoding JPEGs now with their own internal proprietary compression to gain a 20% increase in space that resulted in PB of storage freed up. If I stuck to DVDs, I think I could maintain this. The growth rate of small USB drives and flash sticks is going to outpace the space I would need. But blu-ray discs are a different beast entirely. I'm working with 1080p only and h.265 Avatar was a 12GB, far larger with h.264. 10 epic BD movies will eat up a significant portion of a 1TB drive. I haven't purchased one yet, but I'm itching to see what a 4K BD looks like. When presented with those file size options, I don't know if drives in my form factor will scale and still be considered cheap.

@Michaelangel007

commented Sep 6, 2016

Typo on the last word of line 71?
i.e. shouldn't that be fan subs?

@aslian

commented Jan 3, 2017

I H-A-T-E those 10bit releases. Now I need to transcode all this stuff back to 8bit before I can watch it. Really, f██ your color spaces and other s██t.

@rpgdev

commented Apr 4, 2017

@joegood I was talking about hi10p (x264 10bit). HEVC is x265 10bit and it's actually being supported by a lot of manufacturers. It's relatively easy to find a box that's able to reproduce HEVC these days and the nvidia shield should reproduce them without any issue whatsoever. Some groups are releasing anime using HEVC but too few to matter. I think most are actually going to stick with hi10p if you can believe it. So for the foreseeable future we're still gonna be able to reproduce a 1080p 15gb rip of an Avengers movie in our phones flawlessly (shit movies but I'm making a point here) but not a 22 minute, 300mb episode of an animated series. Which makes a lot of fu****ng sense.

@Igetin

commented Jul 25, 2017

I can only hope that the 10-bit H.264 meme within the anime scene dies when the AV1 format is supported by most hardware vendors.

@doyouevenunderstand

commented Aug 14, 2017

I'm tired of all the misconceptions, so let me explain this as simply as possible:

  • The first result on Google for "10 bit encode of 8 bit content" is an ad by a company that sells 10-bit encoding equipment. The supposed gains in compression come from comparing the HEVC codec to ancient MPEG-2. Duh, a modern codec is more efficient than an ancient codec from the 1990s. This has nothing to do with 10-bit versus 8-bit; the gains are purely because modern codecs are more efficient in general. So don't use MPEG-2, use something modern.

  • This page is the second result on Google. It's a terrible explanation full of errors. The author describes how they encoded a video from a DVD source and added dithering....

  1. Dithering always reduces image quality at equivalent or higher bitrates than the source. If the source has banding, dithering makes it fuzzier, but it doesn't improve the image. Dithering should only be used when REDUCING bit depth, as a means to increase the amount of available information. If you keep the same bit depth or increase the bit depth, dithering only adds noise and reduces the amount of available information.
  2. Smaller filesize results from using a more advanced (modern) codec, not from increasing the bit depth. "MPEG2 took 275MB and h264 took 152MB" simply shows that the newer codec is more efficient. You would get the exact same results if both files were 8-bit.
  3. 8-bit sources cannot magically look better when converted to 10-bit the same as 128kbps MP3s do not magically sound better when converted to lossless FLAC. Information cannot just be added from nowhere. Adding a dither is the same as adding white noise to an audio file, it reduces the signal-to-noise and makes the image/audio worse.
  4. Compression artefacts result from the codec, not the bit depth. Again it's the same mistake as the filesize comparison. An 8-bit source recompressed as 10-bit doesn't have less compression artefacts, it will have an equal amount or more artefacts, depending on the conversion.

So congratulations, you all drank the kool-aid.

@doyouevenunderstand

commented Aug 14, 2017

TL;DR IF THE SOURCE IS 8-BIT, CONVERTING IT TO 10-BIT WON'T MAKE THE OUTPUT SMALLER OR BETTER. The gains in filesize and image quality are all due to the fact that the comparison is between a modern codec and a very old codec. DO NOT DITHER YOUR ENCODES as you are simply adding noise. If the source has banding, get a better source, or check if perhaps your display is uncalibrated and the actual cause of the banding.

@Brainiarc7

commented Feb 15, 2018

Author: Delete this misleading nonsense.

@Minatory

commented Jul 26, 2018

Actually, I am now trying to edit videos on my iPad for the first time and do some color corrections. ProRes is basically out of the question, so I guess this 10-bit h264 is pretty neat. Those colors definitely won't be useless; in fact, 10-bit h264 is quite a good compromise between high-end and low-end when it comes to video codecs.

@TheRumbleFish

commented Nov 6, 2018

Good lord am I one confused idiot/noob lol.
I don't know - BUT - as soon as I first saw an anime in h265 10bit, I CLEARLY saw the difference in color and beauty right away.
I just have no idea what is going on at all.
So what kind of source do you need in order to get the 'beauty' of 10bit 265? Straight from Blu-ray only? Encoding something from h264 will not work?

@antifa-ev

commented Dec 11, 2018

@rpgdev @aslian You should better delete your misleading nonsense posts. Keeping them here destroys the whole rational atmosphere.

@rpgdev

commented Dec 13, 2018

@rpgdev @aslian You should better delete your misleading nonsense posts. Keeping them here destroys the whole rational atmosphere.

The only one not being rational here is the one person not putting forward an actual argument (you), deciding to launch personal attacks instead. Next time try to keep it "rational" and post something of substance.

@Mereck13579

commented Jan 14, 2019

Okay, I stumbled upon this thread while looking for a decent way to re-encode 10-bit h264 to 8-bit h264. Hi10 has basically been around for almost a decade now (you can find some XBMC threads from 2011 talking about hi10 support) and some groups still insist on it being great for whatever reason. I kinda gave up trying to reason with them, and hardware support is still non-existent (and probably won't ever be).
As my hardware sometimes isn't sufficient for software decoding without throwing an unnecessary amount of cash at it, I'm now looking into batch re-encoding those hi10 releases into hi8 without any noticeable loss (since I guess true lossless is basically impossible without inflating file size considerably). Does anyone have any pointers / recommendations (encoding settings most notably)? (Best would be something running on Linux, but I'm not gonna be nitpicky if a Windows-only solution is put forward.)

@HyerrDoktyer

commented Jan 17, 2019

@Mereck13579
x264 or ffmpeg would be fine, although I fundamentally disagree with what you're trying to do here; it would be best to just download 8-bit releases in the first place. The major problem I see here is that re-encoding files will result in a noticeably worse PQ and not to mention encoding time would also be a significant factor. Here is a very basic x264 string that should work for most animation:

--output-depth 8 --crf 15.0 --preset veryslow --profile high --level 4.1 --threads auto --no-mbtree --deblock=-2,-2 --aq-strength 0.7 --output
(pipe this through vapoursynth if you want to do any filtering/resizing)

This will be pretty slow; you could potentially speed it up by lowering --crf (and therefore increasing bitrate). --crf 0 is effectively a lossless file (from a lossy source, if you're transcoding Hi10p releases). You could also consider using NVENC if you have an Nvidia GPU; it's extremely fast compared to CPU-based encoding (over 20x faster on my machine), however the end result won't be as good as x264 at the same bitrate (I'm unsure if AMD has an equivalent).
Just a side note: there's a huge amount of misinformation in this thread. Much of what has been proposed here (such as avoiding banding and reducing filesize) can be tested and verified easily by anyone; even if you lack the knowledge to test it yourself, you can simply look at x265, which has taken many elements from x264 (including 10-bit encoding, and even higher) and is reaping the rewards.
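For the Linux batch use case asked about a few comments up, a thin wrapper along these lines could drive ffmpeg over a folder. This is a sketch of mine, not from this thread; the folder layout, output naming, and CRF value are arbitrary assumptions, so adjust to taste:

```python
import subprocess
from pathlib import Path

def build_command(src, dst):
    """Construct an ffmpeg call that forces 8-bit 4:2:0 output from a Hi10p file."""
    return [
        "ffmpeg", "-i", str(src),
        "-map", "0",                       # keep all video/audio/subtitle tracks
        "-c:v", "libx264",
        "-pix_fmt", "yuv420p",             # 8-bit 4:2:0 output
        "-crf", "15", "-preset", "veryslow",
        "-c:a", "copy", "-c:s", "copy",    # pass audio and subs through untouched
        str(dst),
    ]

def reencode_dir(folder):
    """Re-encode every .mkv in `folder`, writing *.8bit.mkv next to each file."""
    for src in Path(folder).glob("*.mkv"):
        dst = src.with_name(src.stem + ".8bit.mkv")
        subprocess.run(build_command(src, dst), check=True)

# Example: inspect the command for one (hypothetical) file instead of running it.
print(" ".join(build_command(Path("episode01.mkv"), Path("episode01.8bit.mkv"))))
```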

@Mereck13579

commented Feb 1, 2019

@HyerrDoktyer I totally agree with you on the 8-bit releases... when they are available. I've recently tried to catch up on some 'older' shows (> 3 years old) and some of them (rare, but not zero) only have 10-bit releases now. It's only for those that I'll bother doing that.
Also on x265: it appears it might become widely supported by manufacturers, so I guess this won't be a problem in the future as we'll have hardware acceleration for it, but I can't expect those shows I was referring to previously to be converted to that format anytime soon, unfortunately.

@liuqx0717

commented Mar 14, 2019

I don't think it's worth converting 8-bit video to 10-bit before encoding. It doesn't reduce file size, as shown in my experiment:

QP10, 8bit:       size = 78,934,906 bytes       SSIM = 0.963060
QP10, 10bit:      size = 294,103,911 bytes      SSIM = 0.964137
QP20, 10bit:      size = 95,149,689 bytes       SSIM = 0.962735

It seems that converting to 10-bit consumes more disk space for the same quality, so encoding directly in 8-bit may be better.

I did another experiment, which indicates the same:

QP25, 8bit:       size = 11,492,225 bytes       SSIM = 0.958648
QP36, 10bit:      size = 12,848,844 bytes       SSIM = 0.958160

The command I use to generate the test files:

ffmpeg -i test-4kh264.mp4 -vcodec libx264 -qp 10 qp10-x264.mp4
ffmpeg -i test-4kh264.mp4 -vf format=yuv420p10le -vcodec libx264 -qp 10 qp10-10bit-x264.mp4
ffmpeg -i test-4kh264.mp4 -vf format=yuv420p10le -vcodec libx264 -qp 20 qp20-10bit-x264.mp4

ffmpeg -i test-4kh264.mp4 -vcodec libx264 -qp 25 qp25-x264.mp4
ffmpeg -i test-4kh264.mp4 -vf format=yuv420p10le -vcodec libx264 -qp 36 qp36-10bit-x264.mp4
@artyuum

commented May 7, 2019

+1 for the introduction, it gave me a solid laugh, couldn't agree more.

For all those who haven’t heard of it already, here’s a quick rundown about the
newest trend in making our encodes unplayable on even more systems: So-called
high-bit-depth H.264. So, why another format, and what makes this stuff
different from what you know already?

@Barafu

commented Aug 16, 2019

Some people here should not make fools of themselves in bold capslock. I am not knowledgeable enough to verify if the article is right or not, but I know for sure that comparing lossy audio with video is like comparing Norwegian poetry with reflecting tape for traffic cones.
