
@logiclrd
Last active May 26, 2023 06:06
FFmpeg: Ultimate film grain
ffmpeg -i "HD Splice 1080p No Grain.mkv" -i "HD Splice 1080p No Grain.mkv" -filter_complex "
color=black:d=3006.57:s=3840x2160:r=24000/1001,
geq=lum_expr=random(1)*256:cb=128:cr=128,
deflate=threshold0=15,
dilation=threshold0=10,
eq=contrast=3,
scale=1920x1080 [n];
[0] eq=saturation=0,geq=lum='0.15*(182-abs(75-lum(X,Y)))':cb=128:cr=128 [o];
[n][o] blend=c0_mode=multiply,negate [a];
color=c=black:d=3006.57:s=1920x1080:r=24000/1001 [b];
[1][a] alphamerge [c];
[b][c] overlay,ass=Subs.ass"
-c:a copy -c:v libx264 -tune grain -preset veryslow -crf 12 -y Output-1080p-Grain.mkv
@logiclrd
Author

logiclrd commented Apr 19, 2019

Methodology:

  1. Create uniform film grain noise. Initially, every pixel is randomized to a desaturated gray field of noise; then the deflate and dilation filters are used to subtly spread the noise out so that it isn't lined up exactly with pixels. This is done at twice the video resolution and then scaled down, because the deflate and dilation filters create features larger than we want, and scaling down by 50% shrinks those features too. (A sketch for previewing just this noise plane follows at the end of this comment.) EDIT: Be sure to give this source the same framerate as your video!

  2. Take an input of our source video and convert it to a luma plane that is brightest in the areas most obvious to the viewer. I experimentally determined this to be around luma 75, which seems to be where faces are in the first few minutes of my video source. This might require tweaking.

  3. Multiply the uniform noise field by the luma field, so that areas selected (by luma) for more noise have brighter noise, and areas far away from the desired luma are almost if not entirely black.

  4. Invert the output of this, so that areas where no noise is desired are pure white, and noise "pulls down" on the white. This is the final noise mask.

  5. Take a second input of the same source video and blend it with the final noise mask. Apply any other desired filters.

  6. Encode with settings that various searches of The Google have suggested are good for high-grain input:

  • -tune grain
  • -aq-strength 1.9 -- I don't know what this is but I found a page where someone suggested high aq-strength to preserve grain. :-)
  • -crf 12 -- we all have unlimited hard disk space, right?

This runs at about 0.7 FPS on my system, but hey, I've got a few weeks until the showing. :-)
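If you want to preview just the generated grain plane (step 1) before committing to a run like that, a minimal sketch is below -- the same filters as the big command, but fed to ffplay with a short 10-second duration; adjust s=, r= and the scale target to match your own video:

# preview the standalone noise plane at the target resolution and framerate
ffplay -f lavfi "
color=black:d=10:s=3840x2160:r=24000/1001,
geq=lum_expr=random(1)*256:cb=128:cr=128,
deflate=threshold0=15,
dilation=threshold0=10,
eq=contrast=3,
scale=1920x1080"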

@logiclrd
Author

Update: Reducing the number of general expressions has doubled the speed. :-)

@logiclrd
Author

Update: took out -aq-strength 1.9, because I did a comparison and couldn't tell the difference at CRF 12 between -tune grain's -aq-strength 0.5 and -aq-strength 1.9. I've just decided to go with what -tune grain suggests.

@logiclrd
Author

If you want to play with this, you can tune the amount of grain that is applied by altering the 0.15* in the geq filter near the middle. This implementation always pulls down the brightness with the grain, so the grainier you make it, the darker you make it -- you may want to add another filter to push the brightness back up a bit in that case.
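As a concrete sketch of that (0.30 and the small brightness bump below are illustrative values I haven't tested, not recommendations), the band-selection line becomes

[0] eq=saturation=0,geq=lum='0.30*(182-abs(75-lum(X,Y)))':cb=128:cr=128 [o];

and, to compensate for the extra darkening, the final overlay line could gain an eq filter:

[b][c] overlay,eq=brightness=0.02,ass=Subs.ass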

@logiclrd
Author

Here's a walk-through of the computations:

  1. It starts with white noise:

white noise

  2. Then it uses the "deflate" and "dilation" filters to cause certain features to expand out to multiple pixels:

made clumpy with deflate and dilation

The effect is pretty subtle but you can see that there are a few larger "blobs" of white and black in amongst the noise. This means that the features of the noise aren't just straight-up single pixels any more.

  3. Then, that image gets halved in resolution, because it was being rendered at twice the resolution of the target video.

scaled down

The highest-resolution detail is now softened, and the clumps of pixels are reduced to 1-2 pixels in size. So, this is the noise plane.

Then, I take the source video and do some processing on it.

original frame

  4. Desaturate:

desaturated

  5. Filter luminance so that the closer an input pixel is to luminance level 75 (arrived at experimentally), the brighter the output pixel; the further the input is from 75, the darker the output. This creates "bands" of brightness where the luminance level is close to 75. (A sketch for retargeting this band follows the walk-through.)

luminance band filter

  6. This is then scaled down, and this is where the level of noise is "tuned". This band selection means that we will be adding noise specifically in the areas of the frame where it will be most noticed. Not adding noise in other areas leaves more bits to encode the noise.

luminance band filter, scaled

  7. This scaled mask is then applied to the previously-computed noise. In this screenshot, I've removed the tuning so that the noise is easily visible:

masked noise, unscaled

The areas not selected by the band filter are greatly scaled down and are essentially black; the noise variation fades to nothing.

Here's what it looks like with a scaling factor of 0.32 -- pretty subtle:

masked noise, scaled

  8. I then invert this image, so that the parts with no noise are solid white, and the areas with noise pull down slightly from the white:

film grain alpha channel

  9. Finally, I pull another copy of the same source video, apply this computed image to it as an alpha channel, and overlay it on black, so that the film grain dots, which are slightly less white, become slightly darker pixels.

final image

The effect is pretty subtle and hard to see in a still like that when it's not moving, but if you tune the noise way up, you can get frames like this:

final image, exaggerated
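If the regions you care about sit at a different luminance than 75 in your own source, the band centre in step 5 can be moved; as a sketch (110 here is just a made-up target for illustration), the band-selection line would become

[0] eq=saturation=0,geq=lum='0.15*(182-abs(110-lum(X,Y)))':cb=128:cr=128 [o];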

@chkuendig

Thanks for this great guide! I tried this out, but I ended up with some weird effect where the grain is not in sync with the image; it's especially visible when the camera pans (look at the arch):

vlcsnap-2020-01-17-20h49m20s207

Any idea what might be causing this?

@logiclrd
Author

Hmm, is it possible that you have a resize or crop filter that isn't being applied to every input the video is sourced from?

@chkuendig

I figured it out -- for future reference: it was the framerate. You used 24000/1001 for the grain, while my source was 25 fps.

@logiclrd
Author

Ah, right, that makes sense! Thanks for writing that out for future readers. Always make sure the framerate is the same for all inputs :-)
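For example, with a 25 fps source, the two color sources in the graph (the first line of the filter graph and the [b] source) would become the following, with d= set to your own runtime in seconds (3006.57 here is just the value from the original command):

color=black:d=3006.57:s=3840x2160:r=25,
color=c=black:d=3006.57:s=1920x1080:r=25 [b];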

@kocoten1992

Hello, I want to explore this art, but does adding grain with this method increase the video size much? I'm looking for a way to add synthetic noise without much (or any) increase in file size.

@logiclrd
Author

logiclrd commented Sep 5, 2020

Noise is extremely difficult to encode well. As a rough approximation, video compression works by separating the signal out into different "frequencies" -- a gradual gradient in a background is low-frequency, while sharp edges and noise are high-frequency. Each band of frequencies is then encoded separately; low-frequency data requires very little bandwidth to encode reasonably well. High-frequency data, though, has a great deal of information for the same number of pixels. When you constrain a video encoder's bitrate, it is the high-frequency data that is most heavily affected. If you legitimately want noise at the pixel level throughout every frame, then you need to give the encoder lots of bits to work with, otherwise the noise will get filtered out, and will probably serve only to decrease the quality of the end result, because it may cause the boundaries between macroblocks to be less likely to match up.

I haven't experimented heavily with this. In my application, having a 17 GB file for 45 minutes of video is entirely no big deal. I encourage you to try different quality levels and see what happens to the noise. My settings are probably way overkill, I just set it high enough to be absolutely sure I wouldn't run into issues with the available bits constraining the noise in any visible way.

I've made a (contrived) example to demonstrate what I'm talking about. This animation switches between an original image that has noise at various levels, including per-pixel, and that same image saved and reloaded using frequency-domain compression:

noise-comparison

You can see that the compressed version has lost the finest detail of the noise. It's still "noisy", but that noise has a resolution much larger than a pixel, and in fact ends up being distractingly blocky because the compressor is being pushed past its limits with regard to the edges of blocks of compressed pixels matching up.

This is exactly what will happen in a video file if you add a lot of high-frequency noise to it but try to keep the filesize small.
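If you want to run that kind of comparison on your own material, a quick sketch (file names here are placeholders, and -t 60 keeps the test clips short) is to encode the same clip at two quality levels and watch how the grain survives:

ffmpeg -t 60 -i Graded.mkv -c:v libx264 -tune grain -preset slow -crf 12 -an Test-CRF12.mkv
ffmpeg -t 60 -i Graded.mkv -c:v libx264 -tune grain -preset slow -crf 20 -an Test-CRF20.mkv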

@kocoten1992

Thanks for the detailed answer, really appreciate that!

I found an alternative art: add noise at run time.

For example:

  • In VLC, use Tools -> Effects and Filters -> Video Effects -> Film Grain
  • In browsers, programmatically add noise via the canvas API

(I'm working at a streaming firm - increasing filesize a lot is very out of question 😃).

One drawback of this approach is the banding effect when the video is encoded in 8-bit at a low bitrate: the combination of banding (low quality) and film noise (high quality) feels really weird (like someone who hasn't taken a shower in a week putting on fragrance). It could be mitigated by using a 10-bit video encode, but currently no browser supports that; I really hope they do in the future.

banding

(sharing this in case anyone is looking for the same thing as me)
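For anyone who does control the encode, a 10-bit x264 encode is mostly just a pixel-format change; as a sketch (file names are placeholders, and this assumes your ffmpeg's libx264 was built with 10-bit support):

ffmpeg -i input.mkv -c:v libx264 -pix_fmt yuv420p10le -preset slow -crf 18 -c:a copy output-10bit.mkv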

@logiclrd
Author

logiclrd commented Sep 7, 2020

I suspect the only way to really achieve what you're looking for will be to add film grain only if the bitrate is high enough to eliminate macroblocking. But, perhaps judiciously applying a denoise filter before encoding could allow a lower bitrate to do a good job conveying smooth frames, and then that would be a suitable thing to add fake film grain to at playback time.
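A minimal sketch of that second idea, using ffmpeg's hqdn3d denoiser (the strengths shown are just the filter's defaults and would need tuning per source; file names are placeholders):

# denoise before encoding so a lower bitrate copes better, then add fake grain at playback
ffmpeg -i input.mkv -vf hqdn3d=4:3:6:4.5 -c:v libx264 -preset slow -crf 22 -c:a copy output-denoised.mkv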

@HannesJo0139

I am trying to beautify a TV series with this, but in every episode, after 50 minutes and 6 seconds (the movie length), ffmpeg throws "EOF timestamp not reliable" and from then on the grain no longer changes between frames (as if it's not temporal). Any idea what could cause this?

@RollingStar

OP, what is your goal?

kocoten1992, this is similar to what AV1 is trying to do. I am unsure how well it works in real-world tests.

https://norkin.org/research/film_grain/index.html

@logiclrd
Author

My goal is simulated film grain that looks more like the real thing than just per-pixel noise.
