Tuning Jellyfin for Large Libraries

Jellyfin, at least in its current state, hasn't been great as a media server for large libraries.

Database

It is no secret that Jellyfin's database has its shortcomings, especially when searching large media collections. To remedy this, we'll manually set some PRAGMA values in the SQLite database.

It is important to note that these optimizations may cause issues in the event of a sudden power loss, but they should be pretty safe to run in an environment like a datacenter, which is where most of my gear resides.

First, we're going to stop Jellyfin, then navigate to the application's config directory. Once inside, navigate to data/data and run the commands below.
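
On a bare-metal systemd install, that first step might look like the following sketch (the service name and data path are assumptions; adjust them to your install):

# Stop Jellyfin so nothing is holding the database open
sudo systemctl stop jellyfin

# Adjust this path to wherever your install keeps library.db
cd /var/lib/jellyfin/data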

Note: the following may be overridden by Jellyfin-enforced options; these settings are currently being tested in PR jellyfin/jellyfin#9044.

While performing the steps below is somewhat useful, there are options built into the app itself that restrict performance. Until the project sorts out WAL/vacuum during DB optimization and enforces a maintenance schedule, it's unlikely these changes will make it into their production builds. jellyfin/jellyfin#1655

sqlite3 library.db
-- page_size only takes effect once the database is rebuilt by VACUUM below
PRAGMA main.page_size = 4096;
PRAGMA main.cache_size = 10000;
PRAGMA main.locking_mode = EXCLUSIVE;
PRAGMA main.synchronous = NORMAL;
PRAGMA main.temp_store = MEMORY;
PRAGMA mmap_size = 30000000000;
-- VACUUM is a standalone statement, not a pragma; it rebuilds the database file
VACUUM;
PRAGMA optimize;

NGINX

On the NGINX side, there are a few things you can do:

Hosting the web UI separately from Jellyfin.

This lets NGINX serve the web UI separately from the Jellyfin server, which speeds up initial load times and reduces the traffic going through proxy_pass.

To host it separately, run the bash script below.

# Determine the latest jellyfin-web release tag
version=$(curl -fsSLI -o /dev/null -w %{url_effective} https://github.com/jellyfin/jellyfin-web/releases/latest | grep -o '[^/]*$')

# Clone the matching release tag (git clone -b accepts tag names directly)
git clone -b ${version} https://github.com/jellyfin/jellyfin-web /opt/jellyfin-web

cd /opt/jellyfin-web

# Update npm, install dependencies, and build the production bundle
npm install -g npm
npm install
npm run build:production

# Symlink the built web client to where nginx will serve it
ln -s /opt/jellyfin-web/dist/ /srv/jellyfin-web

chown -R www-data: /srv/jellyfin-web

Next, add the following to your Jellyfin server block, assuming you're running NGINX on bare metal.

  location /web/ {
    alias /srv/jellyfin-web/;
    try_files $uri $uri/ $uri.html @proxy;
    http2_push_preload on;
  }

  location @proxy {
    proxy_pass http://$jellyfin:8096;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Forwarded-Protocol $scheme;
    proxy_set_header X-Forwarded-Host $http_host;
    # Disable buffering when the nginx proxy gets very resource heavy upon streaming
    proxy_buffering off;
  }
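
Note that $jellyfin is an NGINX variable, so it must be defined earlier in the server block. A minimal sketch, assuming Jellyfin listens on localhost:

  # Assumed backend address; point this at your Jellyfin host
  set $jellyfin 127.0.0.1;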

Ensure that you set the environment variable or flag to disable Jellyfin serving its own web UI:

# Docker Environment Var:
JELLYFIN_NOWEBCONTENT=true
# Flag for systemd
--nowebclient

Do a daemon-reload and restart Jellyfin to ensure the changes are applied.
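
On a systemd install, that could look like this (the unit name jellyfin is an assumption; match it to your service):

sudo systemctl daemon-reload
sudo systemctl restart jellyfin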

Implementing PageSpeed / Redis Caching

There is also a pretty handy nginx compiler that can add some performance tweaks. I would NOT recommend touching the quiche option though, as it has broken POST requests in the past in my experience. The HMAC/TLS patches are useful, as is PageSpeed. Back up nginx first!

nginx-autoinstall
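
Before recompiling, a quick backup might look like the following sketch (paths assume a standard Debian-style layout):

# Keep a dated copy of the config tree and the current binary
sudo cp -a /etc/nginx /etc/nginx.bak-$(date +%F)
sudo cp -a /usr/sbin/nginx /usr/sbin/nginx.bak-$(date +%F)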

To set up the PageSpeed module, I'd suggest the following parameters, alongside a Redis container bound to localhost.

# Basic Redis Service thru docker-compose
services:
  redis:
    ports:
      - 127.0.0.1:6379:6379
    image: redis:latest
    restart: always

# In http block
pagespeed on;
pagespeed FileCachePath              "/var/cache/pagespeed/";
pagespeed FileCacheSizeKb            204800;
pagespeed FileCacheCleanIntervalMs   3600000;
pagespeed FileCacheInodeLimit        500000;
pagespeed LRUCacheKbPerProcess       8192;
pagespeed LRUCacheByteLimit          32768;
pagespeed DefaultSharedMemoryCacheKB 400000;
pagespeed ShmMetadataCacheCheckpointIntervalSec 300;
pagespeed HttpCacheCompressionLevel  9;
pagespeed RedisServer "127.0.0.1:6379";
pagespeed EnableCachePurge on;
pagespeed InPlaceResourceOptimization on;

# In server block (top)
pagespeed on;
pagespeed FileCachePath /var/cache/nginx/jellyfin_pagespeed;
pagespeed RewriteLevel OptimizeForBandwidth;
pagespeed InPlaceResourceOptimization on;

# In your server block (bottom)
location ~ "\.pagespeed\.([a-z]\.)?[a-z]{2}\.[^.]{10}\.[^.]+" {
  add_header "" "";
}
location ~ "^/pagespeed_static/" { }
location ~ "^/ngx_pagespeed_beacon$" { }

Other Considerations

Additionally, if you are running on a high-bandwidth connection but still seem to hit buffering issues, try reading this Cloudflare article on enabling BBR (an aggressive congestion-control algorithm) and check out the Fasterdata articles on tuning the kernel networking stack.
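
As a rough sketch, enabling BBR on a reasonably recent kernel (4.9+ with the tcp_bbr module available is assumed) comes down to two sysctl settings:

# /etc/sysctl.d/99-bbr.conf
net.core.default_qdisc = fq
net.ipv4.tcp_congestion_control = bbr

Apply them with sudo sysctl --system, then verify with sysctl net.ipv4.tcp_congestion_control.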

If you wish to dig deeper, the Arch Wiki has some valuable input on which direction these parameters might need to move to get a better result.
