Dump existing data:
python3 manage.py dumpdata > datadump.json
Change settings.py to Postgres backend.
Make sure you can connect to PostgreSQL. Then:
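A minimal sketch of the settings.py change, assuming the standard Django PostgreSQL backend; database name, user, and password below are placeholders, not values from these notes:

```python
# Hedged example settings.py entry for the PostgreSQL backend.
# NAME/USER/PASSWORD are placeholders; fill in your own connection details.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "mydb",        # placeholder
        "USER": "myuser",      # placeholder
        "PASSWORD": "secret",  # placeholder
        "HOST": "localhost",
        "PORT": "5432",
    }
}
```

The dump is then typically reloaded with `python3 manage.py migrate` followed by `python3 manage.py loaddata datadump.json`, though the original notes break off before that step.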
/**
 * Performs a greedy algorithm on the chunk vmaf data to produce a list of cq values that maximizes the average vmaf of
 * the entire encode without going over the [SIZE_LIMIT].
 *
 * This is a variation of the 0-1 knapsack problem. Due to the extra constraint of needing one of each chunk, the
 * pseudo-polynomial algorithm cannot be applied. Instead, we make the following assumptions to reach a good local
 * maximum.
 *
 * For any chunk c and cq value q,
 *  - vmaf(c, q - 1) > vmaf(c, q) and
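The greedy described above can be sketched in Python. This is an illustrative reconstruction, not the original code: the function name `pick_cqs`, the per-chunk `cq -> (vmaf, size)` tables, and the toy data are all assumptions. It starts every chunk at its highest cq (smallest size) and repeatedly lowers the cq of whichever chunk yields the most vmaf per added byte, stopping when no step fits in the budget:

```python
def pick_cqs(chunks, size_limit):
    """chunks: list of dicts mapping cq -> (vmaf, size); returns one cq per chunk."""
    # Start every chunk at its highest cq (smallest size, lowest vmaf),
    # so a feasible selection exists if the budget allows one at all.
    cqs = [max(table) for table in chunks]
    total = sum(table[q][1] for table, q in zip(chunks, cqs))
    while True:
        best = None  # (vmaf gained per byte spent, chunk index)
        for i, table in enumerate(chunks):
            q = cqs[i]
            if q - 1 not in table:
                continue  # no lower cq measured for this chunk
            dv = table[q - 1][0] - table[q][0]  # vmaf gained by stepping down
            ds = table[q - 1][1] - table[q][1]  # bytes added by stepping down
            if ds <= 0 or total + ds > size_limit:
                continue  # step is free/invalid or would bust the budget
            if best is None or dv / ds > best[0]:
                best = (dv / ds, i)
        if best is None:
            return cqs  # no affordable improvement left: local maximum reached
        _, i = best
        total += chunks[i][cqs[i] - 1][1] - chunks[i][cqs[i]][1]
        cqs[i] -= 1

# Toy data for illustration: chunk -> {cq: (vmaf, size)}; numbers are made up.
chunks = [
    {30: (80.0, 10), 29: (85.0, 12), 28: (88.0, 20)},
    {30: (70.0, 10), 29: (90.0, 15)},
]
print(pick_cqs(chunks, size_limit=30))  # → [29, 29]
```

Because each step picks the best marginal vmaf-per-byte, the result matches the comment's framing: a good local maximum under the monotonicity assumptions, not a guaranteed global optimum.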
Binary Space Partitioning Window Manager = bspwm
YouTube video: https://youtu.be/ZbXQUOwcH08

bspwm install
pacman packages:
bspwm
sxhkd
[package]
name = "unimportant_if_subsumed_by_setuptools"
version = "0.1.0"
authors = ["Your Name Here <your@email.com>"]

[lib]
name = "unimportant_if_subsumed_by_setuptools"
crate-type = ["cdylib"]

[dependencies.cpython]
import vapoursynth as vs
import random
from os import mkdir
from os.path import exists, basename, dirname, splitext
import numpy as np
import cv2 as cv
import mvsfunc as mvs

# get_core() is deprecated in newer VapourSynth releases; vs.core is the modern equivalent.
core = vs.get_core()

encodeddir = r''