You've probably seen most of them already. Check the linked clips, and watch at least one episode before making the final decision to add or remove it from your list.
```
<%*
/*
You need to install yt-dlp and jq to use this template:
  $ brew install yt-dlp jq  # on macOS
You need to define a user function named "ytmeta" in the Templater plugin settings with the following command:
  /opt/homebrew/bin/yt-dlp -j "https://www.youtube.com/watch?v=${id}" | /opt/homebrew/bin/jq "${query}"
(replace /opt/homebrew/bin with the path to your yt-dlp and jq binaries)
*/
```
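Once the tools are installed, the pipeline can be tried directly in a terminal. A sketch, where `VIDEO_ID` is a placeholder and `.title` is one example `jq` query against yt-dlp's JSON metadata:

```shell
yt-dlp -j "https://www.youtube.com/watch?v=VIDEO_ID" | jq -r ".title"
```

Any `jq` filter works as the `${query}` argument, e.g. `.duration` or `.channel`.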
-
I hope, Cecily, I shall not offend you if I state quite frankly and openly that you seem to me to be in every way the visible personification of absolute perfection. - Oscar Wilde
-
I couldn't help it; I can resist everything but temptation. - Oscar Wilde
```python
from typing import Callable, Union

import numpy as np
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.utils.validation import check_X_y, check_array
from IPython.display import display

array_like = Union[list, np.ndarray]
matrix_like = Union[np.ndarray, pd.DataFrame]
```
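As a sketch of how the aliases and validators above might fit together (the function name `validate_inputs` is hypothetical, not from the original source):

```python
from typing import Union

import numpy as np
from sklearn.utils.validation import check_X_y

array_like = Union[list, np.ndarray]

def validate_inputs(X: array_like, y: array_like):
    """Coerce X and y into validated numpy arrays of matching length."""
    # check_X_y expects a 2-D feature matrix, so reshape 1-D input
    X = np.asarray(X).reshape(-1, 1)
    return check_X_y(X, y)
```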
```python
# Import needed packages
import numpy as np
import torch
import torch.nn as nn
from torch.optim import Adam
from torch.utils.data import DataLoader
from torchvision.datasets import CIFAR10
import torchvision.transforms as transforms
# torch.autograd.Variable is deprecated; tensors track gradients directly now
```
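These imports point at a small CIFAR-10 classifier. A minimal sketch of a model they could drive (the architecture is illustrative, not from the original source):

```python
import torch
import torch.nn as nn

class SimpleCNN(nn.Module):
    """A tiny CNN for 3x32x32 CIFAR-10 images."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # 3x32x32 -> 16x32x32
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 16x32x32 -> 16x16x16
        )
        self.classifier = nn.Linear(16 * 16 * 16, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))
```

Training would pair this with the `Adam` optimizer and a `DataLoader` over `CIFAR10`.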
```python
import numpy as np
import multiprocessing as multi

def chunks(n, page_list):
    """Splits the list into n chunks"""
    return np.array_split(page_list, n)

cpus = multi.cpu_count()
workers = []
page_list = ['www.website.com/page1.html', 'www.website.com/page2.html']
```
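A sketch of how `chunks` might fan pages out across worker processes; the `scrape` worker is a placeholder (the real script would download and parse each page):

```python
import multiprocessing as multi

import numpy as np

def chunks(n, page_list):
    """Split the list into n roughly equal chunks."""
    return np.array_split(page_list, n)

def scrape(pages):
    # placeholder worker: stands in for fetching and parsing each URL
    return [len(p) for p in pages]

if __name__ == '__main__':
    page_list = ['www.website.com/page%d.html' % i for i in range(1, 9)]
    with multi.Pool(processes=multi.cpu_count()) as pool:
        results = pool.map(scrape, chunks(4, page_list))
```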
```python
import requests
# library to generate a user agent
from user_agent import generate_user_agent

# generate a randomized desktop user agent
headers = {'User-Agent': generate_user_agent(device_type="desktop", os=('mac', 'linux'))}
# or use a fixed one:
# headers = {'User-Agent': 'Mozilla/5.0 (X11; Linux i686 on x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.63 Safari/537.36'}
page_response = requests.get(page_link, timeout=5, headers=headers)  # page_link is defined elsewhere
```
| """ A function that can read MNIST's idx file format into numpy arrays. | |
| The MNIST data files can be downloaded from here: | |
| http://yann.lecun.com/exdb/mnist/ | |
| This relies on the fact that the MNIST dataset consistently uses | |
| unsigned char types with their data segments. | |
| """ |
Essentially, just copy the existing video and audio streams as-is into a new container: no funny business!
The easiest way to "convert" MKV to MP4 is to copy the existing video and audio streams into a new container. This avoids any encoding, so no quality is lost; it is also a fairly quick process and requires very little CPU power. The main factor is disk read/write speed.
With ffmpeg this can be achieved with `-c copy`. Older examples may use `-vcodec copy -acodec copy`, which does the same thing.
These examples assume ffmpeg is in your PATH. If not, just substitute the full path to your ffmpeg binary.
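A minimal example, assuming the source file is named `input.mkv`:

```shell
ffmpeg -i input.mkv -c copy output.mp4
```

`-c copy` applies stream copy to all streams, so no re-encoding takes place.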
My database had 72k annotations at the time I ran these benchmarks; here's the result:
```
$ python scripts/batch_bench.py conf/development-app.ini dumb
Memory summary: start
      types |   # objects |   total size
=========== | =========== | ============
       dict |       13852 |     12.46 MB
  frozenset |         349 |     11.85 MB
VM: 327.29Mb
```