Tutorial: https://www.youtube.com/watch?v=hIqMrPTeGTc
Paste the code below into your browser console (F12 > Console):
// Assumes markAllVideosAsNotBeingInteresting is already defined on the
// page (set up by following the tutorial linked above).
(() => {
  markAllVideosAsNotBeingInteresting({
    iterations: 1,
  });
})();
from pathlib import Path
from tempfile import NamedTemporaryFile

import numpy as np
import pandas as pd
from google.cloud.bigquery import Client, SchemaField

def main():
    ...  # body truncated in the original snippet
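The body of main() is missing from the snippet. Judging from the imports, the script presumably builds a pandas DataFrame and loads it into BigQuery; here is a minimal sketch of that idea, where the table id my_dataset.my_table, the toy data, and the column schema are placeholder assumptions of mine, not from the original:

import numpy as np
import pandas as pd
from google.cloud.bigquery import Client, LoadJobConfig, SchemaField

def main():
    client = Client()  # uses application-default credentials
    # Toy data; the real script's contents are unknown.
    df = pd.DataFrame({"x": np.arange(3), "label": ["a", "b", "c"]})
    schema = [SchemaField("x", "INTEGER"), SchemaField("label", "STRING")]
    job = client.load_table_from_dataframe(
        df, "my_dataset.my_table", job_config=LoadJobConfig(schema=schema)
    )
    job.result()  # block until the load job finishes

if __name__ == "__main__":
    main()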
This is a rough outline of how to set up altserver-linux on the 🍓🍰. Wi-Fi refreshing is enabled through netmuxd, which acts as a proxy from AltServer to the iDevice (replacing/enhancing usbmuxd); a sketch of pointing AltServer at netmuxd's socket is included below.
This worked on 14/May/23. The instructions will probably require updating in the future.
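A minimal sketch of wiring the two together. libimobiledevice-based tools read the USBMUXD_SOCKET_ADDRESS environment variable; the 127.0.0.1:27015 address (netmuxd's usual default) and the ./AltServer path are my assumptions, not from the original notes:

# Sketch: launch the altserver-linux binary against netmuxd's socket.
# Assumes netmuxd is already running and listening on 127.0.0.1:27015.
import os
import subprocess

env = dict(os.environ, USBMUXD_SOCKET_ADDRESS="127.0.0.1:27015")
subprocess.run(["./AltServer"], env=env, check=True)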
LLaMA is a text-prediction model, similar to GPT-2 or to GPT-3 before fine-tuning. It should also be possible to run fine-tuned versions (like Alpaca or Vicuna) with it; those versions are more focused on answering questions.
Note: I have been told that this does not support multiple GPUs. It can only use a single GPU.
It is now possible to run LLaMA 13B with a 6 GB graphics card (e.g. an RTX 2060), thanks to the amazing work on llama.cpp. The latest change is CUDA/cuBLAS support, which lets you pick an arbitrary number of transformer layers to run on the GPU. This is perfect for low VRAM (commit 08737ef720f0510c7ec2aa84d7f70c691073c35d).
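As an illustration of the layer-offload idea, here is a sketch using the llama-cpp-python bindings rather than the llama.cpp CLI; it assumes the bindings were built with cuBLAS enabled, and the model path and layer count are placeholders to tune for your card:

# Offload a chosen number of transformer layers to the GPU.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/13B/ggml-model-q4_0.bin",  # placeholder path
    n_gpu_layers=32,  # lower this if you run out of VRAM
)
out = llm("Q: What is the capital of France? A:", max_tokens=32)
print(out["choices"][0]["text"])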