This is a quick guide to mounting a qcow2 disk image on your host server. This is useful for resetting passwords, editing files, or recovering something without the virtual machine running.
Step 1 - Enable NBD on the Host
modprobe nbd max_part=8
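With the nbd module loaded, the usual next steps are to attach the image with qemu-nbd and mount a partition. This is a sketch, not part of the original guide: the image path is hypothetical, and the device and partition names (/dev/nbd0, /dev/nbd0p1) depend on your image layout.

```shell
# Attach the qcow2 image to the nbd device (requires root and qemu-utils).
# The image path here is an example; substitute your own.
qemu-nbd --connect=/dev/nbd0 /var/lib/libvirt/images/guest.qcow2

# Mount the first partition of the guest disk.
mount /dev/nbd0p1 /mnt

# ... reset passwords, edit files, recover data ...

# Clean up when done, or the image may be left locked.
umount /mnt
qemu-nbd --disconnect /dev/nbd0
```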
This is a compiled list of falsehoods programmers tend to believe about working with time.
Don't reinvent a date/time library yourself. If you think you understand everything about time, you're probably doing it wrong.
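For instance, a common falsehood is that every day is 24 hours long. A quick sketch with GNU date (assuming the IANA tz database is installed) shows a 23-hour day at a daylight-saving transition:

```shell
# America/New_York enters daylight saving on 2024-03-10, so that civil day
# has only 23 hours. Requires GNU date and the IANA tz database.
start=$(TZ=America/New_York date -d '2024-03-10 00:00' +%s)
end=$(TZ=America/New_York date -d '2024-03-11 00:00' +%s)
echo $(( (end - start) / 3600 ))   # prints 23, not 24
```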
" let g:python_host_prog = '/usr/bin/python2' | |
" let g:python3_host_prog = '/usr/bin/python3' | |
" ************************************* | |
" PLUGIN SECTION for Vim-Plug | |
" ************************************* | |
call plug#begin('~/.config/nvim/plugged') | |
" Make sure you use single quotes |
Please comment below if you have an update, e.g., with another networking-related dataset.
Note: I have moved this list to a proper repository. I'll leave this gist up, but it won't be updated. To submit an idea, open a PR on the repo.
Note that I have not tried all of these personally, and cannot and do not vouch for all of the tools listed here. In most cases, the descriptions here are copied directly from their code repos. Some may have been abandoned. Investigate before installing/using.
The ones I use regularly include: bat, dust, fd, fend, hyperfine, miniserve, ripgrep, just, cargo-audit and cargo-wipe.
#!/usr/bin/env bash
set -e
declare -i last_called=0
declare -i throttle_by=4
@throttle() {
  local -i now=$(date +%s)
  if (($now - $last_called > $throttle_by))
  then
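The fragment above is cut off at the `then`. A self-contained sketch of the same idea, with the body filled in under the assumption that a call inside the window is simply skipped (and the function renamed from `@throttle` to a plain name for portability):

```shell
# Sketch of a throttle helper: run the given command only if at least
# $throttle_by seconds have passed since the last accepted call.
declare -i last_called=0
declare -i throttle_by=4

throttle() {
  local -i now
  now=$(date +%s)
  if (( now - last_called > throttle_by )); then
    last_called=$now
    "$@"
  else
    return 1          # still inside the window: skip the command
  fi
}

throttle echo "first tick"                 # prints "first tick"
throttle echo "second tick" || echo skip   # inside the window, prints "skip"
```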
#!/bin/bash
# whisper-stream.sh
#
# Take a url supported by yt-dlp, dump 30-second segments to the current
# directory named by unix timestamp, and transcribe each segment using Whisper.
#
# example: TZ=Australia/Canberra ./whisper-stream.sh "https://..."
#
# The time displayed is the time when ffmpeg first opens the segment for
This worked on 14/May/23. The instructions will probably require updating in the future.
LLaMA is a text-prediction model, similar to GPT-2 and to the version of GPT-3 that has not been fine-tuned yet. It is also possible to run fine-tuned versions (like Alpaca or Vicuna, I think) with this; those versions are more focused on answering questions.
Note: I have been told that this does not support multiple GPUs. It can only use a single GPU.
It is now possible to run LLaMA 13B with a 6GB graphics card (e.g. an RTX 2060), thanks to the amazing work on llama.cpp. The latest change is CUDA/cuBLAS support, which lets you pick an arbitrary number of the transformer layers to run on the GPU. This is perfect for low VRAM.
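As a sketch of what layer offloading looks like (flag names follow llama.cpp's main example at the time of writing; the model path and layer count are hypothetical):

```shell
# --n-gpu-layers (also -ngl) picks how many transformer layers run on the
# GPU; lower the number until the model fits in your card's VRAM.
./main -m ./models/13B/ggml-model-q4_0.bin --n-gpu-layers 32 -p "Hello"
```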
This is a living document. Everything in this document is written in good faith of being accurate, but as I just said, we don't yet know everything about what's going on.
On March 29th, 2024, a backdoor was discovered in xz-utils, a suite of software that