@anujkhare
anujkhare / clone_all.sh
Created December 16, 2018 06:29
Clone all the repositories listed in a text file given as input: useful when setting up a dev env with a lot of repos!
echo "If your SSH key has a passphrase, you might need to enter it each time. Consider using ssh-agent or removing the passphrase";
while read p; do
echo "$p";
git clone "$p";
done < $1
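For example (a hypothetical invocation, with repos.txt listing one clone URL per line):

bash clone_all.sh repos.txt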
@sundowndev
sundowndev / GoogleDorking.md
Last active June 10, 2024 12:50
Google dork cheatsheet

Search filters

Filter     Description                                                  Example
allintext  Finds pages whose text contains all of the given keywords.   allintext:"keyword"
intext     Finds pages whose text contains the keyword.                 intext:"keyword"
inurl      Finds pages whose URL contains the keyword.                  inurl:"keyword"
allinurl   Finds pages whose URL contains all of the given keywords.    allinurl:"keyword"
intitle    Finds pages whose title contains the keyword(s).             intitle:"keyword"
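These filters can be combined in a single query. For example (an illustrative query, not taken from the cheatsheet):

intitle:"login" inurl:admin

This matches pages whose title contains "login" and whose URL contains "admin".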
@Jabarabo
Jabarabo / githubpull.md
Last active May 5, 2024 10:16
Gist of a stolen gist
@kuang-da
kuang-da / popos-nvidia-docker.md
Last active February 15, 2024 21:04
[Install nvidia-docker2 in Pop!_OS] #popos

Introduction

This gist is a note about installing nvidia-docker in Pop!_OS 20.10. nvidia-docker lets Docker containers use the GPU for computation.

The basic installation is covered in NVIDIA's official documentation, but a few tweaks are needed to make it work on Pop!_OS 20.10.

Setting up Docker

No surprises here: following the official documentation should work.
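For reference, one common route is Docker's convenience script (a sketch, not the only supported method; review the script before running it):

curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
sudo systemctl enable --now docker   # start Docker now and enable it at boot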

Setting up NVIDIA Container Toolkit
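A sketch of the standard steps, based on NVIDIA's documentation at the time (the exact commands here are an assumption). The Pop!_OS-specific tweak is pinning distribution by hand, since $ID$VERSION_ID resolves to pop20.10, which NVIDIA's repository list does not recognize:

distribution=ubuntu20.04   # Pop!_OS is not in NVIDIA's list; borrow the Ubuntu entry
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt-get update && sudo apt-get install -y nvidia-docker2
sudo systemctl restart docker
sudo docker run --rm --gpus all nvidia/cuda:11.0-base nvidia-smi   # smoke test: should print the GPU table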

@whjms
whjms / kobold-8bit.md
Last active April 7, 2023 16:35
Instructions for running KoboldAI in 8-bit mode

Running KoboldAI in 8-bit mode

tl;dr: use Linux, install bitsandbytes (either globally or in KAI's conda env), and add load_in_8bit=True, device_map="auto" to model pipeline creation calls.
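A minimal sketch of those steps, assuming KoboldAI's conda environment is named koboldai (the environment name is an assumption):

conda activate koboldai    # or skip this line to install bitsandbytes globally
pip install bitsandbytes   # needs a CUDA GPU and matching CUDA libraries
# then pass load_in_8bit=True, device_map="auto" wherever the model/pipeline is created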

Many people are unable to load models because of their GPU's limited VRAM. These models contain billions of parameters (model weights and biases), each stored as a 32-bit (or 16-bit) float. Thanks to the hard work of some researchers [1], it's possible to run these models using 8-bit numbers, which halves the VRAM required compared to half precision: if a model needs 16GB of VRAM in half precision, 8-bit inference needs only 8GB.

This guide was written for KoboldAI 1.19.1 and tested on Ubuntu 20.04. These instructions are based on work by Gmin in KoboldAI's Discord server, and on Hugging Face's efficient LM inference guide.

Requirements