
@OrionReed
OrionReed / dom3d.js
Last active June 30, 2024 04:11
3D DOM viewer, copy-paste this into your console to visualise the DOM topographically.
// 3D DOM viewer, copy-paste this into your console to visualise the DOM as a stack of solid blocks.
// You can also minify and save it as a bookmarklet (https://www.freecodecamp.org/news/what-are-bookmarklets/)
(() => {
const SHOW_SIDES = false; // color sides of DOM nodes?
const COLOR_SURFACE = true; // color tops of DOM nodes?
const COLOR_RANDOM = false; // randomise color?
const COLOR_HUE = 190; // hue in HSL (https://hslpicker.com)
const MAX_ROTATION = 180; // set to 360 to rotate all the way round
const THICKNESS = 20; // thickness of layers
const DISTANCE = 10000; // ¯\_(ツ)_/¯
@Artefact2
Artefact2 / README.md
Last active June 25, 2024 19:00
GGUF quantizations overview

Which GGUF is right for me? (Opinionated)

Good question! I am collecting human data on how quantization affects outputs. See here for more information: ggerganov/llama.cpp#5962

In the meantime, use the largest that fully fits in your GPU. If you can comfortably fit Q4_K_S, try using a model with more parameters.
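If you want a quick way to estimate "fully fits" before downloading, here is a rough sketch (my own, not from this gist): the GGUF file size plus some headroom for the KV cache and compute buffers should stay under your free VRAM. The file name, VRAM figure, and headroom value below are all hypothetical:

import os

def fits_in_vram(gguf_path: str, vram_gib: float, headroom_gib: float = 1.5) -> bool:
    # A model roughly fits if the file size plus KV-cache/compute headroom
    # is below the card's free VRAM; headroom_gib is a guess, tune to taste.
    size_gib = os.path.getsize(gguf_path) / 1024**3
    return size_gib + headroom_gib <= vram_gib

# e.g. a ~4 GiB Q4_K_S file on an 8 GiB card (hypothetical numbers):
# fits_in_vram("model-Q4_K_S.gguf", vram_gib=8.0)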

llama.cpp feature matrix

See the wiki upstream: https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix

The target audience is people who are familiar with Urbit's architecture, though not necessarily much of its code.

Plunder and Urbit

As some of you already know, I recently left my job as a core dev for the Urbit Foundation to work on a similar system called Plunder. Plunder was created in 2020 by two former Tlon employees after their proposal for a new version of Nock was rejected. They have since reworked that design significantly and built a reference implementation of their own system. You can follow its continued development on its mailing list.

I've known about Plunder for quite some time now, but their recently released demo -- in which the system is used to serve a 70 GB dataset, complete with metadata and searchable -- made me feel the need to explore it again and in greater detail. Doing this with my personal server doesn't feel like a big ask, but there is currentl

@NaxAlpha
NaxAlpha / long_gpt.py
Last active October 15, 2023 11:21
Training script for LongGPT; Fine-tunes GPT-2 (335M) on The Pile Dataset with a context size of 8k tokens. (requires > 16GB RAM)
import time
from contextlib import suppress
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
import torch.backends.cuda as cuda
from torch.utils.data import DataLoader, IterableDataset
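The excerpt above stops at the imports. For context, one common way to push GPT-2 to an 8k context (a sketch of the general technique, not necessarily what long_gpt.py itself does) is to interpolate the learned 1024-entry positional-embedding table up to 8192 positions before fine-tuning:

import torch
from transformers import GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained("gpt2-medium")  # the 335M model
old_wpe = model.transformer.wpe.weight.data             # shape (1024, n_embd)
new_len = 8192
# Linearly interpolate the position table from 1024 to 8192 entries.
new_wpe = torch.nn.functional.interpolate(
    old_wpe.T.unsqueeze(0), size=new_len, mode="linear"
).squeeze(0).T
model.transformer.wpe = torch.nn.Embedding(new_len, old_wpe.shape[1])
model.transformer.wpe.weight.data.copy_(new_wpe)
model.config.n_positions = new_len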
@cedrickchee
cedrickchee / meta-llama-guide.md
Created March 12, 2023 11:37
Meta's LLaMA 4-bit chatbot guide for language model hackers and engineers

info 9-3-23 Added 4bit LLaMA install instructions for cards as small as 6GB VRAM! (See "BONUS 4" at the bottom of the guide)

warning 9-3-23 Added Torrent for HFv2 Model Weights, required for ooga's webUI, Kobold, Tavern and 4bit (+4bit model)! Update ASAP!

danger 11-3-23 There's a new torrent version of the 4bit weights called "LLaMA-HFv2-4bit". The old "LLaMA-4bit" torrent may be fine. But if you have any issues with it, it's recommended to update to the new 4bit torrent or use the decapoda-research versions off of HuggingFace or produce your own 4bit weights. Newer Torrent Link or [Newer Magnet Link](magnet:?xt=urn:btih:36945b5958b907b3ab69e963ba0de1abdf48c16c&dn=LLaMA-HFv2-4bit&tr=http%3a%2f%2fbt1.archive.org%3a6969%2fannounce&tr=http%3a%2f%2fbt2.archive.org%3a696

@firexcy
firexcy / readme.md
Last active May 20, 2024 18:19
DIY a Rewind.ai

This Gist provides a solution to periodically capture screenshots of your Mac and build from them a searchable PDF archive, so that you can always get an answer to the "what, when, and where" questions about your usage.
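The core capture loop is simple. Here is a minimal sketch of the idea (mine, not the gist's rewind script), using macOS's built-in screencapture tool; the interval and file naming are arbitrary:

import subprocess
import time
from datetime import datetime

while True:
    # -x takes the screenshot silently (no shutter sound).
    name = datetime.now().strftime("%Y%m%d-%H%M%S") + ".png"
    subprocess.run(["screencapture", "-x", name], check=True)
    time.sleep(60)  # capture once a minute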

To use these scripts:

  1. Download the shell script rewind, then:
    1. put it under ~/bin (or another fixed path you prefer);
    2. execute
@WolfwithSword
WolfwithSword / external_spool.yaml
Last active April 26, 2024 05:57
Bambu X1C + NodeRed HomeAssistant YAML Snippet
type: picture-elements
view_layout:
  column: 1
elements:
  - type: custom:config-template-card
    entities:
      - sensor.{HA_PRINTER_DEVICE_NAME}_vt_tray
    element:
      type: state-icon
      entity: sensor.{HA_PRINTER_DEVICE_NAME}_vt_tray
@binji
binji / notes.md
Created November 25, 2022 05:14
Compiling LLVM/Clang for Wasm notes
  • How to Cross Compile LLVM: https://llvm.org/docs/HowToCrossCompileLLVM.html
  • Building LLVM with CMake: https://llvm.org/docs/CMake.html
  • Hints from wasi-sdk Makefile: https://github.com/CraneStation/wasi-sdk/blob/master/Makefile
  • Try compiling natively (needed for llvm-tblgen and clang-tblgen)
    • cmake -G Ninja -DCMAKE_BUILD_TYPE=Release -DLLVM_TARGETS_TO_BUILD="X86;WebAssembly" -DLLVM_ENABLE_PROJECTS="lld;clang" ../llvm
  • Try building LLVM with WASI:
    • cmake -G Ninja -DCMAKE_AR="/usr/local/google/home/binji/dev/llvm-project/build/bin/llvm-ar" -DCMAKE_RANLIB="/usr/local/google/home/binji/dev/llvm-project/build/bin/llvm-ranlib" -DCMAKE_C_COMPILER="/usr/local/google/home/binji/dev/wasi-sdk-5.0/opt/wasi-sdk/bin/clang" -DCMAKE_CXX_COMPILER="/usr/local/google/home/binji/dev/wasi-sdk-5.0/opt/wasi-sdk/bin/clang++" -DCMAKE_CROSSCOMPILING=True -DCMAKE_INSTALL_PREFIX=/usr/local/google/home/binji/dev/wasi-clang -DLLVM_TABLEGEN=/usr/local/google/home/binji/dev/llvm-project/build/bin/llvm-tblgen -DCLANG_TABLEGEN=/
@primus852
primus852 / cuda_11.7_installation_on_Ubuntu_22.04
Last active June 19, 2024 05:51 — forked from Mahedi-61/cuda_11.8_installation_on_Ubuntu_22.04
Instructions for CUDA v11.7 and cuDNN 8.5 installation on Ubuntu 22.04 for PyTorch 1.12.1
#!/bin/bash
### steps ####
# verify the system has a cuda-capable gpu
# download and install the nvidia cuda toolkit and cudnn
# setup environment variables
# verify the installation
###
### to verify your gpu is cuda enabled, check
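# (sketch: the gist's exact command is cut off above; a common way to check
# for an NVIDIA GPU on Ubuntu is the line below)
lspci | grep -i nvidia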
@moyix
moyix / codegen_gptj_convert.py
Created July 22, 2022 19:33
Convert a SalesForce CodeGen model's weights to plain GPT-J
#!/usr/bin/env python
import argparse
import torch
from transformers import GPTJForCausalLM, GPTJConfig
# Note: these need the git version of Transformers as of 7/22/2022
from transformers import CodeGenTokenizer, CodeGenForCausalLM
from transformers import CODEGEN_PRETRAINED_MODEL_ARCHIVE_LIST
parser = argparse.ArgumentParser('Convert SalesForce CodeGen model to GPT-J')
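The snippet cuts off at the argument parser. Roughly, the script's job (this is my sketch of the flow, not moyix's actual code) is to build a GPTJConfig whose sizes match the CodeGen checkpoint and then port the weights across; the fiddly part, re-packing CodeGen's fused qkv_proj into GPT-J's separate q/k/v projections, is elided here. The model id is a hypothetical choice:

cg = CodeGenForCausalLM.from_pretrained("Salesforce/codegen-350M-mono")
config = GPTJConfig(
    vocab_size=cg.config.vocab_size,
    n_positions=cg.config.n_positions,
    n_embd=cg.config.n_embd,
    n_layer=cg.config.n_layer,
    n_head=cg.config.n_head,
    rotary_dim=cg.config.rotary_dim,
)
gptj = GPTJForCausalLM(config)
# ...copy/transform cg.state_dict() into gptj here (qkv re-packing omitted)...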