John D. Pope johndpope
@UnleashTheCode
UnleashTheCode / pimp.sh
Last active January 25, 2024 01:52
A script to improve quality of life on your Kali install
#!/bin/bash
# Keep a one-time backup of the original .zshrc
[ -f ~/.zshrc_copy ] || cp ~/.zshrc ~/.zshrc_copy
# Abort on errors
set -e
# Install a package only if it's not already installed
function install_if_needed() {
    local pkg="$1"
    if ! dpkg -l "$pkg" &>/dev/null ; then
        sudo apt-get install -y "$pkg"
    fi
}
# original post: https://rentry.org/sd-loopback-wave
# original author: https://rentry.org/AnimAnon
import math
import os
import platform
import random
import string
import subprocess as sp

import numpy as np
from tqdm import trange
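Only the gist's imports survive in this preview, but the `subprocess` import hints at the usual final step of a loopback-wave pipeline: stitching rendered frames into a video with ffmpeg. A minimal sketch of that step, assuming glob-named frame files and an mp4 target (function names, paths, and fps are illustrative, not from the gist):

```python
import subprocess as sp

def ffmpeg_cmd(frame_glob, out_path, fps=24):
    # Build the ffmpeg invocation that stitches numbered frames into an mp4
    return ["ffmpeg", "-y", "-framerate", str(fps), "-pattern_type", "glob",
            "-i", frame_glob, "-pix_fmt", "yuv420p", out_path]

def frames_to_video(frame_glob, out_path, fps=24):
    # Requires ffmpeg on PATH; raises CalledProcessError on failure
    sp.run(ffmpeg_cmd(frame_glob, out_path, fps), check=True)
```

Keeping command construction separate from execution makes the invocation easy to inspect or log before running it.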
@shawwn
shawwn / llama-dl-dmca.md
Last active April 5, 2023 02:35
I prompted GPT-4 to draft a DMCA counterclaim to Meta's DMCA against llama-dl: https://github.com/github/dmca/blob/master/2023/03/2023-03-21-meta.md

Prompt

Meta has issued a DMCA copyright claim against llama-dl, a GitHub repository, for distributing LLaMA, a 65-billion parameter language model. Here's the full text of the DMCA claim. Based on this, draft a DMCA counterclaim on the basis that neural networks trained on public data are not copyrightable.

--

VIA EMAIL: Notice of Claimed Infringement via Email
URL: http://www.github.com
DATE: 03/20/2023

@cedrickchee
cedrickchee / meta-llama-guide.md
Created March 12, 2023 11:37
Meta's LLaMA 4-bit chatbot guide for language model hackers and engineers

info 9-3-23 Added 4bit LLaMA install instructions for cards as small as 6GB VRAM! (See "BONUS 4" at the bottom of the guide)

warning 9-3-23 Added Torrent for HFv2 Model Weights, required for ooga's webUI, Kobold, Tavern and 4bit (+4bit model)! Update ASAP!

danger 11-3-23 There's a new torrent version of the 4bit weights called "LLaMA-HFv2-4bit". The old "LLaMA-4bit" torrent may be fine. But if you have any issues with it, it's recommended to update to the new 4bit torrent or use the decapoda-research versions off of HuggingFace or produce your own 4bit weights. Newer Torrent Link or [Newer Magnet Link](magnet:?xt=urn:btih:36945b5958b907b3ab69e963ba0de1abdf48c16c&dn=LLaMA-HFv2-4bit&tr=http%3a%2f%2fbt1.archive.org%3a6969%2fannounce&tr=http%3a%2f%2fbt2.archive.org%3a696

@cloudboratory
cloudboratory / python-cryptography-aws-lambda-layer.sh
Created March 3, 2023 14:32
Allows creation of an AWS Lambda layer using lambci/lambda to install Python's cryptography package
mkdir -p python/lib/python3.8/site-packages
echo "cryptography" > requirements.txt
sudo docker run -v "$PWD":/var/task "lambci/lambda:build-python3.8" /bin/sh -c "pip install -r requirements.txt -t python/lib/python3.8/site-packages/; exit"
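The container run leaves the dependencies under `python/`, but a Lambda layer is published as a zip whose paths keep that top-level `python/` prefix. A stdlib sketch of the packaging step (the function name and archive name are assumptions, not part of the gist):

```python
import os
import zipfile

def build_layer_zip(src_dir="python", out_path="cryptography-layer.zip"):
    # Lambda expects layer packages under a top-level "python/" directory,
    # so each archive entry must keep that prefix in its path.
    with zipfile.ZipFile(out_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, _dirs, files in os.walk(src_dir):
            for name in files:
                full = os.path.join(root, name)
                zf.write(full, arcname=os.path.relpath(full))
    return out_path
```

The resulting archive can then be uploaded as a layer version (e.g. via the AWS console or CLI).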
@jschoormans
jschoormans / equirectangular.py
Created December 8, 2022 23:08
generate 3D panorama views with stable diffusion
# %%
import replicate
model = replicate.models.get("prompthero/openjourney")
version = model.versions.get("9936c2001faa2194a261c01381f90e65261879985476014a0a37a334593a05eb")
PROMPT = "mdjrny-v4 style 360 degree equirectangular panorama photograph, Alps, giant mountains, meadows, rivers, rolling hills, trending on artstation, cinematic composition, beautiful lighting, hyper detailed, 8 k, photo, photography"
output = version.predict(prompt=PROMPT, width=1024, height=512)
# %%
# download the image from the url at output[0]
import requests
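The preview cuts off right after the download comment. The step it describes can be completed along these lines, here using only the standard library instead of `requests` (the function name and output filename are illustrative):

```python
from urllib.request import urlopen

def download_image(url, path):
    # Fetch the bytes at `url` and write them to `path`
    with urlopen(url) as resp, open(path, "wb") as f:
        f.write(resp.read())
```

In the gist's context this would be called as `download_image(output[0], "panorama.png")`.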
const fs = require("fs");
const useragentFromSeed = require("useragent-from-seed");
const axios = require("axios").default;
function getString(start, end, all) {
  // Extract the first substring between `start` and `end` (non-greedy);
  // return null instead of crashing when there is no match
  const regex = new RegExp(`${start}(.*?)${end}`);
  const result = regex.exec(all);
  return result ? result[1] : null;
}
@FelixZY
FelixZY / supabase_api_auth.sql
Last active April 18, 2024 08:04
How to configure Supabase (https://supabase.com/) to generate and accept API tokens.
-- Token Based API Access for Supabase
--
-- How to configure Supabase (https://supabase.com/) to generate and accept API tokens.
--
-- (c) 2022 Felix Zedén Yverås
-- Provided under the MIT license (https://spdx.org/licenses/MIT.html)
--
-- Disclaimer: This file is formatted using pg_format. I'm not happy with the result but
-- prefer to follow a tool over going by personal taste.
--
@shawwn
shawwn / JAX_compliation_cache.md
Last active January 2, 2024 15:46
JAX persistent compilation cache

JAX released a persistent compilation cache for TPU VMs! When enabled, the cache writes compiled JAX computations to disk so they don’t have to be re-compiled the next time you start your JAX program. This can save startup time if any of y’all have long compilation times.

First upgrade to the latest jax release:

pip install -U "jax[tpu]>=0.2.18" -f https://storage.googleapis.com/jax-releases/libtpu_releases.html

Then use the following to enable the cache in your jax code:

from jax.experimental.compilation_cache import compilation_cache as cc
cc.initialize_cache("/path/to/jax_cache")  # directory where compiled programs are stored
@abodacs
abodacs / jserv_hf_fast.py
Created July 5, 2021 09:38 — forked from kinoc/jserv_hf_fast.py
Run HuggingFace converted GPT-J-6B checkpoint using FastAPI and Ngrok on local GPU (3090 or Titan)
# So you want to run GPT-J-6B using HuggingFace+FastAPI on a local rig (3090 or TITAN) ... tricky.
# special help from the Kolob Colab server https://colab.research.google.com/drive/1VFh5DOkCJjWIrQ6eB82lxGKKPgXmsO5D?usp=sharing#scrollTo=iCHgJvfL4alW
# Conversion to HF format (12.6GB tar image) found at https://drive.google.com/u/0/uc?id=1NXP75l1Xa5s9K18yf3qLoZcR6p4Wced1&export=download
# Uses GDOWN to get the image
# You will need 26 GB of space, 12+GB for the tar and 12+GB expanded (you can nuke the tar after expansion)
# Near Simplest Language model API, with room to expand!
# runs GPT-J-6B on a 3090 or TITAN and serves it using FastAPI
# change "seq" (which is the context size) to adjust footprint
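The "26 GB of space" and footprint notes above can be sanity-checked with a quick back-of-the-envelope calculation (the parameter count is approximate, and this covers weights only, not the activation memory that grows with "seq"):

```python
def fp16_weight_gb(n_params):
    # Two bytes per parameter at fp16 precision
    return n_params * 2 / 1024**3

# GPT-J-6B has roughly 6.05e9 parameters -> about 11.3 GB of weights alone,
# consistent with the ~12 GB tar image mentioned above
print(round(fp16_weight_gb(6.05e9), 1))
```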