@thomwolf
thomwolf / fast_speech_text_speech.py
Last active May 9, 2024 11:05
speech to text to speech
""" To use: install LLM studio (or Ollama), clone OpenVoice, run this script in the OpenVoice directory
git clone https://github.com/myshell-ai/OpenVoice
cd OpenVoice
git clone https://huggingface.co/myshell-ai/OpenVoice
cp -r OpenVoice/* .
pip install openai-whisper pynput pyaudio
"""
from openai import OpenAI
import time
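
The preview cuts off at the imports. A minimal sketch of the client setup implied by the docstring, assuming LM Studio's default local endpoint (the URL and key are assumptions, not from the gist):

client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")  # assumed LM Studio default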
@veekaybee
veekaybee / normcore-llm.md
Last active June 29, 2024 03:29
Normcore LLM Reads

Anti-hype LLM reading list

Goals: Add links that are reasonable and good explanations of how stuff works. No hype and no vendor content if possible. Practical first-hand accounts of models in prod eagerly sought.

Foundational Concepts

Pre-Transformer Models

@Birch-san
Birch-san / llama-convert.md
Created June 1, 2023 18:24
Converting LLaMA model weights to huggingface format + safetensors

Loading LLaMA via Huggingface + Safetensors, with 4-bit quantization

Let's say we're trying to load a LLaMA model via AutoModelForCausalLM.from_pretrained with 4-bit quantization in order to run inference with it:

python generate.py

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, LlamaTokenizerFast, LlamaForCausalLM
import transformers
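
The preview ends at the imports. A minimal sketch of the 4-bit load itself, using the transformers bitsandbytes integration (the checkpoint path is a hypothetical placeholder):

# hedged sketch (not from the gist): load the converted checkpoint in 4-bit
model_dir = "./llama-7b-hf"  # hypothetical path to your converted weights
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForCausalLM.from_pretrained(
    model_dir,
    quantization_config=bnb_config,
    device_map="auto",
)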
@Birch-san
Birch-san / fine-tuning.md
Last active December 27, 2023 17:24
Fine-tuning LLaMA-7B on ~12GB VRAM with QLoRA, 4-bit quantization

nvidia-smi reported that this required 11181MiB, at least to train on the prompt lengths that occur early in the alpaca dataset (~337-token prompts).
You can get this down to about 10.9GB by modifying qlora.py to run torch.cuda.empty_cache() after PEFT has been applied to your loaded model, before training begins.
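
A minimal sketch of where that call would sit in qlora.py (names assumed from the PEFT API; not a verbatim patch):

# hedged sketch: in qlora.py, right after the model is wrapped with PEFT
from peft import get_peft_model
import torch

model = get_peft_model(model, peft_config)  # existing step (names assumed)
torch.cuda.empty_cache()                    # free cached blocks before training begins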

Setup

All instructions are written assuming your command-line shell is bash.

Clone repository:

# This module is meant for direct use only. For API usage, please check SDA-TRAINER.
# Based on NVIDIA's demo
import argparse
from threads.trt.models import CLIP, UNet, VAE
import os
import onnx
import torch
from diffusers import UNet2DConditionModel, AutoencoderKL
from transformers import CLIPTextModel
from threads.trt.utilities import Engine

from huggingface_hub import hf_hub_download
from flax.serialization import msgpack_restore
from safetensors.flax import save_file
import numpy as np
filename = hf_hub_download("gpt2", filename="flax_model.msgpack")
with open(filename, "rb") as f:
    data = f.read()
flax_weights = msgpack_restore(data)
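
The preview stops before anything is saved. A hedged sketch of the remaining step, assuming the usual flattened-key convention for nested Flax params:

# hedged sketch (not in the preview): flatten the nested param dict and save
from flax.traverse_util import flatten_dict

flat = {"/".join(k): np.asarray(v) for k, v in flatten_dict(flax_weights).items()}
save_file(flat, "flax_model.safetensors")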
@jschoormans
jschoormans / equirectangular.py
Created December 8, 2022 23:08
generate 3D panorama views with stable diffusion
# %%
import replicate
model = replicate.models.get("prompthero/openjourney")
version = model.versions.get("9936c2001faa2194a261c01381f90e65261879985476014a0a37a334593a05eb")
PROMPT = "mdjrny-v4 style 360 degree equirectangular panorama photograph, Alps, giant mountains, meadows, rivers, rolling hills, trending on artstation, cinematic composition, beautiful lighting, hyper detailed, 8 k, photo, photography"
output = version.predict(prompt=PROMPT, width=1024, height=512)
# %%
# download the image from the URL at output[0]
import requests
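
The preview ends after the import; a short completion consistent with the comment above (output filename is assumed):

r = requests.get(output[0], timeout=60)
with open("panorama.png", "wb") as f:  # hypothetical filename
    f.write(r.content)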
@zer0TF
zer0TF / convert_to_safe.py
Created November 28, 2022 04:53
Convert all CKPT files to SAFETENSOR files in a directory
# Got a bunch of .ckpt files to convert?
# Here's a handy script to take care of all that for you!
# Original .ckpt files are not touched!
# Make sure you have enough disk space! You are going to DOUBLE the size of your models folder!
#
# First, run:
# pip install torch torchsde==0.2.5 safetensors==0.2.5
#
# Place this file in the **SAME DIRECTORY** as all of your .ckpt files, open a command prompt for that folder, and run:
# python convert_to_safe.py
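
The script body isn't shown in the preview. A minimal sketch of what such a conversion loop typically looks like (logic mine, not the gist's):

import os
import torch
from safetensors.torch import save_file

for name in os.listdir("."):
    if not name.endswith(".ckpt"):
        continue
    print(f"Converting {name}...")
    ckpt = torch.load(name, map_location="cpu")
    state_dict = ckpt.get("state_dict", ckpt)  # SD checkpoints usually nest weights here
    tensors = {k: v.clone().contiguous() for k, v in state_dict.items()
               if isinstance(v, torch.Tensor)}  # clone avoids shared-storage errors
    save_file(tensors, name[: -len(".ckpt")] + ".safetensors")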
@Narsil
Narsil / pure_torch.py
Created November 10, 2022 15:06
Loading a safetensors file with pure torch only
import mmap
import torch
import json
import os
from huggingface_hub import hf_hub_download
def load_file(filename, device):
    with open(filename, mode="r", encoding="utf8") as file_obj:
        with mmap.mmap(file_obj.fileno(), length=0, access=mmap.ACCESS_READ) as m:
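            # editor's sketch of the continuation (the preview stops here);
            # safetensors layout: 8-byte little-endian header length, JSON header, raw data
            header_len = int.from_bytes(m.read(8), "little")
            header = json.loads(m.read(header_len))
            dtypes = {"F32": torch.float32, "F16": torch.float16, "BF16": torch.bfloat16,
                      "F64": torch.float64, "I64": torch.int64, "I32": torch.int32,
                      "I16": torch.int16, "I8": torch.int8, "U8": torch.uint8, "BOOL": torch.bool}
            tensors = {}
            for name, info in header.items():
                if name == "__metadata__":
                    continue
                start, end = info["data_offsets"]
                buf = bytearray(m[8 + header_len + start : 8 + header_len + end])
                t = torch.frombuffer(buf, dtype=dtypes[info["dtype"]])
                tensors[name] = t.reshape(info["shape"]).to(device)
            return tensors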
@harubaru
harubaru / wd1-4.md
Last active September 11, 2023 04:12

Waifu Diffusion 1.4 Overview

[Image: generated at 512x512, then upscaled to 1024x1024, with Waifu Diffusion 1.3 Epoch 7.]

Goals

  • Improving image generation at different aspect ratios by using conditional masking during training. This lets the model see the entire image during training instead of center crops, which should give better results for full-body images, portraits, and overall composition.
  • Expanding the input context from 77 tokens to 231 tokens, or perhaps to an unlimited number of tokens. Of the 77 input tokens, only 75 are usable, which does not give nearly enough room for complex prompts that require a lot of detail. (A hedged sketch of the common chunk-and-concatenate workaround follows this list.)
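
The 75-usable-token figure comes from CLIP reserving two of its 77 positions for start/end tokens, and 231 is exactly 3 × 77. WD 1.4's exact mechanism isn't described here; a common community workaround is to encode the prompt in 75-token chunks and concatenate the encoder states, sketched below (model ID and function name are illustrative):

import torch
from transformers import CLIPTextModel, CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")

def encode_long_prompt(prompt: str, window: int = 75) -> torch.Tensor:
    # hedged sketch, not WD 1.4's implementation: chunk, encode, concatenate
    ids = tokenizer(prompt, add_special_tokens=False).input_ids
    chunks = [ids[i : i + window] for i in range(0, len(ids), window)] or [[]]
    states = []
    for chunk in chunks:
        padded = [tokenizer.bos_token_id] + chunk + [tokenizer.eos_token_id]
        padded += [tokenizer.pad_token_id] * (window + 2 - len(padded))
        with torch.no_grad():
            out = text_encoder(torch.tensor([padded]))
        states.append(out.last_hidden_state)
    return torch.cat(states, dim=1)  # (1, n_chunks * 77, hidden_dim)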