
MUHAMMAD MANNIR AHMAD (manniru)

@glowinthedark
glowinthedark / nllb200_translate.py
Last active June 6, 2024 11:14
Text translation with facebook/nllb-200-3.3B model
#!/usr/bin/env python3
# Dependencies
# =============================
# pip install nltk transformers
import argparse
import sys
from pathlib import Path
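The preview stops at the imports. For orientation, a minimal sketch of NLLB-200 translation with the transformers API (the example sentence and language codes are illustrative, not taken from the gist):

from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Source language is declared on the tokenizer; the target language is forced
# as the first generated token.
tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-200-3.3B", src_lang="eng_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-200-3.3B")

inputs = tokenizer("The weather is nice today.", return_tensors="pt")
tokens = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("fra_Latn"),  # translate into French
    max_length=64,
)
print(tokenizer.decode(tokens[0], skip_special_tokens=True))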
@shashankdeshpande
shashankdeshpande / README.md
Created July 21, 2023 11:55
Run Llama-2 on your local machine's CPU

Create environment

conda create -n llama2 python=3.9 
conda activate llama2

Install required libraries

# langchain
pip install langchain
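With the environment ready, one way to run Llama-2 on CPU is langchain's LlamaCpp wrapper; a minimal sketch, assuming llama-cpp-python is installed and a quantized GGML model has been downloaded locally (the path and prompt are placeholders):

from langchain.llms import LlamaCpp

# Placeholder path to a locally downloaded 4-bit GGML model
llm = LlamaCpp(model_path="models/llama-2-7b-chat.ggmlv3.q4_0.bin", n_ctx=2048, temperature=0.7)
print(llm("Q: Name the planets in the solar system. A:"))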
@adrienbrault
adrienbrault / llama2-mac-gpu.sh
Last active July 1, 2024 05:32
Run Llama-2-13B-chat locally on your M1/M2 Mac with GPU inference. Uses 10GB RAM. UPDATE: see https://twitter.com/simonw/status/1691495807319674880?s=20
# Clone llama.cpp
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
# Build it
make clean
LLAMA_METAL=1 make
# Download model
export MODEL=llama-2-13b-chat.ggmlv3.q4_0.bin
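The preview ends at the model variable; a hedged sketch of the remaining download-and-run steps in the same shell style (the Hugging Face URL and the generation flags are assumptions, not necessarily this script's exact commands):

# Download the quantized weights (assumed location: TheBloke's GGML build on Hugging Face)
curl -L -O "https://huggingface.co/TheBloke/Llama-2-13B-chat-GGML/resolve/main/${MODEL}"
# Run with Metal GPU offload (-ngl 1), a 2048-token context, and open-ended generation
./main -m "${MODEL}" -t 8 -ngl 1 --color -c 2048 --temp 0.7 -n -1 -p "[INST] Hello [/INST]"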
@younesbelkada
younesbelkada / finetune_llama_v2.py
Last active July 12, 2024 06:54
Fine-tune Llama v2 models on the Guanaco dataset
# coding=utf-8
# Copyright 2023 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
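The preview shows only the license header. For context, a hedged sketch of the QLoRA-style recipe such a script typically follows with trl, peft, and bitsandbytes (model name, dataset id, and hyperparameters are illustrative, and the pre-1.0 trl API is assumed):

import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig, TrainingArguments)
from trl import SFTTrainer

# Load the base model quantized to 4 bits so it fits on a single GPU
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4",
                                bnb_4bit_compute_dtype=torch.bfloat16)
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf",
                                             quantization_config=bnb_config, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
tokenizer.pad_token = tokenizer.eos_token

# Guanaco instruction data; LoRA adapters keep the trainable parameter count small
dataset = load_dataset("timdettmers/openassistant-guanaco", split="train")
peft_config = LoraConfig(r=64, lora_alpha=16, lora_dropout=0.1, task_type="CAUSAL_LM")

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    peft_config=peft_config,
    dataset_text_field="text",
    tokenizer=tokenizer,
    args=TrainingArguments(output_dir="./results", per_device_train_batch_size=4, max_steps=500),
)
trainer.train()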
@BillRaymond
BillRaymond / README.md
Last active July 16, 2024 21:17
Run AUTOMATIC1111's Stable Diffusion Web UI in Docker and VSC on Mac M-series chips

Run AUTOMATIC1111's Stable Diffusion Web UI on a Mac M1 using Docker and Visual Studio Code

👉 This is for testing purposes. It runs in Docker, so it will be slow and may crash. I suggest you read the FAQ and Docker Pre-requisites sections before jumping in.

Stable Diffusion Prompt: A beautifully colored cat sitting in the clouds with a rainbow in the background, in the style of Andy Warhol

Like this?

import { z } from "zod";
import { zodToTs, printNode } from "zod-to-ts";
// Replace with your `openai` thing
import { openai } from "../openai.server";
import endent from "endent";
function createJSONCompletion<T extends z.ZodType>({
  prompt,
  schema_name,
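The preview stops mid-signature. A hedged guess at the continuation, based only on the imports above (zodToTs and printNode can render a zod schema as TypeScript text to embed in the prompt; everything past the visible lines is speculative):

  schema,
}: {
  prompt: string;
  schema_name: string;
  schema: T;
}) {
  // Render the zod schema as TypeScript source so the model sees the expected shape
  const { node } = zodToTs(schema, schema_name);
  const typeText = printNode(node);
  // ...presumably: build the prompt with endent, call openai, validate the reply with schema.parse
}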
@cedrickchee
cedrickchee / llama-7b-m1.md
Last active July 13, 2024 04:59
4 Steps in Running LLaMA-7B on an M1 MacBook with `llama.cpp`

4 Steps in Running LLaMA-7B on an M1 MacBook

Large language model usability

The problem with large language models is that you can’t run them locally on your laptop. Thanks to Georgi Gerganov and his llama.cpp project, it is now possible to run Meta’s LLaMA on a single computer without a dedicated GPU.

Running LLaMA

There are multiple steps involved in running LLaMA locally on an M1 Mac after downloading the model weights: build llama.cpp, convert the weights to ggml format, quantize them to 4 bits, and run inference with ./main, as sketched below.
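A condensed sketch of those steps (assuming llama.cpp as of early 2023; the 7B paths are illustrative, while the log below happens to show a 65B run):

git clone https://github.com/ggerganov/llama.cpp.git && cd llama.cpp
make
# Place the downloaded LLaMA weights under ./models/7B/, then:
python3 convert-pth-to-ggml.py models/7B/ 1                                  # convert to ggml f16
./quantize ./models/7B/ggml-model-f16.bin ./models/7B/ggml-model-q4_0.bin 2  # quantize to 4 bits
./main -m ./models/7B/ggml-model-q4_0.bin -t 8 -n 128                        # run inference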

(venv) # Exit:0 2023-03-12 16:59:27 [r2q2@Reformer#[:~/opt/llama.cpp]
$(: !605 ) ./main -m ./models/65B/ggml-model-q4_0.bin -t 8 -n 128
main: seed = 1678658429
llama_model_load: loading model from './models/65B/ggml-model-q4_0.bin' - please wait ...
llama_model_load: n_vocab = 32000
llama_model_load: n_ctx = 512
llama_model_load: n_embd = 8192
llama_model_load: n_mult = 256
llama_model_load: n_head = 64
llama_model_load: n_layer = 80
@manniru
manniru / uuid.sh
Created October 17, 2022 04:20 — forked from markusfisch/uuid.sh
Generate a random UUID in bash
#!/usr/bin/env bash
# Generate a pseudo UUID (random hex laid out like a version-4 UUID)
uuid()
{
    local N B C='89ab'
    for (( N=0; N < 16; ++N )); do
        B=$(( $RANDOM%256 ))
        case $N in
            6) printf '4%x' $(( B%16 )) ;;                        # version nibble is always 4
            8) printf '%c%x' ${C:$RANDOM%${#C}:1} $(( B%16 )) ;;  # variant nibble: 8, 9, a or b
            3|5|7|9) printf '%02x-' $B ;;                         # byte followed by a group dash
            *) printf '%02x' $B ;;
        esac
    done
    echo
}
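Usage is a single call; each invocation prints a fresh identifier (the value below is just a random example):

uuid   # e.g. 9c0f26ae-d07b-4f72-8b53-6a1d3f0e4b2a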