Luis Gonzalez (funkytaco)

{
  "model_type": "mlm",
  "tamm_id": "afm-text-30b-instruct-v5-astc-6x6-20240709",
  "checkpoint": "model.mlm",
  "tokenizer": "afm-text-instruct-multilingual-100k-20240701",
  "original_checkpoint": "bolttorchmodel://x5bhyxgsn7/440",
  "export_date": "07/22/2024-11:36:33",
  "mlm_config": {
    "model_name": "ajax",
    "backend": "metal",
@funkytaco
funkytaco / prompt_alpaca_lora.py
Created July 26, 2023 05:09 — forked from ahoho/prompt_alpaca_lora.py
Create a huggingface pipeline with a lora-trained alpaca
from typing import Optional, Any
import torch
from transformers.utils import is_accelerate_available, is_bitsandbytes_available
from transformers import (
    AutoTokenizer,
    AutoModelForCausalLM,
    GenerationConfig,
    pipeline,
@funkytaco
funkytaco / llama2-mac-gpu.sh
Created July 21, 2023 20:57 — forked from adrienbrault/llama2-mac-gpu.sh
Run Llama-2-13B-chat locally on your M1/M2 Mac with GPU inference. Uses 10GB RAM
# Clone llama.cpp
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
# Build it
LLAMA_METAL=1 make
# Download model
export MODEL=llama-2-13b-chat.ggmlv3.q4_0.bin
wget "https://huggingface.co/TheBloke/Llama-2-13B-chat-GGML/resolve/main/${MODEL}"
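The preview cuts off after the model download. A minimal, hedged sketch of the inference step that would follow (the flag values here are illustrative assumptions, not taken from the gist; check your llama.cpp build's `--help` for current options):

```shell
# Run the downloaded model with Metal GPU inference.
# NOTE: the flags below are illustrative assumptions, not from the original gist.
MODEL=llama-2-13b-chat.ggmlv3.q4_0.bin
CMD="./main -m ${MODEL} --color --ctx_size 2048 -n 256 -p 'Hello llama'"
if [ -x ./main ] && [ -f "$MODEL" ]; then
  eval "$CMD"                                    # binary and model present: run it
else
  echo "prerequisites missing; would run: $CMD"  # dry-run fallback
fi
```

`./main` was the example binary name in llama.cpp at the time of this fork; recent builds name it `llama-cli`.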
@funkytaco
funkytaco / Llama-2-13B-chat-M1.md
Created July 21, 2023 05:34 — forked from gengwg/Llama-2-13B-chat-M1.md
Run Llama-2-13B-chat locally on M1 Macbook with GPU inference

Clone

gengwg@gengwg-mbp:~$ git clone https://github.com/ggerganov/llama.cpp.git
Cloning into 'llama.cpp'...
remote: Enumerating objects: 5267, done.
remote: Counting objects: 100% (2065/2065), done.
remote: Compressing objects: 100% (320/320), done.
remote: Total 5267 (delta 1878), reused 1870 (delta 1745), pack-reused 3202
Receiving objects: 100% (5267/5267), 4.24 MiB | 13.48 MiB/s, done.
@funkytaco
funkytaco / Dockerfile
Last active July 11, 2023 12:49 — forked from Rafat97/1.md
🐳🐳 Laravel Docker Compose 🐳🐳
#####################################
#
# php:7.4-apache setup
#
#####################################
FROM php:7.4-apache
USER root
WORKDIR /var/www/html
1561572401,horse-uat,Ubuntu,16,16.04,xenial
1561572405,moose-uat,Ubuntu,16,16.04,xenial
1561572408,duck-uat,Ubuntu,16,16.04,xenial
1561572413,goat-uat,Ubuntu,16,16.04,xenial
1561572415,horse-dev,Ubuntu,16,16.04,xenial
1561571759,moose-dev,Amazon,2016,NA,NA
1561572422,duck-dev,Ubuntu,16,16.04,xenial
1561572426,goat-dev,Ubuntu,14,14.04,trusty
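The rows above carry no header. Assuming the columns are record id, hostname, distro, major version, release, and codename (an assumption, nothing in the preview confirms it), a one-line awk pass can slice the inventory, e.g. listing the UAT hosts with their releases:

```shell
# Recreate a few sample rows (column meanings are assumed; the source has no header).
cat > /tmp/inventory.csv <<'EOF'
1561572401,horse-uat,Ubuntu,16,16.04,xenial
1561572405,moose-uat,Ubuntu,16,16.04,xenial
1561572426,goat-dev,Ubuntu,14,14.04,trusty
EOF
# Print hostname and release for every *-uat row.
awk -F, '$2 ~ /-uat$/ {print $2, $5}' /tmp/inventory.csv
```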
@funkytaco
funkytaco / ansible-aws-inventory-main.yml
Created October 25, 2022 13:59 — forked from nivleshc/ansible-aws-inventory-main.yml
The main inventory file - declare variables here. This calls the worker file (which must be prese
---
# Name: ansible-aws-inventory-main.yml
# Description: this is the main file that calls the worker file (ansible-aws-inventory-worker.yml) to create an inventory of all the
# specific aws resources.
# Below are the resources that will be inventoried
# - vpc
# - subnet
# - igw
# - cgw
# - vgw
@funkytaco
funkytaco / gist:6c597be7bb5725919f2efb262aec7aad
Created October 7, 2022 19:51
purestorage.flasharray examples
To check whether it is installed, run ansible-galaxy collection list.
To install it, use: ansible-galaxy collection install purestorage.flasharray.
To use it in a playbook, specify: purestorage.flasharray.purefa_volume.
- name: Create new volume named foo with a QoS limit
  purefa_volume:
    name: foo
@funkytaco
funkytaco / kubectl-cheat.md
Created June 16, 2022 17:44 — forked from b4nst/kubectl-cheat.md
[Kubectl cheat sheet] Kubectl useful stuff #docker #kubectl #kubernetes
@funkytaco
funkytaco / charts_compare.sh
Created August 19, 2021 15:24 — forked from ptx96/charts_compare.sh
./charts_compare.sh <TEMPLATENAME>
#!/bin/bash
set -euo pipefail
TEMPLATE=$1
yttFolder="ckd-capsule-app/"
helmFolder="capsule"
pushd $yttFolder