@mtisz
mtisz / start_miner.py
Last active October 22, 2021 04:53
A Python 3 script to run an Ethereum miner with self-monitoring and self-restart
import subprocess as sp                               # launch and supervise the miner process
import logging                                        # record miner output and restart events
from abc import ABC                                   # base class for the miner wrapper
import time                                           # polling / back-off between health checks
from concurrent.futures import ThreadPoolExecutor     # run the monitor alongside the miner
"""
Your terminal command goes here
"""
@mtisz
mtisz / convert.py
Created March 21, 2024 18:58 — forked from chu-tianxiang/convert.py
Convert grok-1 weight to torch
import numpy as np
import torch
import jax
from tqdm import tqdm
# model.py and runners.py come from the grok-1 reference implementation (github.com/xai-org/grok-1)
from model import LanguageModelConfig, TransformerConfig, QuantizedWeight8bit as QW8Bit
from runners import InferenceRunner, ModelRunner, sample_from_model
CKPT_PATH = "./checkpoints"  # directory holding the original JAX checkpoint
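The preview ends at the checkpoint path. The heart of such a conversion is walking the nested JAX parameter dict, pulling each array to host memory, and re-saving it as a torch tensor; the sketch below shows only that generic pattern (the jax_to_torch name and the float32 cast are assumptions, and it omits the dequantization of the QuantizedWeight8bit tensors that the real script has to handle).

def jax_to_torch(tree, prefix=""):
    """Recursively convert a nested dict of JAX arrays into a flat torch state dict."""
    state_dict = {}
    for key, value in tree.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            state_dict.update(jax_to_torch(value, prefix=f"{name}."))
        else:
            # np.asarray moves the array to host memory; cast to float32 for portability
            state_dict[name] = torch.from_numpy(np.asarray(value, dtype=np.float32))
    return state_dict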
@mtisz
mtisz / deploy.yaml
Last active March 29, 2024 20:27
akash-axolotl
---
version: "2.0"
services:
  service-1:
    image: winglian/axolotl:main-py3.11-cu121-2.2.1
    expose:
      - port: 80
        as: 80
        to:
          - global: true
@mtisz
mtisz / mixtral-8x22B.yaml
Created May 9, 2024 14:57
Axolotl Config for Mixtral-8x22B
base_model: mistral-community/Mixtral-8x22B-v0.1
model_type: MixtralForCausalLM
tokenizer_type: AutoTokenizer
is_mistral_derived_model: false
trust_remote_code: true
load_in_8bit: false
load_in_4bit: true
strict: false
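The preview shows only the model and quantization header of the config. The load_in_4bit: true flag tells axolotl to load the base weights through bitsandbytes 4-bit quantization for QLoRA-style training; the snippet below is a sketch of roughly what that flag does, expressed as a plain transformers call, not part of the gist, and the NF4/bfloat16 settings are assumed defaults rather than values read from this config.

from transformers import AutoModelForCausalLM, BitsAndBytesConfig
import torch

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # assumed quant type
    bnb_4bit_compute_dtype=torch.bfloat16,  # assumed compute dtype
)
model = AutoModelForCausalLM.from_pretrained(
    "mistral-community/Mixtral-8x22B-v0.1",
    quantization_config=bnb_config,
    device_map="auto",
)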
@mtisz
mtisz / llama-3-70B-qlora.yaml
Created May 15, 2024 16:47
Axolotl Config for Llama-3-70B QLoRA
base_model: meta-llama/Meta-Llama-3-70B
model_type: LlamaForCausalLM
tokenizer_type: AutoTokenizer
load_in_8bit: false
load_in_4bit: true
strict: false
datasets:
  - path: /home/migel/ai_datasets/tess-v1.5b-chatml.jsonl
@mtisz
mtisz / inference_gemma2.py
Last active July 1, 2024 08:21
Gemma2 Inference
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
# model_path = "/home/migel/gemma-2-27b"
model_path = "google/gemma-2-27b-it"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="auto",
    torch_dtype=torch.bfloat16,  # dtype assumed; the gist preview is truncated at this point
)
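With the model loaded, a typical generation call using the Gemma-2 chat template would look roughly like this; the prompt text and generation parameters below are illustrative, not taken from the gist.

messages = [{"role": "user", "content": "Write a haiku about attention."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    output_ids = model.generate(input_ids, max_new_tokens=128, do_sample=False)

# drop the prompt tokens before decoding the reply
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))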