adrienbrault / llama2-mac-gpu.sh
Last active April 22, 2024 08:47
Run Llama-2-13B-chat locally on your M1/M2 Mac with GPU inference. Uses 10GB RAM. UPDATE: see https://twitter.com/simonw/status/1691495807319674880?s=20
# Clone llama.cpp
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
# Build it
make clean
LLAMA_METAL=1 make
# Download model
export MODEL=llama-2-13b-chat.ggmlv3.q4_0.bin
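The excerpt cuts off after setting MODEL. A plausible continuation, sketched here on the assumption that the weights are pulled from TheBloke's GGML mirror on Hugging Face and that inference runs through llama.cpp's main binary with Metal offload, would look like this (URL and flags are assumptions, not the gist's verbatim text):

# Download the quantized weights (download URL is an assumption; TheBloke mirrored the GGML builds)
wget "https://huggingface.co/TheBloke/Llama-2-13B-chat-GGML/resolve/main/$MODEL"
# Run an interactive chat session; -ngl 1 offloads layers to the Metal GPU
./main -m "./$MODEL" -t 8 -n 256 --repeat_penalty 1.0 --color -i -r "User:" -ngl 1 \
  -f prompts/chat-with-bob.txt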
t0mm4rx / impermax.py
Created January 20, 2022 11:02
Python script to fetch an Impermax.finance position's total collateral and debt
"""Impermax related functions.
imxc = Impermax collateral token, given in exchange of the actual LP pair.
slp = staked LP token.
"""
from providers import polygon
from chain_data import get_lp_pair_holdings, get_token_decimals
import json
imxc_abi = json.load(open("./abis/imxc.json", "rb"))  # ABI of the Impermax collateral (imxc) token contract
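The excerpt ends right after loading the ABI. A minimal sketch of how the collateral side of a position could then be read follows; it assumes `polygon` behaves like a web3.py `Web3` instance, that the imxc contract exposes `balanceOf` and a 1e18-scaled `exchangeRate` view, and that `get_token_decimals` takes a token address. These are assumptions for illustration, not the gist's actual code.

def get_collateral(wallet: str, imxc_address: str) -> float:
    """Sketch: wallet's Impermax collateral expressed in underlying LP tokens."""
    # Assumption: `polygon` is a connected web3.py Web3 instance
    imxc = polygon.eth.contract(address=imxc_address, abi=imxc_abi)
    balance = imxc.functions.balanceOf(wallet).call()  # imxc tokens held by the wallet
    rate = imxc.functions.exchangeRate().call()        # imxc -> underlying LP rate, assumed 1e18-scaled
    decimals = get_token_decimals(imxc_address)        # helper from chain_data (assumed signature)
    return balance * rate / 10 ** (18 + decimals)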