jolod / llama-7b-m1.md — Created May 24, 2023 18:41, forked from cedrickchee/llama-7b-m1.md
4 Steps in Running LLaMA-7B on a M1 MacBook with `llama.cpp`

The usability of large language models

The problem with large language models is that you can't run them locally on your laptop. Thanks to Georgi Gerganov and his llama.cpp project, it is now possible to run Meta's LLaMA on a single computer without a dedicated GPU.

Running LLaMA

There are multiple steps involved in running LLaMA locally on an M1 Mac after downloading the model weights.
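As a rough sketch of those steps (the model paths and script names below reflect the 2023-era llama.cpp layout and are assumptions; the project's CLI has changed since, so check the current README):

```shell
# Sketch only: assumes weights were already downloaded into ./models/7B/
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make                                            # build natively on Apple Silicon

# Convert the PyTorch checkpoint to ggml format (script name from that era)
python3 convert-pth-to-ggml.py models/7B/ 1

# Quantize to 4 bits so the 7B model fits in laptop RAM
./quantize ./models/7B/ggml-model-f16.bin ./models/7B/ggml-model-q4_0.bin 2

# Run inference
./main -m ./models/7B/ggml-model-q4_0.bin -n 128 -p "The first man on the moon"
```

Quantizing to 4 bits is what makes this practical: it cuts the 7B model's memory footprint to roughly 4 GB, at a modest cost in output quality.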

jolod / AoC-2017-01-part1.sh — Last active December 18, 2017 11:12
AoC 2017
perl -nE'$s = 0; $s += $_ for /(\d)(?=\1)/g, /^(\d).*\1$/; say $s' < day1.txt
# I rewrote the code so that you can keep the os calls unchanged, only adding
# yield before them. The interpreter is now represented by a class with static
# methods instead; the class only serves as a dictionary of functions.
import collections
import os
import types
Op = collections.namedtuple('Op', ['op', 'args', 'kwargs'])
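The fragment above only shows the `Op` request type. A minimal sketch of the driving loop such a generator-based interpreter implies (the `handlers` table and the sample `program` here are invented for illustration; the original dispatches to `os` calls instead):

```python
import collections

# Same request record as in the snippet: an operation name plus its arguments.
Op = collections.namedtuple('Op', ['op', 'args', 'kwargs'])

def run(gen):
    """Drive a generator that yields Op requests, sending each result back in."""
    handlers = {                       # illustrative stand-ins for the os calls
        'add': lambda a, b: a + b,
        'upper': lambda s: s.upper(),
    }
    result = None                      # first send() into a fresh generator must be None
    try:
        while True:
            request = gen.send(result)
            result = handlers[request.op](*request.args, **request.kwargs)
    except StopIteration as stop:
        return stop.value              # the generator's return value

def program():
    total = yield Op('add', (2, 3), {})
    text = yield Op('upper', ('total: %d' % total,), {})
    return text

print(run(program()))  # -> TOTAL: 5
```

The point of the pattern is that `program` never performs effects itself; it only describes them as `Op` values, so the interpreter can be swapped out (e.g. for testing) without touching the program's code.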