Matthijs Ewoud van Loon (mevanloon)
```
MAILTO=""
# Run WP-Cron every minute and append a Unix timestamp to latestcron.txt
*/1 * * * * cd $HOME/public_html/ && $HOME/bin/wp cron event run --due-now && date +%s >> latestcron.txt
```
mevanloon / llama-7b-m1.md
Created March 15, 2023 14:32 — forked from cedrickchee/llama-7b-m1.md
# 4 Steps in Running LLaMA-7B on an M1 MacBook with `llama.cpp`

## Large language model usability

The problem with large language models is that you can't run them locally on your laptop. Thanks to Georgi Gerganov and his llama.cpp project, it is now possible to run Meta's LLaMA on a single computer without a dedicated GPU.

## Running LLaMA

There are multiple steps involved in running LLaMA locally on an M1 Mac after downloading the model weights.
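A sketch of the typical workflow, assuming the script and binary names used by the llama.cpp repository around early 2023 (`convert-pth-to-ggml.py`, `quantize`, `main`); the repo has since renamed several of these, so check its current README:

```shell
# 1. Clone and build llama.cpp (compiles with ARM NEON support on an M1)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make

# 2. Place the downloaded LLaMA-7B weights under models/7B/, then
#    convert them to the ggml format (the "1" selects f16 output)
python3 convert-pth-to-ggml.py models/7B/ 1

# 3. Quantize the f16 model to 4-bit to shrink memory usage
./quantize ./models/7B/ggml-model-f16.bin ./models/7B/ggml-model-q4_0.bin 2

# 4. Run inference with a prompt
./main -m ./models/7B/ggml-model-q4_0.bin -n 128 -p "The first man on the moon was"
```

These commands require the separately obtained model weights, so they are not runnable as-is without them.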

```swift
import SwiftUI
import PlaygroundSupport

// PreferenceKey used to report the measured height of a badges view
// up the view hierarchy.
struct BadgesHeightPreferenceKey: PreferenceKey {
    static var defaultValue: Int = 0
    static func reduce(value: inout Int, nextValue: () -> Int) {
        // Keep the most recent value reported by a child view.
        value = nextValue()
    }
}
```
```bash
#!/bin/bash
##### Fill out the following #####

# Your domains
domain="example.com"
additional_domains="www.$domain" # colon-separated (:)

# DirectAdmin login
user="debXXX"
password=""

# For Let's Encrypt notifications
```