Jorge Rivera (jorge-ui) • Puerto Rico
🎯 React.js! UI/UX/you name it! I build GUIs for the web :)
@cedrickchee
cedrickchee / llama-7b-m1.md
Last active October 22, 2025 20:38
4 Steps in Running LLaMA-7B on a M1 MacBook with `llama.cpp`

4 Steps in Running LLaMA-7B on an M1 MacBook

The usability of large language models

The problem with large language models is that you normally can’t run them locally on your laptop. Thanks to Georgi Gerganov and his llama.cpp project, it is now possible to run Meta’s LLaMA on a single computer without a dedicated GPU.

Running LLaMA

There are multiple steps involved in running LLaMA locally on an M1 Mac after downloading the model weights; the typical workflow is sketched below.
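A minimal sketch of those steps, assuming an early (circa March 2023) checkout of llama.cpp and the original LLaMA-7B weights already placed in `models/7B`; the exact script and binary names have been renamed in later versions of the project:

```sh
# 1. Get and build llama.cpp (runs on Apple Silicon via ARM NEON / Accelerate).
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make

# 2. Convert the downloaded PyTorch weights to the project's ggml format.
python3 convert-pth-to-ggml.py models/7B/ 1

# 3. Quantize the converted model to 4 bits so it fits in a MacBook's RAM.
./quantize ./models/7B/ggml-model-f16.bin ./models/7B/ggml-model-q4_0.bin 2

# 4. Run inference with a prompt.
./main -m ./models/7B/ggml-model-q4_0.bin -n 128 -p "The first man on the moon was"
```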

@tayvano
tayvano / gist:6e2d456a9897f55025e25035478a3a50
Created February 19, 2017 05:29
complete list of ffmpeg flags / commands
Originally from a post dated 2015-05-29: http://ubwg.net/b/full-list-of-ffmpeg-flags-and-options
This is the complete list that ffmpeg prints when run with ffmpeg -h full.
usage: ffmpeg [options] [[infile options] -i infile]... {[outfile options] outfile}...
Getting help:
-h      -- print basic options
-h long -- print more options
-h full -- print all options (including all format and codec specific options, very long)
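Since the full listing runs to thousands of lines, it is often more practical to ask ffmpeg for help on a single component, or to grep the full dump. The commands below are standard ffmpeg help forms rather than part of the original list, shown purely as an illustration:

```sh
# Help for one encoder, muxer, or filter instead of the whole dump.
ffmpeg -h encoder=libx264
ffmpeg -h muxer=mp4
ffmpeg -h filter=scale

# Search the full listing for a specific option name (the banner goes to
# stderr, so merge the streams before filtering).
ffmpeg -h full 2>&1 | grep -i crf
```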