Aziz Berkay Yesilyurt abyesilyurt

  • Netherlands
@adrienbrault
adrienbrault / llama2-mac-gpu.sh
Last active April 22, 2024 08:47
Run Llama-2-13B-chat locally on your M1/M2 Mac with GPU inference. Uses 10GB RAM. UPDATE: see https://twitter.com/simonw/status/1691495807319674880?s=20
# Clone llama.cpp
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
# Build it
make clean
LLAMA_METAL=1 make
# Download model
export MODEL=llama-2-13b-chat.ggmlv3.q4_0.bin
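# The captured snippet cuts off after setting MODEL. A minimal sketch of the
# remaining steps, assuming the quantized model is fetched from the
# TheBloke/Llama-2-13B-chat-GGML repository on Hugging Face and that -ngl 1 is
# used to enable Metal offload; the URL and run flags are assumptions, not part
# of the original gist.
curl -L -O "https://huggingface.co/TheBloke/Llama-2-13B-chat-GGML/resolve/main/${MODEL}"
# Run a short prompt with GPU offload (flags illustrative)
./main -m "./${MODEL}" -n 256 -ngl 1 -p "Hello"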
@nico-lab
nico-lab / h264_nvenc.txt
Last active April 3, 2024 11:16
ffmpeg -h encoder=h264_nvenc
Encoder h264_nvenc [NVIDIA NVENC H.264 encoder]:
    General capabilities: dr1 delay hardware
    Threading capabilities: none
    Supported hardware devices: cuda cuda d3d11va d3d11va
    Supported pixel formats: yuv420p nv12 p010le yuv444p p016le yuv444p16le bgr0 bgra rgb0 rgba x2rgb10le x2bgr10le gbrp gbrp16le cuda d3d11
h264_nvenc AVOptions:
  -preset            <int>        E..V....... Set the encoding preset (from 0 to 18) (default p4)
     default         0            E..V.......
     slow            1            E..V....... hq 2 passes
     medium          2            E..V....... hq 1 pass
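As a usage sketch (not part of the captured help output), a preset from the list above is selected with -preset; the file names and bitrate are placeholders:

# NVENC H.264 encode using the "slow" (hq 2-pass) preset, audio copied unchanged
ffmpeg -i input.mp4 -c:v h264_nvenc -preset slow -b:v 5M -c:a copy output.mp4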
@erikzenker
erikzenker / CMakeLists.txt
Last active April 15, 2024 17:17
CMake CUDA + C++ in separate files
# CMAKE FILE to separately compile cuda and c++ files
# with the C++11 standard
#
#
# Folder structure:
#
# |
# +--main.cpp (with C++11 content)
# +--include/
# | |