
Paresh Mathur rick2047

  • ASML Netherlands
  • Eindhoven, The Netherlands
@rain-1
rain-1 / llama-home.md
Last active April 28, 2024 18:42
How to run Llama 13B with a 6GB graphics card

This worked on 14/May/23. The instructions will probably require updating in the future.

llama is a text-prediction model similar to GPT-2 and to the version of GPT-3 that has not been fine-tuned yet. It should also be possible to run fine-tuned versions (like Alpaca or Vicuna, which are more focused on answering questions) with this, I think.

Note: I have been told that this does not support multiple GPUs. It can only use a single GPU.

It is now possible to run Llama 13B with a 6 GB graphics card (e.g. an RTX 2060), thanks to the amazing work on llama.cpp. The latest change adds CUDA/cuBLAS support, which lets you pick an arbitrary number of the transformer layers to run on the GPU. This is perfect for low VRAM.

  • Clone llama.cpp from git. I am on commit 08737ef720f0510c7ec2aa84d7f70c691073c35d.
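
The clone-and-offload steps above can be sketched as the following commands. This is a hedged sketch, not the gist author's exact invocation: the model path is hypothetical, and the `-ngl` (number of GPU layers) value is just a starting point to tune against your VRAM.

```shell
# Clone llama.cpp and pin the commit mentioned above.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git checkout 08737ef720f0510c7ec2aa84d7f70c691073c35d

# Build with cuBLAS support (assumes the CUDA toolkit is installed).
make LLAMA_CUBLAS=1

# Offload some transformer layers to the GPU via -ngl; raise or lower
# the number until the model fits in your 6 GB of VRAM.
# The model path below is illustrative -- point it at your own
# quantized 13B model file.
./main -m ./models/13B/ggml-model-q4_0.bin -ngl 18 -p "Hello"
```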
@qntmpkts
qntmpkts / install.sh
Last active April 13, 2022 09:07
Installer for oh-my-zsh on Termux
main() {
  # Use colors, but only if connected to a terminal, and that terminal
  # supports them.
  if which tput >/dev/null 2>&1; then
    ncolors=$(tput colors)
  fi
  if [ -t 1 ] && [ -n "$ncolors" ] && [ "$ncolors" -ge 8 ]; then
    RED="$(tput setaf 1)"
    GREEN="$(tput setaf 2)"
    YELLOW="$(tput setaf 3)"
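
The terminal-capability check above is a reusable pattern. A self-contained sketch of it, with an added fallback so the variables are always defined even when `tput` is missing or stdout is not a terminal:

```shell
# Detect color support; fall back to empty strings so callers can
# interpolate $RED/$RESET unconditionally.
if command -v tput >/dev/null 2>&1; then
  ncolors=$(tput colors 2>/dev/null || echo 0)
else
  ncolors=0
fi
if [ -t 1 ] && [ "${ncolors:-0}" -ge 8 ]; then
  RED="$(tput setaf 1)"
  RESET="$(tput sgr0)"
else
  RED=""
  RESET=""
fi
printf '%sError:%s example message\n' "$RED" "$RESET"
```

When the script is piped or run non-interactively, `[ -t 1 ]` fails and the message is printed without escape codes.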
@unamashana
unamashana / hsi_notifications
Created February 3, 2011 18:28
Email notifications for HackerStreet India
require 'rubygems'
require 'rb-inotify'
require 'sqlite3'
require 'active_record'
require 'mail'
require 'log4r'
include Log4r

# Log to hsi_notifications.log; :trunc => false appends to the existing
# file instead of wiping it on startup (the original had a typo, :trunch).
MYLOG = Logger.new 'mylog'
file = FileOutputter.new('fileOutputter', :filename => 'hsi_notifications.log', :trunc => false)
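
Log4r has long been unmaintained; as a hedged alternative to the setup above, Ruby's standard-library Logger gives the same behavior (appending to an existing log file rather than truncating it) with no extra gem. The log message here is illustrative:

```ruby
require 'logger'

# Stdlib equivalent of the Log4r FileOutputter above: Logger opens the
# file in append mode by default, so restarts do not truncate the log.
log = Logger.new('hsi_notifications.log')
log.info('HSI notifier started')
```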