Ian Scrivener (ianscrivener)
ianscrivener / Install whatever Node.js version on a Raspberry Pi, including armv6l
Use the unofficial builds of Node.js to install Node.js on a Raspberry Pi (armv6l).
# Download the appropriate armv6l build from https://unofficial-builds.nodejs.org/download/release/, e.g. https://unofficial-builds.nodejs.org/download/release/v18.9.1/node-v18.9.1-linux-armv6l.tar.gz
wget https://unofficial-builds.nodejs.org/download/release/v18.9.1/node-v18.9.1-linux-armv6l.tar.gz
tar -xzf node-v18.9.1-linux-armv6l.tar.gz
cd node-v18.9.1-linux-armv6l
sudo cp -R * /usr/local
node -v
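Before downloading, it can help to confirm the CPU architecture and build the matching URL. A sketch only: the version string is the example from above, and file names for other architectures should be checked against the release listing.

```shell
# Build a download URL from this machine's CPU architecture (sketch;
# verify names against https://unofficial-builds.nodejs.org/download/release/).
NODE_VER=v18.9.1
ARCH=$(uname -m)                 # armv6l on Pi Zero / Pi 1
NODE_TAR="node-$NODE_VER-linux-$ARCH.tar.gz"
echo "https://unofficial-builds.nodejs.org/download/release/$NODE_VER/$NODE_TAR"
```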
export AZ_MAIN_NAME=Kube2
export AZ_RG=RG_$AZ_MAIN_NAME
export AZ_VNET=VNET_$AZ_MAIN_NAME
export AZ_IP=Public_IP_$AZ_MAIN_NAME
export AZ_SUBNET=Subnet_$AZ_MAIN_NAME
export AZ_NSG=NetworkSecurityGroup_$AZ_MAIN_NAME
export AZ_NAME=VM_$AZ_MAIN_NAME
export AZ_NIC=NIC_$AZ_MAIN_NAME
env | grep AZ
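These derived names can then be fed straight into the Azure CLI. A hedged sketch: the region and the exact resource set are assumptions, and the `az` commands are commented out because they need an authenticated session.

```shell
# Re-derive two of the names and show how the variables would be consumed.
export AZ_MAIN_NAME=Kube2
export AZ_RG=RG_$AZ_MAIN_NAME
export AZ_VNET=VNET_$AZ_MAIN_NAME

# az group create --name "$AZ_RG" --location australiaeast
# az network vnet create --resource-group "$AZ_RG" --name "$AZ_VNET"

echo "$AZ_RG"   # RG_Kube2
```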
ianscrivener / remove-systemctl-service.sh
Created December 29, 2023 11:08 — forked from binhqd/remove-systemctl-service.sh
Remove systemctl service
sudo systemctl stop [servicename]
sudo systemctl disable [servicename]
# rm /etc/systemd/system/[servicename].service
# rm any /etc/systemd/system/ symlinks that reference [servicename]
sudo systemctl daemon-reload
sudo systemctl reset-failed
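The five steps above can be wrapped in a small dry-run helper that prints the commands for a given service before you run them for real (the function name and `myapp` are made up here):

```shell
# Print (don't execute) the removal commands for one service.
remove_service_dry_run() {
  svc="$1"
  echo "sudo systemctl stop $svc"
  echo "sudo systemctl disable $svc"
  echo "sudo rm /etc/systemd/system/$svc.service"
  echo "sudo systemctl daemon-reload"
  echo "sudo systemctl reset-failed"
}
remove_service_dry_run myapp
```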
from time import sleep
import ssl
import json
import os
from paho.mqtt.client import Client
username = "your VRM email"
password = "your VRM password"
portal_id = "your VRM portal ID"
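The VRM MQTT broker is sharded across mqtt0–mqtt127.victronenergy.com; Victron's dbus-mqtt documentation describes picking the host from the portal ID's character codes modulo 128. A minimal sketch of that mapping (treat the formula as an assumption and verify it against the current VRM docs):

```python
# Derive the VRM MQTT broker hostname for a given portal ID.
# Assumption: index = sum of the portal ID's character codes, modulo 128.
def vrm_broker_host(portal_id: str) -> str:
    index = sum(ord(c) for c in portal_id) % 128
    return f"mqtt{index}.victronenergy.com"

print(vrm_broker_host("abc123"))  # mqtt60.victronenergy.com
```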
ianscrivener / LM-Studio-preset.json
Last active April 17, 2024 09:41
LM-Studio-preset.json
{
  "name": "My New Config Preset",
  "load_params": {
    "n_ctx": 1500,
    "n_batch": 512,
    "rope_freq_base": 10000,
    "rope_freq_scale": 1,
    "n_gpu_layers": 1,
    "use_mlock": true,
    "main_gpu": 0,
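A quick way to sanity-check a preset like this before pointing LM Studio at it is to parse it with the standard `json` module. The fields below mirror the snippet above, with the truncated tail closed off so the example parses; check your actual file rather than this trimmed copy.

```python
import json

# A trimmed, closed-off copy of the preset above;
# validate the load_params that matter most.
preset = json.loads("""
{
  "name": "My New Config Preset",
  "load_params": {
    "n_ctx": 1500,
    "n_batch": 512,
    "n_gpu_layers": 1,
    "use_mlock": true,
    "main_gpu": 0
  }
}
""")
lp = preset["load_params"]
assert lp["n_ctx"] > 0 and lp["n_batch"] > 0
print(preset["name"], lp["n_ctx"])  # My New Config Preset 1500
```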
ianscrivener / setup.sh
Created July 14, 2023 23:00
setup NVidia GPU Docker for llama.cpp and run perplexity test
# NB: we are running in an nvidia/cuda:11.x.x-devel-ubuntu22.04 container
# install some extra Ubuntu packages
apt install -y unzip libopenblas-dev nano git-lfs aria2 jq build-essential python3 python3-pip git
pip install --upgrade pip setuptools wheel
# clone llama.cpp repo
cd /workspace
git clone https://github.com/ggerganov/llama.cpp.git
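The gist preview cuts off before the build itself; a likely continuation, per the llama.cpp README of that era, is a cuBLAS build followed by the perplexity run. It is printed here as a dry run because the actual build needs the CUDA toolchain from the container above, and the model and dataset paths are placeholders.

```shell
# The assumed build/test commands, kept in a variable and only printed here.
BUILD_CMDS='cd /workspace/llama.cpp
LLAMA_CUBLAS=1 make -j
./perplexity -m ./models/model.bin -f ./wikitext-2-raw/wiki.test.raw'
echo "$BUILD_CMDS"
```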

Test

  • try installing llama-cpp-python and llama-cpp-python[server] from pip, WITH the ggml-metal.metal file (Metal GPU support) in the Python executable's directory

Environment

  • from previous test

Result

  • llama-cpp-python[server] FAILS

Steps

Test

  • try rebuilding llama-cpp-python and llama-cpp-python[server] with GPU support, WITH the ggml-metal.metal file (Metal GPU support) copied to the Python executable's directory

Environment

  • from previous test

Result

  • llama-cpp-python[server] FAILS

Test

  • try adding the ggml-metal.metal file (Metal GPU support) to the Python executable's directory

Environment

  • from previous test

Result

  • llama-cpp-python[server] FAILS

Test

  • try rebuilding llama.cpp with Metal GPU support

Environment

  • from previous test

Result

  • llama-cpp-python[server] FAILS 😞
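For the record, the rebuild these tests were attempting is usually done by forcing a source build with Metal enabled, per the llama-cpp-python README. Shown as a dry run; actually running it needs macOS with the Xcode command-line tools.

```shell
# The assumed reinstall command, printed rather than executed.
METAL_INSTALL='CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 pip install --force-reinstall --no-cache-dir llama-cpp-python'
echo "$METAL_INSTALL"
```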