
nekiee13 / gist:cbdc058c13411048cfe6ff5fd2e22c0e (created May 6, 2024)
Llama-cpp-python Installation procedure
# HW
0. NVIDIA RTX 4090 with driver 551.86 (latest, or close to it), 13th-gen Intel CPU, 64 GB DDR5 RAM, Windows 11
In a Command Prompt (CMD), verify the driver and CUDA version:
`nvidia-smi`
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 551.86                 Driver Version: 551.86         CUDA Version: 12.4     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                     TCC/WDDM  | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
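With the driver confirmed, a quick smoke test for the finished install is to load a model with full GPU offload. A minimal sketch, assuming llama-cpp-python was built with CUDA support; the model path below is a hypothetical placeholder:

```python
# Smoke test: load a GGUF model with all layers offloaded to the GPU.
# Assumes llama-cpp-python was built with CUDA; the model path is a
# hypothetical placeholder -- point it at your own GGUF file.
from llama_cpp import Llama

llm = Llama(
    model_path=r"d:\LLM\models\some-model.gguf",  # hypothetical path
    n_gpu_layers=-1,  # offload every layer to the RTX 4090
    verbose=True,     # prints backend/offload info, so CUDA use is visible
)

out = llm("Q: What is 2 + 2? A:", max_tokens=8)
print(out["choices"][0]["text"])
```

If the build picked up CUDA, the verbose load log reports layers being offloaded to the GPU; a CPU-only build offloads nothing.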
nekiee13 / gist:c8ec43bce5fd75d20e38b31a613fd83d (created January 30, 2024)
Install Ollama under Win11 & WSL - CUDA Installation guide
In a CMD prompt, verify that WSL2 is installed:
`wsl --list --verbose`
or
`wsl -l -v`
Clone the CUDA samples - I used d:\LLM\Ollama as the location so I can find the samples easily:
`d: && cd d:\LLM\Ollama`
`git clone --recursive -j6 https://github.com/NVIDIA/cuda-samples.git`
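Before building the samples, it helps to confirm the GPU is actually visible from inside WSL. A small sketch (Python, for consistency with the rest of these notes), assuming the Windows NVIDIA driver exposes nvidia-smi to the default WSL distro, as it does on current drivers:

```python
# Check that the GPU is visible inside WSL by running nvidia-smi
# through the wsl launcher from the Windows side.
import subprocess

result = subprocess.run(
    ["wsl", "nvidia-smi"],      # run nvidia-smi inside the default distro
    capture_output=True, text=True,
)
if result.returncode == 0:
    print(result.stdout)        # driver/CUDA versions as seen from WSL
else:
    print("GPU not visible in WSL:", result.stderr)
```

The same check works directly from a WSL shell by running `nvidia-smi` there; the wrapper just keeps everything in one CMD session.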
nekiee13 / Layout.py (created November 30, 2023)
Unexpected Layoutparser output
import os
import json

import numpy as np
import pandas as pd
from PIL import Image
from matplotlib import pyplot as plt
import pytesseract

# layoutparser: Detectron2-based layout model and the element types it returns
from layoutparser.models.detectron2.layoutmodel import Detectron2LayoutModel
from layoutparser.elements import Layout, TextBlock, Rectangle
from layoutparser.file_utils import is_torch_cuda_available  #, PathManager
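For reference, these imports are normally followed by something like the sketch below: instantiating the Detectron2 model from the layoutparser model zoo and running detection on a page image. The PubLayNet config path and label map follow the layoutparser docs; the image file name is a hypothetical placeholder, and the snippet reuses the imports above:

```python
# Sketch: detect layout blocks on one page image (file name is hypothetical).
image = np.array(Image.open("page_1.png").convert("RGB"))

model = Detectron2LayoutModel(
    "lp://PubLayNet/faster_rcnn_R_50_FPN_3x/config",           # model-zoo config
    extra_config=["MODEL.ROI_HEADS.SCORE_THRESH_TEST", 0.8],   # confidence cutoff
    label_map={0: "Text", 1: "Title", 2: "List", 3: "Table", 4: "Figure"},
)

layout = model.detect(image)              # a Layout of TextBlock elements
for block in layout:
    print(block.type, block.coordinates)  # e.g. "Text" (x1, y1, x2, y2)
```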