mseri / code_completion_ide.py
Created November 2, 2024 16:07 — forked from iamaziz/code_completion_ide.py
Simple offline code completion example with Ollama/Streamlit and code execution
import sys
from io import StringIO

import streamlit as st  # pip install streamlit
from code_editor import code_editor  # pip install streamlit_code_editor
import ollama as ol  # pip install ollama

st.set_page_config(layout='wide')
st.title('`Offline code completion`')
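The preview ends before the completion and execution logic. A minimal sketch of how the missing half might look, assuming code_editor returns a dict whose 'text' key holds the editor buffer and using ollama.chat with a placeholder model name:

def complete(prompt: str) -> str:
    # Ask the local model to continue the code; 'codellama' is an assumption.
    resp = ol.chat(model='codellama',
                   messages=[{'role': 'user', 'content': prompt}])
    return resp['message']['content']

def run_code(src: str) -> str:
    # Capture stdout of exec() so the app can display what the code printed.
    old_stdout, sys.stdout = sys.stdout, StringIO()
    try:
        exec(src)
        return sys.stdout.getvalue()
    finally:
        sys.stdout = old_stdout

editor = code_editor('# write some code')
if editor.get('text'):
    st.code(complete(editor['text']), language='python')
    st.text(run_code(editor['text']))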
mseri / l3min.py
Created November 2, 2024 16:06 — forked from awni/l3min.py
A minimal, fast implementation of Llama 3.1 in MLX.
"""
A minimal, fast example generating text with Llama 3.1 in MLX.
To run, install the requirements:
pip install -U mlx transformers fire
Then generate text with:
python l3min.py "How tall is K2?"
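The script body is not shown in this preview. As a rough, illustrative sketch of the token-by-token loop a minimal generator like this implements (the names below are placeholders, not l3min.py's actual API):

def generate(model, tokenizer, prompt, max_tokens=256):
    # Greedy decoding: repeatedly pick the most likely next token.
    tokens = tokenizer.encode(prompt)
    for _ in range(max_tokens):
        logits = model(tokens)                 # forward pass over the context
        next_token = int(logits[-1].argmax())  # highest-probability token
        if next_token == tokenizer.eos_token_id:
            break
        tokens.append(next_token)
    return tokenizer.decode(tokens)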
mseri / chat-interface.html
Created October 23, 2024 14:13
Simple HTML chat interface, made with Claude, used to interact with LM Studio, Ollama, or any other OpenAI-compatible server (I am using it with Firefox's new AI panel)
<!doctype html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>LM Studio Chat Interface</title>
    <style>
      body {
        font-family: -apple-system, BlinkMacSystemFont, "Segoe UI",
          Roboto, sans-serif;
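The markup is truncated here, but the page's job is simply to POST to the server's OpenAI-compatible /v1/chat/completions endpoint. The equivalent request in Python, assuming LM Studio's default port 1234 (Ollama serves the same route on 11434) and a placeholder model name:

import requests

resp = requests.post(
    'http://localhost:1234/v1/chat/completions',
    json={
        'model': 'local-model',  # placeholder; use whatever the server has loaded
        'messages': [{'role': 'user', 'content': 'Hello!'}],
    },
)
print(resp.json()['choices'][0]['message']['content'])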
mseri / lms_to_llm.py
Last active November 5, 2024 09:46
Get models from the LM Studio server and prepare them for llm. On macOS the output goes to `Library/Application\ Support/io.datasette.llm/extra-openai-models.yaml`
import requests
import subprocess
import yaml

def get_data_from_api():
    base_url = "http://localhost:1234/v1"
    response = requests.get(base_url + "/models")
    if response.status_code == 200:
        json_data = response.json()
package main

import (
    "encoding/json"
    "fmt"
    "io/ioutil"
    "os"
    "os/user"
    "path/filepath"
    "strings"
mseri / ollama-export.sh
Created October 10, 2024 08:12 — forked from supersonictw/ollama-export.sh
Ollama Model Export Script
#!/bin/bash
# Ollama Model Export Script
# Usage: bash ollama-export.sh vicuna:7b
# License: MIT (https://ncurl.xyz/s/o_o6DVqIR)
# https://gist.github.com/supersonictw/f6cf5e599377132fe5e180b3d495c553

# Interrupt if any error occurred
set -e

# Declare
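For context on what the export script automates: an Ollama model on disk is a JSON manifest plus content-addressed blobs. A sketch of the same lookup in Python, assuming the ~/.ollama/models layout used by current Ollama versions (this layout is not a stable API and may change):

import json
from pathlib import Path

model, tag = 'vicuna', '7b'  # matches the usage example above
root = Path.home() / '.ollama' / 'models'
manifest = root / 'manifests' / 'registry.ollama.ai' / 'library' / model / tag
for layer in json.loads(manifest.read_text())['layers']:
    # digests like 'sha256:abc...' map to blob files named 'sha256-abc...'
    blob = root / 'blobs' / layer['digest'].replace(':', '-')
    print(layer['mediaType'], blob)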
mseri / htm_to_md.sh
Last active September 18, 2024 09:42
Use reader-lm locally with ollama
# A faster but not private way to achieve the above
# is to define a bash function and source it at startup
function html_to_md () {
    if [[ $# -eq 2 ]]; then
        curl "https://r.jina.ai/$1" > "$2".md
        echo "Content saved to \"$2\".md"
    else
        curl "https://r.jina.ai/$@"
    fi
}
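The gist title mentions running reader-lm locally; the snippet above is the remote fallback. A sketch of the local, private variant in Python, assuming the reader-lm model has been pulled into Ollama (`ollama pull reader-lm`):

import requests
import ollama  # pip install ollama

def html_to_md_local(url: str) -> str:
    # reader-lm converts raw HTML to markdown; nothing leaves the machine.
    html = requests.get(url).text
    resp = ollama.chat(model='reader-lm',
                       messages=[{'role': 'user', 'content': html}])
    return resp['message']['content']

print(html_to_md_local('https://example.com'))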
mseri / lwarp-tips.md
Created August 3, 2024 21:41 — forked from DominikPeters/lwarp-tips.md
Some tips for using lwarp for LaTeX to HTML conversion

Replace natbib with biblatex to get citation links

\usepackage[backend=bibtex, style=authoryear-comp, natbib=true, sortcites=false]{biblatex}
\addbibresource{main.bib}

% optional if you want (Smith 1776) instead of (Smith, 1776)
\renewcommand*{\nameyeardelim}{\addspace}

\begin{document}
mseri / chladni.py
Created April 9, 2024 19:51 — forked from profConradi/chladni.py
Chladni Square Plate Normal Modes Simulation
import numpy as np

def energy(z, n, m, L):
    return (np.cos(n * np.pi * np.real(z) / L) * np.cos(m * np.pi * np.imag(z) / L)
            - np.cos(m * np.pi * np.real(z) / L) * np.cos(n * np.pi * np.imag(z) / L))

class ChladniPlate:
    def __init__(self, n, m, L=1, n_particles=10000):
        self.L = L
        self.n_particles = n_particles
        self.n = n
        self.m = m
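The preview stops in the constructor. As a hypothetical continuation (an assumption about how such simulations usually work, not code from the gist), particles are nudged downhill on the vibration amplitude |energy| so they settle on the nodal lines, like sand on a real plate:

    # Hypothetical update step, not from the gist: estimate the gradient of
    # |energy| by central finite differences in the real and imaginary
    # directions, then move each particle (a complex position) downhill.
    def step(self, z, dt=1e-3, eps=1e-5):
        de_dx = (np.abs(energy(z + eps, self.n, self.m, self.L))
                 - np.abs(energy(z - eps, self.n, self.m, self.L))) / (2 * eps)
        de_dy = (np.abs(energy(z + 1j * eps, self.n, self.m, self.L))
                 - np.abs(energy(z - 1j * eps, self.n, self.m, self.L))) / (2 * eps)
        return z - dt * (de_dx + 1j * de_dy)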
mseri / hatched.py
Created March 1, 2022 11:06
Hatched contourf plots in matplotlib
# From https://github.com/matplotlib/matplotlib/issues/2789/#issuecomment-604599060
# Content as in example
# ------------------------------
import matplotlib.pyplot as plt
import numpy as np

# invent some numbers, turning the x and y arrays into simple
# 2d arrays, which make combining them together easier.
x = np.linspace(-3, 5, 150).reshape(1, -1)
y = np.linspace(-3, 5, 120).reshape(-1, 1)
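The preview cuts off before the actual plotting. The key trick is that contourf accepts a hatches keyword, one pattern per contour level; roughly (the field z below is an illustrative choice, not necessarily the one in the gist):

# Combine the 1-D arrays into a 2-D field and draw filled contours whose
# levels are distinguished by hatch patterns as well as color.
z = np.cos(x) + np.sin(y)

fig, ax = plt.subplots()
cs = ax.contourf(x.flatten(), y.flatten(), z,
                 hatches=['-', '/', '\\', '//'],  # one pattern per level
                 cmap='gray', extend='both', alpha=0.5)
fig.colorbar(cs)
plt.show()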