Minoru Mizutani (mmizutani)

  • Tokyo
@veekaybee
veekaybee / normcore-llm.md
Last active April 23, 2024 16:03
Normcore LLM Reads

Anti-hype LLM reading list

Goals: add links that give reasonable, clear explanations of how stuff works. No hype and, where possible, no vendor content. Practical first-hand accounts of models in prod are eagerly sought.

Foundational Concepts

Pre-Transformer Models

@jvelezmagic
jvelezmagic / main.py
Last active March 25, 2024 09:46
QA Chatbot streaming with source documents example using FastAPI, LangChain Expression Language, OpenAI, and Chroma.
"""QA Chatbot streaming using FastAPI, LangChain Expression Language , OpenAI, and Chroma.
Features
--------
- Persistent Chat Memory:
Stores chat history in a local file.
- Persistent Vector Store:
Stores document embeddings in a local vector store.
- Standalone Question Generation:
Rephrases follow-up questions to standalone questions in their original language.
@doraTeX
doraTeX / ocr.sh
Last active April 19, 2024 13:04
A shell script to perform OCR on images/PDFs using macOS built-in OCR engine
#!/bin/bash
SCRIPTNAME=$(basename "$0")
# Resolve the given file or directory to an absolute path.
function realpath () {
  f=$@;
  if [ -d "$f" ]; then
    base="";
    dir="$f";
  else
    base="/$(basename "$f")";
@rain-1
rain-1 / llama-home.md
Last active February 22, 2024 05:52
How to run Llama 13B with a 6GB graphics card

This worked on 14/May/23. The instructions will probably require updating in the future.

LLaMA is a text-prediction model similar to GPT-2, or to the version of GPT-3 that has not been fine-tuned yet. It is also possible to run fine-tuned versions (like Alpaca or Vicuna) with this, I think; those versions are more focused on answering questions.

Note: I have been told that this does not support multiple GPUs. It can only use a single GPU.

It is now possible to run LLaMA 13B with a 6GB graphics card (e.g. an RTX 2060), thanks to the amazing work on llama.cpp. The latest change adds CUDA/cuBLAS support, which lets you pick an arbitrary number of transformer layers to run on the GPU. This is perfect for low VRAM.

  • Clone llama.cpp from git; I am on commit 08737ef720f0510c7ec2aa84d7f70c691073c35d. A hedged build-and-run sketch follows below.
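The following sketch fills in the steps around that commit, assuming a 4-bit quantized 13B GGML model and the LLAMA_CUBLAS Makefile flag llama.cpp used at the time; the layer count that fits in 6 GB of VRAM is a guess to tune, and the model path is an assumption.

# Build llama.cpp with cuBLAS support (Makefile flag as of mid-2023 builds).
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git checkout 08737ef720f0510c7ec2aa84d7f70c691073c35d
make LLAMA_CUBLAS=1

# Run a 4-bit quantized 13B model, offloading some transformer layers to the GPU.
# Raise --n-gpu-layers until the 6 GB of VRAM is nearly full; the rest stays on the CPU.
./main -m ./models/13B/ggml-model-q4_0.bin --n-gpu-layers 18 -p "Hello, my name is"

Each layer offloaded to the GPU speeds up generation a little, so the usual approach is to raise the count until the card's memory is almost exhausted.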
@aspick
aspick / ruby_wasm.md
Last active December 20, 2022 03:27
Building a single-binary Ruby application with ruby.wasm

requirements

steps

# follow `ruby.wasm` tutorial
curl -LO https://github.com/ruby/ruby.wasm/releases/latest/download/ruby-head-wasm32-unknown-wasi-full.tar.gz
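A hedged sketch of the later steps in the ruby.wasm tutorial, assuming the wasi-vfs CLI and a WASI runtime such as wasmtime are installed; the ./src directory, the output name, and the script path are assumptions for illustration.

# Unpack the prebuilt Ruby-for-WASI distribution downloaded above.
tar xfz ruby-head-wasm32-unknown-wasi-full.tar.gz

# Bundle the interpreter plus your application source into one .wasm binary.
# /src and /usr are virtual guest paths that wasi-vfs maps into the module.
wasi-vfs pack ruby-head-wasm32-unknown-wasi-full/usr/local/bin/ruby \
  --mapdir /src::./src \
  --mapdir /usr::./ruby-head-wasm32-unknown-wasi-full/usr \
  -o my-ruby-app.wasm

# Run the resulting single binary with a WASI runtime.
wasmtime my-ruby-app.wasm -- /src/my_app.rb

wasi-vfs embeds the mapped directories into the module itself, which is what makes the result a single distributable binary.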
@yusukebe
yusukebe / client.ts
Last active December 10, 2022 00:52
import { Client } from '../../../src/middleware/client'
import type { AppType } from './server'
const client = new Client<AppType>('http://127.0.0.1:8787/api')
const res = await client.json('/posts', {
  id: 123,
  title: 'hello',
})
@dgcoffman
dgcoffman / require-named-effect.js
Created May 23, 2022 16:16
libs/eslint-rules/require-named-effect.js
const isUseEffect = (node) => node.callee.name === 'useEffect';
const argumentIsArrowFunction = (node) => node.arguments[0].type === 'ArrowFunctionExpression';
const effectBodyIsSingleFunction = (node) => {
  const { body } = node.arguments[0];
  // It's a single unwrapped function call:
  // `useEffect(() => theNameOfAFunction(), []);`
  if (body.type === 'CallExpression') {
@mislav
mislav / config.json
Last active January 18, 2024 10:48
Experiment in using GitHub CLI to authenticate fetching docker images from GitHub Package Registry
// ~/.docker/config.json
{
  "credsStore": "desktop",
  "credHelpers": {
    "docker.pkg.github.com": "gh",
    "ghcr.io": "gh"
  }
}
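With that config, Docker looks for an executable named docker-credential-gh on the PATH whenever it needs credentials for those registries. Below is a minimal sketch of such a helper, assuming the GitHub CLI is already authenticated with a token that has the read:packages scope; this script is an illustration, not necessarily the one the experiment used.

#!/bin/bash
# docker-credential-gh: answer Docker's credential-helper protocol using gh.
set -euo pipefail

case "${1:-}" in
  get)
    # Docker passes the registry host on stdin and expects JSON back.
    read -r registry_host
    token=$(gh auth token)
    # For GitHub's registries any username works when a token is supplied.
    printf '{"ServerURL":"%s","Username":"x-access-token","Secret":"%s"}\n' \
      "$registry_host" "$token"
    ;;
  store|erase)
    # Tokens live in gh's own storage; nothing to persist or delete here.
    cat > /dev/null
    ;;
  *)
    echo "usage: docker-credential-gh <get|store|erase>" >&2
    exit 1
    ;;
esac

Placed on the PATH and marked executable, the helper lets docker pull against ghcr.io authenticate through gh without a separate docker login.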
@greymd
greymd / sudo新一.md
Last active April 25, 2024 06:34
sudo新一

 I am sudo Shin'ichi, a high-school shell one-liner artist. I went to an amusement park with my childhood friend and classmate more Ran and witnessed a man in black at the scene of a suspicious rm -rf /. I was so absorbed in watching the terminal that I never noticed the other man's --no-preserve-root option creeping up behind me. He forced poison down my throat, and when I woke up... I had been dropped from the OS's preinstalled packages!

"If they find out that sudo is still on the $PATH, they'll come after my life again, and the other commands will be in danger too."

 On Dr. Ueda's advice I decided to hide my identity. When which asked for my name, I blurted out "gnuplot", and to get a lead on them I moved into the $HOME of Ran, whose father works as a shell one-liner artist. But this old man... turned out to be a hopeless shell artist, and unable to just watch, I stepped in for him and, with my knack for privilege escalation, solved one tough task after another. Thanks to that, the old man is now an engineer whose name everyone knows, while I am back to being a plaything of the shell-art bot. My classmate convertojichattextimg mistook me for a drawing command and roped me into founding the Junior One-Liner Drawing Club.

 Now let me introduce the gadgets the Doctor built for me. First, the wristwatch-type tranquilizer kill: line up the sight on the lid and press Enter, and a tranquilizer signal shoots out that puts a process to sleep in an instant. Next, the bow-tie-type banner: adjust the dial on the back and it can print messages of any size to standard output. And the ultimate item, the fork-power-boosting shoes: they stimulate your legs with electricity and magnetism, and with :(){ :|:& };: the process table…


@fernandoaleman
fernandoaleman / mysql2-m1.md
Last active April 22, 2024 23:15
How to install the mysql2 gem on an M1 Mac

Problem

Installing the mysql2 gem fails on Apple Silicon (M1, M2, or M3) Macs running macOS Sonoma.

Solution

Make sure mysql-client, openssl, and zstd are installed on your Mac via Homebrew.

Replace mysql-client with whichever MySQL package you are using.
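Under those assumptions, the usual fix looks something like the sketch below; the build flags are the ones commonly used for this error, and your paths or gem version may differ.

# Install the client libraries the mysql2 native extension links against.
brew install mysql-client openssl zstd

# Point the gem's native build at the Homebrew-installed libraries.
gem install mysql2 -- \
  --with-mysql-dir="$(brew --prefix mysql-client)" \
  --with-ldflags="-L$(brew --prefix zstd)/lib -L$(brew --prefix openssl)/lib"

Bundler users can persist the same flags with bundle config build.mysql2 so that bundle install picks them up.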