

@adtac
adtac / Dockerfile
Last active May 28, 2024 01:38
#!/usr/bin/env docker run
#!/usr/bin/env -S bash -c "docker run -p 8080:8080 -it --rm \$(docker build --progress plain -f \$0 . 2>&1 | tee /dev/stderr | grep -oP 'sha256:[0-9a-f]*')"
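# The line above makes this Dockerfile self-executing: to Docker it is just a
# comment, but `env -S` turns it into a `bash -c` one-liner that builds this
# very file (`docker build -f $0 .`), greps the resulting image ID (the sha256
# echoed in the build log via `tee /dev/stderr`) out of the output, and hands
# it to `docker run` with port 8080 published.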
# syntax = docker/dockerfile:1.4.0
FROM node:20
WORKDIR /root
RUN npm install sqlite3
@veekaybee
veekaybee / normcore-llm.md
Last active July 6, 2024 12:01
Normcore LLM Reads

Anti-hype LLM reading list

Goals: Add links that are reasonable and good explanations of how stuff works. No hype and no vendor content if possible. Practical first-hand accounts of models in prod eagerly sought.

Foundational Concepts


Pre-Transformer Models

"Don Knuth Plays with ChatGPT" but with ChatGPT-4

1. Tell me what Donald Knuth says to Stephen Wolfram about chatGPT.

I'm sorry for any confusion, but as an AI, I can't provide real-time updates or specific conversations between individuals beyond my last training cut-off in September 2021. I don't have access to the internet or any databases for real-time information or private communications. Therefore, I can't give you the details of a conversation between Donald Knuth and Stephen Wolfram about ChatGPT, if such a conversation has taken place after my training data.

However, it's worth noting that both Donald Knuth and Stephen Wolfram are significant figures in the fields of computer science and computational mathematics. Knuth is widely recognized for his multi-volume work "The Art of Computer Programming" and for creating the TeX computer typesetting system, while Wolfram is known for his work in developing Mathematica and Wolfram|Alpha, and for his contributions to cellular automata theory, outl

Training open-source LLMs on ChatGPT output is a really bad idea.

Everyone is now racing to create open-source alternatives to compete with GPT-3.5/GPT-4. A common shortcut used by some teams to bootstrap their effort is to fine-tune their model on ChatGPT output. I used to think it was a good idea and totally fair play to do this. Actually, I still think it’s fair play. OpenAI effectively distilled the entire web into its models. They say themselves that they are (mostly) using publicly accessible information. So distilling their model is, in effect, distilling the public open web, so, small Terms of Service details aside, I don’t see major ethical problems with that. Right? Well, it’s not entirely true, and I realize now that, even when ignoring the ethical considerations, using their output is a really bad idea.

First of all, from a purely technical point of view, as @yoavgo explains beautifully in his recent post, there is no way to align LLMs correctly without the RLHF component. I encourag

@rain-1
rain-1 / LLM.md
Last active July 3, 2024 15:04
LLM Introduction: Learn Language Models

Purpose

Bootstrap knowledge of LLMs ASAP, with a bias/focus toward GPT.

Avoid being a link dump. Try to provide only valuable, well-tuned information.

Prelude

Neural network links before starting with transformers.

@shawwn
shawwn / What happens when you allocate a JAX tensor on a TPU.md
Last active April 15, 2023 04:11
JAX C++ stack trace walkthrough for TpuExecutor_Allocate
@nitred
nitred / optimal_mtu.md
Last active July 4, 2024 04:10
Wireguard Optimal MTU

About

  • I faced bandwidth issues between a WG Peer and a WG Server. Download bandwidth from the WG Server to the WG Peer was significantly reduced, and upload bandwidth was practically non-existent.
  • A few Reddit posts suggested that the MTU needs to be chosen correctly, so I wrote a script to find an optimal MTU.
  • Ideally I would have tested all possible MTU combinations for both the WG Server and the WG Peer, but for simplicity I chose to keep the WG Server at its original MTU of 1420 and tried every MTU from 1280 to 1500 on the WG Peer.

Testing

  • On the WG server, I started an iperf3 server
  • On the WG peer, I wrote a script that does the following for each candidate MTU (a rough sketch of such a sweep is given after this list):
    • wg-quick down wg0
    • Edit the MTU in the /etc/wireguard/wg0.conf file
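
For concreteness, here is a rough sketch (in Go, since the original script is not shown in this excerpt) of the sweep described above. It is not nitred's actual script: the iperf3 server address inside the tunnel, the config path, and the assumption that wg0.conf already contains an explicit MTU line are all placeholders.

// mtu_sweep.go — a rough sketch of the MTU sweep described above, not the
// gist's actual script. Assumptions: /etc/wireguard/wg0.conf already contains
// an explicit "MTU = ..." line, an iperf3 server is reachable through the
// tunnel at serverAddr, and this runs as root on the WG Peer.
package main

import (
    "fmt"
    "os"
    "os/exec"
    "regexp"
)

const (
    confPath   = "/etc/wireguard/wg0.conf" // peer-side WireGuard config (assumed)
    serverAddr = "10.0.0.1"                // iperf3 server inside the tunnel (placeholder)
)

var mtuLine = regexp.MustCompile(`(?m)^MTU\s*=.*$`)

// setMTU rewrites the MTU line in the peer's config in place.
func setMTU(mtu int) error {
    conf, err := os.ReadFile(confPath)
    if err != nil {
        return err
    }
    conf = mtuLine.ReplaceAll(conf, []byte(fmt.Sprintf("MTU = %d", mtu)))
    return os.WriteFile(confPath, conf, 0o600)
}

// run executes a command, streaming its output, and returns any error.
func run(name string, args ...string) error {
    cmd := exec.Command(name, args...)
    cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    return cmd.Run()
}

func main() {
    for mtu := 1280; mtu <= 1500; mtu++ {
        run("wg-quick", "down", "wg0") // may fail on the first pass if wg0 is already down
        if err := setMTU(mtu); err != nil {
            fmt.Fprintln(os.Stderr, "editing config:", err)
            return
        }
        if err := run("wg-quick", "up", "wg0"); err != nil {
            fmt.Fprintln(os.Stderr, "wg-quick up:", err)
            continue
        }
        fmt.Printf("=== MTU %d ===\n", mtu)
        run("iperf3", "-c", serverAddr, "-R", "-t", "5") // download: server -> peer
        run("iperf3", "-c", serverAddr, "-t", "5")       // upload: peer -> server
    }
}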
@josharian
josharian / iossize.go
Last active February 19, 2023 14:21
calculate dirty data pages for a Mach-O binary
// Command iossize predicts the impact
// on memory usage of changes to the binary.
//
// This implementation uses dyldinfo to determine
// exactly how much memory the dynamic linker will dirty
// as part of launching the executable.
//
// It is not perfect. There are multiple sources of dirty pages:
//
// 1. dynamic loader rebase/relocs due to our code

// This can grow a Robin Hood linear probing hash table near word-at-a-time memcpy speeds. If you're confused why I use 'keys'
// to describe the hash values, it's because my favorite perspective on Robin Hood (which I learned from Paul Khuong)
// is that it's just a sorted gap array which is MSB bucketed and insertion sorted per chain:
// https://pvk.ca/Blog/2019/09/29/a-couple-of-probabilistic-worst-case-bounds-for-robin-hood-linear-probing/
// The more widely known "max displacement" picture of Robin Hood hashing also has strengths since the max displacement
// can be stored very compactly. You can see a micro-optimized example of that here for small tables where the max displacement
// can fit in 4 bits: Sub-nanosecond Searches Using Vector Instructions, https://www.youtube.com/watch?v=paxIkKBzqBU
void grow(Table *table) {
    u64 exp = 64 - table->shift;
    // We grow the table downward in place by a factor of 2 (not counting the overflow area at table->end).
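
To make the displacement picture concrete, here is a small, self-contained Robin Hood insertion sketch. It is written in Go and is purely illustrative; it is not the C table from the excerpt above. Each entry records its probe distance from its home slot, and an incoming entry swaps with any resident entry that sits closer to its own home ("steal from the rich"), which is what keeps the maximum displacement small enough to store compactly.

package main

import "fmt"

const tableSize = 16 // power of two, kept tiny for illustration

type slot struct {
    key  uint64
    used bool
    dist int // probe distance from the key's home slot
}

// insert places key using the Robin Hood rule: whenever the probing entry is
// further from its home than the resident entry, the two swap and the evicted
// entry continues probing. Assumes the table never fills up.
func insert(table []slot, key uint64) {
    i := int(key % tableSize) // home slot
    dist := 0
    for {
        if !table[i].used {
            table[i] = slot{key: key, used: true, dist: dist}
            return
        }
        if table[i].dist < dist {
            // Steal from the rich: swap keys and distances, keep inserting the evicted one.
            table[i].key, key = key, table[i].key
            table[i].dist, dist = dist, table[i].dist
        }
        i = (i + 1) % tableSize
        dist++
    }
}

func main() {
    table := make([]slot, tableSize)
    for _, k := range []uint64{3, 19, 35, 7, 51} { // 3, 19, 35, 51 all hash to slot 3
        insert(table, k)
    }
    for i, s := range table {
        if s.used {
            fmt.Printf("slot %2d: key %2d (displacement %d)\n", i, s.key, s.dist)
        }
    }
}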
@rorycl
rorycl / ed25519genandsave.go
Created June 5, 2021 16:14
Generate ed25519 keys in PEM format using Go
// RCL 05 June 2021
/*
verify with `openssl pkey -in <privatekey>` or `openssl pkey -in <privatekey> -pubout`
the latter should match the publickey
*/
package main
import (
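    "crypto/ed25519"
    "crypto/rand"
    "crypto/x509"
    "encoding/pem"
    "log"
    "os"
)

// What follows is a hedged sketch of the rest of the program, reconstructed
// from the gist's description ("Generate ed25519 keys in PEM format using Go")
// rather than copied from it: generate a key pair, marshal the private key as
// PKCS#8 and the public key as PKIX, and write both out as PEM blocks that the
// `openssl pkey` commands above can read back.
func main() {
    pub, priv, err := ed25519.GenerateKey(rand.Reader)
    if err != nil {
        log.Fatal(err)
    }

    privDER, err := x509.MarshalPKCS8PrivateKey(priv)
    if err != nil {
        log.Fatal(err)
    }
    pubDER, err := x509.MarshalPKIXPublicKey(pub)
    if err != nil {
        log.Fatal(err)
    }

    // Write to stdout; saving to files (and their names) is left to the reader.
    pem.Encode(os.Stdout, &pem.Block{Type: "PRIVATE KEY", Bytes: privDER})
    pem.Encode(os.Stdout, &pem.Block{Type: "PUBLIC KEY", Bytes: pubDER})
}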