@FeepingCreature
FeepingCreature / git-centralize.py
Created July 4, 2024 10:30
git centralize tool, written by Claude 3.5 Sonnet
#!/usr/bin/env python3
# Warning: Undertested! May corrupt your git repos!
import os
import sys
import subprocess
import re
import argparse
import shutil
@FeepingCreature
FeepingCreature / event_queue_hang.md
Last active June 30, 2024 09:46
Investigating an Event Queue Hang: The Code Works Correctly

Investigating an Event Queue Hang

We had a fun bug last week that almost doubles as a logic puzzle. It makes perfect sense in hindsight, but as I was staring at it, it seemed every part of it was nailed down tight - until I spotted the obvious, natural assumption that had been violated from the start.

A bit of background. We're working with microservices propagating state with event streams. For a typical stream, most events will concern unrelated objects, so we parallelize them: each incoming event is assigned a key, and we accept parallel events so long as their keys don't collide with any events currently in progress. Furthermore, because we track our position in the input stream as a single number, we can't retire events as soon
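A minimal sketch of the admission rule described above, in Python (names are invented for illustration; the real service is not shown here): events run in parallel unless their key collides with one already in flight.

```python
class KeyedAdmission:
    """Admit events in parallel unless their key collides with an
    event currently in progress. Illustrative sketch only."""

    def __init__(self):
        self.in_flight = set()

    def try_admit(self, key):
        # An event may start only if no event with the same key is running.
        if key in self.in_flight:
            return False
        self.in_flight.add(key)
        return True

    def complete(self, key):
        self.in_flight.discard(key)

queue = KeyedAdmission()
assert queue.try_admit("user:1")      # first event for this key runs
assert queue.try_admit("user:2")      # unrelated key runs in parallel
assert not queue.try_admit("user:1")  # same key must wait its turn
queue.complete("user:1")
assert queue.try_admit("user:1")      # now it may proceed
```

Because the stream position is a single number, completed events additionally have to be retired in input order, which this sketch omits.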

Me

Hi! Programming challenge. I'm gonna paste the convo from IRC, names anonymized. Same challenge goes to you. :)
[16:31] <foo> do you wanna hear some nerd shit about my event queue impl and why it started hanging forever once we started using google rpc
[16:31] <baz> Shore
[16:31] * foo spent three days on this bug
[16:31] <bar> Distributed stuff?
[16:31] <foo> okay so we have an eventstoredb client at work that used to use the tcp api but we're switching to the (cool, new) eventstore db
[16:31] <foo> er\

#!/usr/bin/env python3
# Helper script that classifies the structure of a JSON object.
# Useful for getting an overview of novel JSON data.
# Created largely by Claude 3 Opus.
import json
import sys
class JSONType:
    def is_similar(self, other):
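Based on the description, the script reduces a JSON value to its structural shape. A rough sketch of that idea (my own simplification, not the gist's actual code):

```python
import json

def classify(value):
    """Reduce a JSON value to a structural signature:
    dicts keep their keys, lists keep a sample element shape,
    leaves become their type name."""
    if isinstance(value, dict):
        return {key: classify(child) for key, child in value.items()}
    if isinstance(value, list):
        # Summarize by the first element; a fuller version would merge
        # similar element shapes, as the gist's is_similar hints.
        return [classify(child) for child in value[:1]]
    return type(value).__name__

data = json.loads('{"name": "x", "tags": ["a", "b"], "count": 3}')
print(classify(data))  # {'name': 'str', 'tags': ['str'], 'count': 'int'}
```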
@FeepingCreature
FeepingCreature / heapsizefactor.md
Last active January 11, 2024 13:29
The effect of heapSizeFactor on CPU and memory usage

The D GC

The D GC is tuned by default to trade memory for performance. This can be seen clearly in the default heap size target of 2.0, i.e. the GC will prefer to just allocate more memory until less than half the heap memory is alive. But with long-running user-triggered processes, memory can be more at a premium than CPU is, and larger heaps also mean slower collection runs. Can we tweak GC parameters to make D programs use less memory? More importantly, what is the effect of doing so?
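To make the trade-off concrete: in a simplified model, the steady-state heap target is roughly live memory times the factor (the real collector's behavior is more nuanced than this arithmetic).

```python
def heap_target(live_mb, heap_size_factor):
    """Simplified model of the D GC's heap sizing: the heap grows
    until live data is 1/heap_size_factor of the total."""
    return live_mb * heap_size_factor

# With 100 MB of live data:
assert heap_target(100, 2.0) == 200.0  # default: up to half the heap may be garbage
assert heap_target(100, 1.2) == 120.0  # tighter heap, but more frequent collections
```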

Adjustable parameters

auto service1 = ...;
assert(service1 !is null);
Service[] services = [service1, service2];
assert(services[0] !is null); /// this fails??
module test;
import core.memory : GC;
import core.sync.semaphore;
import core.sys.linux.sys.mman : MADV_DONTNEED, madvise;
import core.thread;
import std;
enum keys = [
"foo", "bar", "baz", "whee",
module test;
import core.memory : GC;
import core.sync.semaphore;
import core.thread;
import std;
static string[] keys = [
"foo", "bar", "baz", "whee",
"foo1", "foo2", "foo3", "foo4", "foo5", "foo6", "foo7", "foo8",
#include <stdio.h>
#include <sys/mman.h>

int main() {
    unsigned char *target = mmap(NULL, 1024, PROT_READ | PROT_WRITE, MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
    Buffer buffer = {target, 1024, 0};
    char *param = "Hello World\n";
    void (*fn)(char*) = (void(*)(char*)) printf;
    append_x86_64_push_reg(&buffer, X86_64_RBP);
    append_x86_64_set_reg_reg(&buffer, X86_64_RBP, X86_64_RSP);
    append_x86_64_set_reg_imm(&buffer, X86_64_RDI, (size_t) param);
    append_x86_64_set_reg_imm(&buffer, X86_64_RAX, (size_t) fn);
    append_x86_64_call_reg(&buffer, X86_64_RAX);
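The `append_x86_64_*` helpers presumably emit the raw instruction encodings into the buffer. A sketch of the byte sequences involved, per the x86-64 ISA (helper names mirror the C code; the `Buffer` bookkeeping is omitted):

```python
import struct

# x86-64 register numbers used in the snippet above.
RBP, RSP, RDI, RAX = 5, 4, 7, 0

def push_reg(reg):
    # 50+rd: push r64
    return bytes([0x50 + reg])

def set_reg_reg(dst, src):
    # REX.W + 89 /r: mov r/m64, r64
    return bytes([0x48, 0x89, 0xC0 | (src << 3) | dst])

def set_reg_imm(reg, imm):
    # REX.W + B8+rd io: mov r64, imm64
    return bytes([0x48, 0xB8 + reg]) + struct.pack("<Q", imm)

def call_reg(reg):
    # FF /2: call r/m64
    return bytes([0xFF, 0xD0 | reg])

# The prologue the C code emits: push rbp; mov rbp, rsp
assert push_reg(RBP) + set_reg_reg(RBP, RSP) == bytes([0x55, 0x48, 0x89, 0xE5])
# The indirect call: call rax
assert call_reg(RAX) == bytes([0xFF, 0xD0])
```

Note that to actually execute the generated code, the mapping must also be made executable (e.g. via `PROT_EXEC` or a later `mprotect`), which the truncated preview above does not show.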