
@leventov
leventov / dialogue.md
Created April 4, 2023 04:12
A dialogue between Roman Leventov and ChatGPT-4 (March 14 version) on value and meaning

Roman: Imagine an agent made of yourself (i.e., GPT-4 model) with some software harness around. This software harness models plan execution by keeping a hierarchical tree of goals and tasks and supplying it in the beginning of the GPT-4 context, where the GPT-4 model is repeatedly asked to either create or update a plan for achieving a specific goal or task in the hierarchy, or drill down and execute a sub-task, or assess whether a task or goal is achieved and move to the next step in the higher-level task.

With some periodicity, when the highest-level task in the tree is completed, GPT-4 is prompted with the question "What should I do next?", to create a new highest-level goal or task.

How would you answer this question if you were a part of such an agent?

GPT: As a part of such an agent, when prompted with the question "What should I do next?" after completing the highest-level task in the tree, I would analyze the current context, available resources, and any constraints or priorities that might
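The harness Roman describes (a goal/task tree supplied at the start of the context, with the model repeatedly asked to execute, drill down, or assess) can be sketched in miniature. Everything below is a hypothetical illustration: `Task`, `render_tree`, `run`, and `call_model` are assumed names, and `call_model` is a stub standing in for a real GPT-4 API call.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    description: str
    subtasks: list["Task"] = field(default_factory=list)
    done: bool = False

def call_model(prompt: str) -> str:
    # Stub standing in for the actual GPT-4 call the harness would make.
    return "DONE"

def render_tree(task: Task, depth: int = 0) -> str:
    # The goal/task tree supplied at the beginning of the model's context.
    mark = "x" if task.done else " "
    lines = [f"{'  ' * depth}[{mark}] {task.description}"]
    for sub in task.subtasks:
        lines.append(render_tree(sub, depth + 1))
    return "\n".join(lines)

def run(task: Task) -> None:
    while not task.done:
        pending = next((s for s in task.subtasks if not s.done), None)
        if pending is not None:
            run(pending)  # drill down and execute the first unfinished sub-task
            continue
        # Ask the model to execute or assess the current task, with the
        # whole tree in its context.
        reply = call_model(render_tree(task) + f"\nExecute or assess: {task.description}")
        task.done = reply.strip() == "DONE"
    # When the highest-level task completes, the harness would prompt
    # "What should I do next?" to obtain a new top-level goal.
```

The sketch only captures the control flow: real harnesses of this kind also persist the tree between calls and parse structured plan updates out of the model's replies.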

import (
    "log"
    "math/rand"
    "testing"

    "github.com/montanaflynn/stats"
    "gonum.org/v1/gonum/floats"
)

func TestMidCardinalitySimulation(t *testing.T) {

Keybase proof

I hereby claim:

  • I am leventov on github.
  • I am leventov (https://keybase.io/leventov) on keybase.
  • I have a public key ASDzs8Wl-lVqJjYp08P567ms5Hx5uQ9qFvxqOwkyfiQ3LQo

To claim this, I am signing this object:

@leventov
leventov / hash_table.py
Last active December 6, 2015 05:45
Proposed optimization for shift removal procedure in hash tables with linear probing. Inspired by https://github.com/apple/swift/blob/8d9ef80304d7b36e13619ea50e6e76f3ec9221ba/stdlib/public/core/HashedCollections.swift.gyb#L3221-L3279
# (Pseudocode)
def shift_distance(chain_end, base):
    return (chain_end - base) & (capacity - 1)

def next(index):
    return (index + 1) & (capacity - 1)

def prev(index):
    return (index - 1) & (capacity - 1)
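The preview above shows only the index helpers. For context, here is a minimal, self-contained sketch (an assumption for illustration, not the gist's actual code) of the technique the description refers to: shift removal, i.e. backward-shift deletion, in a linear-probing hash table with a power-of-two capacity, built on the same `shift_distance` trick.

```python
capacity = 8  # power of two, so `& (capacity - 1)` wraps probe indices

EMPTY = object()

def shift_distance(chain_end, base):
    # Cyclic distance from `base` forward to `chain_end`.
    return (chain_end - base) & (capacity - 1)

def next_index(index):
    return (index + 1) & (capacity - 1)

class LinearProbingSet:
    """Open-addressing set; assumes it is never filled to capacity."""

    def __init__(self):
        self.slots = [EMPTY] * capacity

    def _ideal(self, key):
        return hash(key) & (capacity - 1)

    def add(self, key):
        i = self._ideal(key)
        while self.slots[i] is not EMPTY:
            if self.slots[i] == key:
                return
            i = next_index(i)
        self.slots[i] = key

    def __contains__(self, key):
        i = self._ideal(key)
        while self.slots[i] is not EMPTY:
            if self.slots[i] == key:
                return True
            i = next_index(i)
        return False

    def remove(self, key):
        i = self._ideal(key)
        while self.slots[i] is not EMPTY and self.slots[i] != key:
            i = next_index(i)
        if self.slots[i] is EMPTY:
            return  # key absent
        hole, j = i, next_index(i)
        # Backward-shift: pull later chain entries into the hole instead
        # of leaving a tombstone.
        while self.slots[j] is not EMPTY:
            # The entry at j may fill the hole only if, walking backward
            # from j, the hole is reached no later than its ideal slot.
            if shift_distance(j, self._ideal(self.slots[j])) >= shift_distance(j, hole):
                self.slots[hole] = self.slots[j]
                hole = j
            j = next_index(j)
        self.slots[hole] = EMPTY
```

Backward-shifting avoids tombstones entirely: on removal, subsequent entries of the probe chain are pulled back into the hole whenever that does not move them before their ideal slot, so lookups never have to skip deletion markers.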

package tests;
import java.util.*;
import java.util.Map.Entry;
import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Fork;
import org.openjdk.jmh.annotations.Measurement;
@leventov
leventov / Hashing.java
Last active August 29, 2015 14:25
Zero-allocation-hashing benchmarks
package tests;
import com.google.common.hash.HashFunction;
import net.openhft.chronicle.hash.hashing.Accesses;
import net.openhft.chronicle.hash.hashing.Hasher;
import net.openhft.hashing.LongHashFunction;
import org.openjdk.jmh.annotations.*;
import java.util.concurrent.TimeUnit;

package spoon.test;
import spoon.reflect.declaration.*;
import java.util.ArrayList;
import java.util.IntSummaryStatistics;
import java.util.TreeSet;
import java.util.function.Consumer;
import java.util.function.ToIntFunction;
@leventov
leventov / analyze.py
Created September 4, 2014 23:53
See http://habrahabr.ru/post/235689/. Usage instructions are in the first comment.
import json

MIN_STARS = 700

with open('{}.json'.format(MIN_STARS)) as json_file:
    repos = json.load(json_file)

repos_by_language = {}
by_top = {}
for repo in repos:
@leventov
leventov / memory.txt
Created July 21, 2014 20:00
Results of "time vs memory" benchmark: https://github.com/OpenHFT/hftc/tree/25511ccb8dd1afd0c11cc2233f24bdd98ffdb108/benchmarks/time-vs-memory. Information about the runsite: Intel Sandy Bridge (2011), L1: 64 KB, L2: 256 KB, L3: 20 MB, 128 GB of RAM. Java8u5.
loadLevel  collections  overuseFactor
1          hftc          7.226
1          hftc         10.791
1          hftc          7.470
1          hftc         11.172
1          hftc          7.748
1          hftc         11.576
1          hftc          8.039
1          hftc         11.993
1          hftc          8.339

package tests;
import org.openjdk.jmh.annotations.*;
import java.util.*;
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.TimeUnit;
@BenchmarkMode(Mode.AverageTime)
@Measurement(time = 100, timeUnit = TimeUnit.MILLISECONDS)