Processing power is a foundation of human development.
- build the fastest processor ever made.
- ship chips more powerful than any GPU ever shipped.
- become the world's leading provider of raw compute.
Fundamental Theorem of Optimization:
The fewer things you do, the faster it runs. 🤯
Interaction Combinators are proven optimal in theory:
ANY computation, on ANY other computer, can be done in the same number of steps or fewer on Interaction Combinators.
- Lafont 1997
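For intuition on what an interaction-combinator step is, here is a deliberately tiny Python sketch of two of Lafont's rules, modeled on trees rather than full nets (an illustration only, not HVM's actual representation): an eraser meeting a constructor spawns erasers on its two auxiliary ports, and an eraser meeting an eraser annihilates. Every rewrite is local and constant-time, which is what makes step counting meaningful:

```python
# Toy fragment of an interaction net: erasers (ERA) against binary
# constructors (CON). Illustrative sketch only, not HVM's memory format.
ERA = ("ERA",)

def con(left, right):
    return ("CON", left, right)

def erase(tree):
    """Erase a tree with an eraser, counting local rewrites.

    Simplified Lafont-style rules:
      ERA >< CON(a, b)  ->  ERA >< a, ERA >< b   (commutation)
      ERA >< ERA        ->  nothing              (annihilation)
    """
    steps = 0
    active = [tree]          # pending active pairs: an eraser vs. this node
    while active:
        node = active.pop()
        steps += 1           # each rewrite is local and constant-time
        if node[0] == "CON":
            active.append(node[1])
            active.append(node[2])
    return steps

# Erasing this 5-node tree takes exactly one rewrite per node.
print(erase(con(con(ERA, ERA), ERA)))  # 5
```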
But can it be fast in practice?
- Optlam-JS (2015): 3 MIPS
- FmNet-C (2019): 35 MIPS
- HVM1-RS (2022): 3,809 MIPS
- HVM2-CUDA (2024): 76,118 MIPS
That's a 25,372x speedup in under a decade.
This trend isn't stopping anytime soon.
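Spelled out, the trend works out to roughly a tripling every year. A quick sanity check of the two endpoints (MIPS figures and years taken from the list above):

```python
# Implementation speeds in MIPS, taken from the list above.
optlam_2015 = 3          # Optlam-JS
hvm2_2024 = 76_118       # HVM2-CUDA

speedup = hvm2_2024 / optlam_2015   # overall speedup factor
years = 2024 - 2015                 # 9 years between the endpoints
yearly = speedup ** (1 / years)     # implied yearly growth factor

print(int(speedup))      # 25372
print(round(yearly, 2))  # 3.09
```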
Low-Order Computations
- utter garbage (C: X seconds, HVM: Y seconds)
- excuse: no codegen, obviously
Higher-Order Computations
- very good (C: X seconds, HVM: Y seconds)
- hater.bend failed to make OCaml faster
- Bend's core is identical to Rust's: the Affine λ-Calculus
- We could actually apply the same optimizations and generate C-like code!
- Shall we? No. We're not here to create another Rust.
- We're actually interested in these higher-order programs.
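To make the shared core concrete: "affine" means every bound variable is used at most once, which is exactly Rust's move discipline. Here is a small Python sketch of an affine-usage check over a toy λ-term encoding (the encoding is ours for illustration, not Bend's actual AST):

```python
# Toy λ-term encoding (ours, not Bend's actual AST):
#   ("var", name) | ("lam", name, body) | ("app", fun, arg)

def uses(term, name):
    """Count free occurrences of a variable in a term."""
    tag = term[0]
    if tag == "var":
        return 1 if term[1] == name else 0
    if tag == "lam":
        return 0 if term[1] == name else uses(term[2], name)
    return uses(term[1], name) + uses(term[2], name)

def is_affine(term):
    """A term is affine if every bound variable is used at most once."""
    tag = term[0]
    if tag == "var":
        return True
    if tag == "lam":
        return uses(term[2], term[1]) <= 1 and is_affine(term[2])
    return is_affine(term[1]) and is_affine(term[2])

identity = ("lam", "x", ("var", "x"))                         # λx. x
dup = ("lam", "x", ("app", ("var", "x"), ("var", "x")))       # λx. x x

print(is_affine(identity))  # True  -- like a Rust move
print(is_affine(dup))       # False -- like using a moved value twice
```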
HVM sucks at this:
- a C program
HVM is great at this:
- an ARC-AGI program (an enumerator)
"LLMs will never solve ARC-AGI"
The best solution to it is a "higher-order program".
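For a feel of what such a higher-order program looks like, here is a minimal Python enumerator: it searches compositions of primitive functions until one explains all input/output examples. The integer primitives and examples are made up for illustration; a real ARC-AGI enumerator would search over grid transformations:

```python
from itertools import product

# Hypothetical primitive set; a real ARC-AGI enumerator would use
# grid transformations instead of integer functions.
PRIMS = {
    "inc":    lambda x: x + 1,
    "double": lambda x: x * 2,
    "square": lambda x: x * x,
}

def synthesize(examples, max_depth=3):
    """Enumerate pipelines of primitives, shortest first, and return
    the names of the first pipeline matching all (input, output) pairs."""
    for depth in range(1, max_depth + 1):
        for names in product(PRIMS, repeat=depth):
            fns = [PRIMS[n] for n in names]
            def run(x, fns=fns):
                for f in fns:   # apply the pipeline left to right
                    x = f(x)
                return x
            if all(run(i) == o for i, o in examples):
                return list(names)
    return None   # nothing found within max_depth

# Find f with f(2) = 25 and f(3) = 49, i.e. f(x) = (2x + 1)^2.
print(synthesize([(2, 25), (3, 49)]))  # ['double', 'inc', 'square']
```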
Why make HVM a great C computer? We already have CPUs for that.
Let's explore the uncharted instead.
...
- Evolutionary Algorithms
- Sparse Neural Networks
- Program Synthesis
- Automated Theorem Proving
- Satisfiability Solvers (SAT, SMT)
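As one concrete instance of those "old simple methods", here is a minimal DPLL-style SAT solver (the algorithm dates to 1962) in Python, with clauses as lists of signed integers in the usual DIMACS convention; exactly this kind of branching search is what massively parallel reduction could accelerate:

```python
def simplify(clauses, assignment):
    """Drop satisfied clauses, remove false literals; None on conflict."""
    out = []
    for clause in clauses:
        new, satisfied = [], False
        for lit in clause:
            var, want = abs(lit), lit > 0
            if var in assignment:
                if assignment[var] == want:
                    satisfied = True
                    break
            else:
                new.append(lit)
        if satisfied:
            continue
        if not new:
            return None          # empty clause: conflict under assignment
        out.append(new)
    return out

def dpll(clauses, assignment=None):
    """Minimal DPLL: unit propagation + branching.
    Clauses are lists of nonzero ints; -n means 'not n' (DIMACS style).
    Returns a satisfying assignment (dict var -> bool) or None."""
    if assignment is None:
        assignment = {}
    clauses = simplify(clauses, assignment)
    if clauses is None:
        return None
    if not clauses:              # every clause satisfied
        return assignment
    for clause in clauses:       # unit propagation: forced assignments
        if len(clause) == 1:
            lit = clause[0]
            return dpll(clauses, {**assignment, abs(lit): lit > 0})
    var = abs(clauses[0][0])     # branch on the first unassigned variable
    for value in (True, False):
        result = dpll(clauses, {**assignment, var: value})
        if result is not None:
            return result
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(dpll([[1, 2], [-1, 3], [-2, -3]]) is not None)  # True
print(dpll([[1], [-1]]))                              # None
```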
"Empirically, old simple methods, which were usually invented in the 80s and the 90s, when scaled up on very large clusters, work really well. (...) We took a simple reinforcement learning method, scaled it up, and discovered that it suddenly becomes very capable of solving extremely hard problems."
- Ilya Sutskever, on GPT-2 (Matroid Scaled ML Conference, 2019)
run higher-order programs 100x faster than HVM-CUDA
dozens of old AI algorithms could be attempted
we can build it
We're raising 0.00007 trillion
ROI: -98.57%
To make HPUs.
NOTE: ARC-AGI is a challenge aimed at solving AGI (i.e., developing a general artificial intelligence). They have a $1 million prize pool for whoever does it first.