MaiaVictor / gist:b735f9e06f8461585974
Last active Aug 29, 2015
Implementing a non-recursive fibonacci on the lambda-calculus and testing it on Optlam.js.
(This is a response to @chi and @Tom Ellis on [this post](
This is a non-recursive fib implementation, in Haskell:
import Prelude hiding (pred)

data Nat = Succ Nat | Zero deriving Show

pred :: Nat -> Nat
pred (Succ n) = n
pred Zero     = Zero
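The same "no explicit recursion" idea can be sketched in plain JavaScript with Church-style numerals, where a number is just "apply a function n times" (a sketch of the technique, not the gist's actual Optlam encoding):

```javascript
// Church-style numeral: church(n) applies f to x, n times.
function church(n){
  return function(f){ return function(x){
    var r = x;
    for (var i = 0; i < n; ++i) r = f(r);
    return r;
  };};
}

// Non-recursive fib: iterate the pair update (a, b) -> (b, a + b)
// n times starting from (0, 1), then project the first component.
function fib(n){
  var step = function(p){ return [p[1], p[0] + p[1]]; };
  return church(n)(step)([0, 1])[0];
}
// fib(10) == 55
```

No call to `fib` appears inside `fib`; all the "recursion" lives in the numeral's iteration, which is exactly what makes the definition expressible in the pure lambda calculus.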
MaiaVictor / gist:f25e12a6c85856d8f185
Created Jul 30, 2015
Just a small follow-up to the fib function on Optlam.
This is a follow-up to [my previous post](
There is a small remark to make regarding the Fibonacci function. The previous
post takes as its premise that the "natural" way to encode fib is:
fib 0 = 0
fib 1 = 1
fib x = fib (x - 1) + fib (x - 2)
But let's take a look at the actual recursive schema for the fib sequence:
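The cost of that "natural" premise can be made concrete with a quick instrumented count in JavaScript (my instrumentation, not from the gist):

```javascript
// Naive doubly-recursive fib, instrumented to count how many calls
// it takes to produce a linear-sized answer.
var calls = 0;
function fibNaive(n){
  ++calls;
  if (n < 2) return n;
  return fibNaive(n - 1) + fibNaive(n - 2);
}
var result = fibNaive(20); // 6765
// calls is now 21891: the call tree grows exponentially in n.
```

This is why the encoding chosen for the schema matters so much once terms are evaluated on Optlam: the naive schema duplicates work that a pair-iterating schema performs once.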
MaiaVictor / lambdaSource.js
Last active Oct 19, 2015
Infer the source code of a pure JavaScript function.
// This infers the source code of a pure JavaScript function,
// that is, one consisting of nothing but single-argument
// lambdas and applications, by applying it to values. Ex:
// lambdaSource(function(x){return x(x)}) == "λx.(x x)"
function lambdaSource(value){
var nextVarId = 0;
return (function normalize(value){
// This is responsible for collecting the argument list of a bound
// variable. For example, in `function(x){return x(a)(b)(c)}`, it
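The excerpt cuts off before the implementation; a minimal self-contained sketch of the same technique follows (my simplification: variable names are generated as `x0`, `x1`, … rather than whatever the original gist recovers):

```javascript
// Infer the source of a pure (lambdas + applications only) JS function
// by applying it to probe values that record how they are used.
function lambdaSource(value){
  var nextVarId = 0;
  function normalize(value){
    if (typeof value === "function" && !value.isVar){
      // A lambda: feed it a fresh variable and normalize its body.
      var name = "x" + (nextVarId++);
      return "λ" + name + "." + normalize(value(makeVar(name)));
    }
    return value.source; // a variable / application record
  }
  // A probe: calling it records the application in its source string.
  function makeVar(source){
    var v = function(arg){
      return makeVar("(" + source + " " + normalize(arg) + ")");
    };
    v.isVar = true;
    v.source = source;
    return v;
  }
  return normalize(value);
}
// lambdaSource(function(x){ return x(x); }) == "λx0.(x0 x0)"
```

The trick is that a pure term can only ever apply its arguments, so feeding it self-describing values is enough to reconstruct its whole syntax tree.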
MaiaVictor / gist:8d4909dd866dec48017b
Verifying that +maiavictor is my blockchain ID.
MaiaVictor / optlam
Last active Apr 18, 2016
simplified optlam
module.exports = (function(){
var lc = require("./lambda.js");
function Link(node, port){
    this.node = node;
    this.port = port;
}
var next_name = 0;
function Node(color){
    this.n = ++next_name;
    this.k = color;
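The excerpt stops before the wiring logic. A hypothetical sketch of how such `Node`/`Link` records might be connected (the `ports` array and `wire` helper are my assumptions, not optlam's actual code):

```javascript
function Link(node, port){
  this.node = node;
  this.port = port;
}
var next_name = 0;
function Node(color){
  this.n = ++next_name;        // unique node name
  this.k = color;              // node color (rule selector)
  this.ports = [null, null, null]; // interaction nets use 3 ports/node
}
// Hypothetical helper: connect port ai of node a to port bi of node b,
// symmetrically, so each end knows its peer.
function wire(a, ai, b, bi){
  a.ports[ai] = new Link(b, bi);
  b.ports[bi] = new Link(a, ai);
}
var n0 = new Node(1), n1 = new Node(2);
wire(n0, 0, n1, 0); // their principal ports now point at each other
```

With symmetric links like this, an annihilation/commutation rule only needs the two nodes at the ends of an active wire, which is what makes the rewrite rules local.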
MaiaVictor / lpu_v0.c
#include <stdio.h>
void swap(int* a, int* b){
    int temp = *a;
    *a = *b;
    *b = temp;
}
void swap4(int* a, int* b){
    for (int i=0; i<4; ++i)
        swap(a + i, b + i); /* presumed loop body: element-wise swap */
MaiaVictor / LPU_v1.c
#include <stdio.h>
void swap(int* a, int* b){
    int temp = *a;
    *a = *b;
    *b = temp;
}
void swap4(int* a, int* b){
    for (int i=0; i<4; ++i)
        swap(a + i, b + i); /* presumed loop body: element-wise swap */
MaiaVictor / lpu_concept.txt
Created May 7, 2016
explanation of the lpu concept
I didn't see your last message, sorry. So, grab a popcorn, I guess what I'll say will make you glad.
Remember I suggested the biggest issue with practical implementations of interaction combinators was that nodes that wire together become distant in memory? That is, as you keep allocating/freeing nodes, the heap becomes a pointer spaghetti. This takes the cache efficiency away and makes the algorithm very difficult to parallelize, because GPUs are made for vectors, not dynamically expanding graphs. So, while, in theory, we count the complexity of a λ-calculus term as the number of rewrite rules necessary to get to its normal form, in practice, the time it takes for the program to finish isn't within a constant of the number of necessary rewrite rules.
LPU solves this problem. The idea is that the memory should be split in millions of very small (currently, 128 bit) blocks, each one with a very simple processing core, and capable of communicating with its neighbors. A computer would consist of millions of th
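The locality idea can be illustrated with a toy odd-even transposition sort in JavaScript (my illustration, not LPU code): each block compares only with its immediate neighbor, yet repeated purely local steps produce a global result.

```javascript
// One step: each neighboring pair (starting at even or odd offsets,
// alternating by phase) applies a local rule; here, swap if out of order.
function step(blocks, phase){
  var next = blocks.slice();
  for (var i = phase; i + 1 < next.length; i += 2){
    if (next[i] > next[i + 1]){
      var t = next[i]; next[i] = next[i + 1]; next[i + 1] = t;
    }
  }
  return next;
}
// n alternating phases sort n blocks using only neighbor communication.
function run(blocks){
  for (var p = 0; p < blocks.length; ++p)
    blocks = step(blocks, p % 2);
  return blocks;
}
// run([3, 1, 4, 1, 5, 9, 2, 6]) -> [1, 1, 2, 3, 4, 5, 6, 9]
```

Every block only ever touches its neighbor, so all the compare-and-swaps in one phase could run in parallel, which is the property the LPU layout is after for interaction-net rewrites.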
#include <stdio.h>
// Uncomment the line with the difficulty you want to benchmark.
//const int difficulty = 1; const int memory_nodes = 20; const int clocks = 19; const int program[] = {3,2,3,-1,0,0,0,0,3,4,5,-2,0,0,0,0,3,-4,6,7,0,0,0,0,3,-6,8,9,0,0,0,0,3,-7,-8,-9,0,0,0,0,3,-5,10,11,0,0,0,0,3,-10,12,13,0,0,0,0,3,-11,-12,-13,0,0,0,0,3,-3,14,-14};
const int difficulty = 2; const int memory_nodes = 48; const int clocks = 61; const int program[] = {3,2,3,-1,0,0,0,0,3,4,5,-2,0,0,0,0,3,-4,6,7,0,0,0,0,4,-6,8,9,0,0,0,0,3,-8,10,11,0,0,0,0,3,-7,-10,12,0,0,0,0,3,-9,-11,-12,0,0,0,0,3,-5,13,14,0,0,0,0,5,-13,15,16,0,0,0,0,3,-15,17,18,0,0,0,0,3,-14,-17,19,0,0,0,0,3,-16,-18,-19,0,0,0,0,3,-3,20,-20};
//const int difficulty = 3; const int memory_nodes = 60; const int clocks = 105; const int program[] = {3,2,3,-1,0,0,0,0,3,4,5,-2,0,0,0,0,3,-4,6,7,0,0,0,0,5,-6,8,9,0,0,0,0,3,-8,10,11,0,0,0,0,3,-7,-10,12,0,0,0,0,3,13,14,-12,0,0,0,0,4,-9,15,-13,0,0,0,0,3,-15,-11,-14,0,0,0,0,3,-5,16,17,0,0,0,0,7,-16,18,19,0,0,0,0,3,-18,20,
MaiaVictor / evidence.js
Last active Aug 15, 2016
As of 2016, manually inlining JS funcalls seemingly still beats engines by a large margin
// This is supporting evidence that manually inlining functions still beats
// most JS engines. There are three equivalent functions, `a`, `b` and `c`,
// with varying levels of manual inlining. On my tests, (node v5.9.1, 2016
// macbook 12"), `c` is approx. 17 times faster than `a`, and 3.5 times faster
// than `b`. (Edit: the difference increased further by avoiding .apply)
function V3(x, y, z){ return {x: x, y: y, z: z}; }
function Qt(x, y, z, w){ return {x: x, y: y, z: z, w: w}; }
function Cam(pos, rot){ return {pos: pos, rot: rot}; }
function rotate(q, v){
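The excerpt ends before the bodies of `a`, `b`, and `c`. The flavor of the comparison can be sketched like this (illustrative bodies of my own; the gist's real functions rotate vectors by quaternions):

```javascript
function V3(x, y, z){ return {x: x, y: y, z: z}; }

// "a"-style: helper calls and intermediate allocations per step.
function add(u, v){ return V3(u.x + v.x, u.y + v.y, u.z + v.z); }
function scale(u, s){ return V3(u.x * s, u.y * s, u.z * s); }
function a(u, v, s){ return add(u, scale(v, s)); }

// "c"-style: the same math manually inlined into plain arithmetic,
// with no helper calls and a single allocation.
function c(u, v, s){
  return {x: u.x + v.x * s, y: u.y + v.y * s, z: u.z + v.z * s};
}
// Both compute u + v*s; c skips the intermediate object from scale().
```

The gist's measured 17x gap is for the full quaternion rotation, but the mechanism is the same: inlining removes call overhead and intermediate allocations the engine's optimizer did not eliminate on its own.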