Victor Taelin (VictorTaelin)

@VictorTaelin
VictorTaelin / gist:b735f9e06f8461585974
Last active August 29, 2015 14:26
Implementing a non-recursive fibonacci on the lambda-calculus and testing it on Optlam.js.
(This is a response to @chi and @Tom Ellis on [this post](http://stackoverflow.com/questions/31707614/why-are-%CE%BB-calculus-optimal-evaluators-able-to-compute-big-modular-exponentiation).)
This non-recursive fib implementation, in Haskell:
import Prelude hiding (pred)
data Nat = Succ Nat | Zero deriving Show
pred (Succ n) = n
pred Zero = Zero
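As a rough illustration of the idea, here is my own sketch in JavaScript (since the gist benchmarks on Optlam.js); none of these names come from the gist itself. It writes fib without general recursion, as n-fold iteration of a pair-stepping function over Church numerals.
// Hedged sketch, not the gist's code: Fibonacci with no recursion, as n-fold
// iteration of a step function over Church numerals and Church pairs.
var zero = f => x => x;
var succ = n => f => x => f(n(f)(x));
var add  = a => b => f => x => a(f)(b(f)(x));
var pair = a => b => p => p(a)(b);
var fst  = p => p(a => b => a);
var snd  = p => p(a => b => b);
// One step of the recurrence: (a, b) -> (b, a + b).
var step = p => pair(snd(p))(add(fst(p))(snd(p)));
// fib n = first component of step applied n times to (0, 1).
var fib  = n => fst(n(step)(pair(zero)(succ(zero))));
var toInt = n => n(x => x + 1)(0); // read a Church numeral back as a JS number
// toInt(fib(succ(succ(succ(succ(succ(zero))))))) == 5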
@VictorTaelin
VictorTaelin / gist:f25e12a6c85856d8f185
Created July 30, 2015 07:12
Just a small follow-up to the fib function on Optlam.
This is a follow-up to [my previous post](https://gist.github.com/SrVictorMaia/b735f9e06f8461585974).
There is a small remark to make regarding the Fibonacci function. The previous
post takes as its premise that the "natural" way to encode fib is:
fib 1 = 1
fib x = fib (x - 1) + fib (x - 2)
But let's take a look at the actual recursive schema for the fib sequence:
@VictorTaelin
VictorTaelin / lambdaSource.js
Last active October 19, 2015 16:58
Infer the source code of a pure JavaScript function.
// This infers the source code of a pure JavaScript function,
// that is, one consisting of nothing but single-argumented
// lambdas and applications, by applying it to values. Ex:
// lambdaSource(function(x){return x(x)}) == "λx.(x x)"
function lambdaSource(value){
var nextVarId = 0;
return (function normalize(value){
// This is responsible for collecting the argument list of a bound
// variable. For example, in `function(x){return x(a)(b)(c)}`, it
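A self-contained sketch of the trick those comments describe: apply the function to probe values that record how they are used, then read the term back from the recorded structure. This is my own simplified version, not the gist's implementation, and its variable naming differs.
// Hedged sketch of the technique, independent of the gist's code.
function probeSource(fn){
  var nextVarId = 0;
  function fresh(){                      // a probe: a callable that records the
    var name = "x" + (nextVarId++);      // arguments it has been applied to
    var make = function(args){
      var probe = function(arg){ return make(args.concat([arg])); };
      probe.isProbe = true;
      probe.name = name;
      probe.args = args;
      return probe;
    };
    return make([]);
  }
  function show(value){
    if (value.isProbe)                   // a variable applied to 0+ arguments
      return value.args.reduce(function(s, a){
        return "(" + s + " " + show(a) + ")";
      }, value.name);
    var v = fresh();                     // a real function: render it as a lambda
    return "λ" + v.name + "." + show(value(v));
  }
  return show(fn);
}
// probeSource(function(x){ return x(x); }) == "λx0.(x0 x0)"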
Verifying that +maiavictor is my blockchain ID. https://onename.com/maiavictor
@VictorTaelin
VictorTaelin / optlam
Last active April 18, 2016 17:07
simplified optlam
module.exports = (function(){
var lc = require("./lambda.js");
function Link(node, port){
this.node = node;
this.port = port;
};
var next_name = 0;
function Node(color){
this.n = ++next_name;
this.k = color;
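The preview stops before any wiring code. Purely as a hedged illustration (this helper is not taken from the gist, and it assumes each Node also carries a three-slot `ports` array, as interaction-net nodes have three ports): two ports can be connected by storing mirrored Links, so either end reaches the other in constant time.
// Hypothetical helper, assuming `node.ports = [null, null, null]` exists.
function link(nodeA, portA, nodeB, portB){
  nodeA.ports[portA] = new Link(nodeB, portB);
  nodeB.ports[portB] = new Link(nodeA, portA);
}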
#include <stdio.h>
void swap(int* a, int* b){
int temp = *a;
*a = *b;
*b = temp;
}
void swap4(int* a, int* b){
for (int i=0; i<4; ++i)
@VictorTaelin
VictorTaelin / lpu_concept.txt
Created May 7, 2016 11:52
explanation of the lpu concept
I didn't see your last message, sorry. So, grab some popcorn; I guess what I'll say will make you glad.
Remember I suggested the biggest issue with practical implementations of interaction combinators was that nodes that wire together become distant in memory? That is, as you keep allocating/freeing nodes, the heap becomes pointer spaghetti. This destroys cache efficiency and makes the algorithm very hard to parallelize, because GPUs are made for vectors, not dynamically expanding graphs. So, while in theory we count the complexity of a λ-calculus term as the number of rewrites needed to reach its normal form, in practice the time it takes for the program to finish isn't within a constant factor of that number of rewrites.
LPU solves this problem. The idea is that the memory should be split in millions of very small (currently, 128 bit) blocks, each one with a very simple processing core, and capable of communicating with its neighbors. A computer would consist of millions of th
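A toy model of that layout, written by me to illustrate the point rather than taken from any LPU design: memory as a flat array of tiny fixed-size cells, each updated in lockstep from its own state and its immediate neighbours only, so every update stays local.
// Illustrative only: a ring of small cells, each a stand-in for a 128-bit block
// with its own trivial core; all state updates are strictly local.
function makeCells(count){
  var cells = [];
  for (var i = 0; i < count; ++i) cells.push({state: i % 2});
  return cells;
}
function step(cells){
  return cells.map(function(cell, i){
    var left  = cells[(i - 1 + cells.length) % cells.length];
    var right = cells[(i + 1) % cells.length];
    // Each core sees only itself and its two neighbours.
    return {state: (left.state + cell.state + right.state) % 2};
  });
}
var cells = makeCells(8);
for (var t = 0; t < 4; ++t) cells = step(cells);
console.log(cells.map(function(c){ return c.state; }).join(""));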
#include <stdio.h>
// Uncomment the line for the difficulty level you want to benchmark.
//const int difficulty = 1; const int memory_nodes = 20; const int clocks = 19; const int program[] = {3,2,3,-1,0,0,0,0,3,4,5,-2,0,0,0,0,3,-4,6,7,0,0,0,0,3,-6,8,9,0,0,0,0,3,-7,-8,-9,0,0,0,0,3,-5,10,11,0,0,0,0,3,-10,12,13,0,0,0,0,3,-11,-12,-13,0,0,0,0,3,-3,14,-14};
const int difficulty = 2; const int memory_nodes = 48; const int clocks = 61; const int program[] = {3,2,3,-1,0,0,0,0,3,4,5,-2,0,0,0,0,3,-4,6,7,0,0,0,0,4,-6,8,9,0,0,0,0,3,-8,10,11,0,0,0,0,3,-7,-10,12,0,0,0,0,3,-9,-11,-12,0,0,0,0,3,-5,13,14,0,0,0,0,5,-13,15,16,0,0,0,0,3,-15,17,18,0,0,0,0,3,-14,-17,19,0,0,0,0,3,-16,-18,-19,0,0,0,0,3,-3,20,-20};
//const int difficulty = 3; const int memory_nodes = 60; const int clocks = 105; const int program[] = {3,2,3,-1,0,0,0,0,3,4,5,-2,0,0,0,0,3,-4,6,7,0,0,0,0,5,-6,8,9,0,0,0,0,3,-8,10,11,0,0,0,0,3,-7,-10,12,0,0,0,0,3,13,14,-12,0,0,0,0,4,-9,15,-13,0,0,0,0,3,-15,-11,-14,0,0,0,0,3,-5,16,17,0,0,0,0,7,-16,18,19,0,0,0,0,3,-18,20,
@VictorTaelin
VictorTaelin / evidence.js
Last active August 15, 2016 21:52
As of 2016, manually inlining JS funcalls seemingly still beats engines by a large margin
// This is supporting evidence that manually inlining functions still beats
// most JS engines. There are three equivalent functions, `a`, `b` and `c`,
// with varying levels of manual inlining. In my tests (node v5.9.1, 2016
// MacBook 12"), `c` is approx. 17 times faster than `a`, and 3.5 times faster
// than `b`. (Edit: the difference increased further by avoiding .apply)
function V3(x, y, z){ return {x: x, y: y, z: z}; };
function Qt(x, y, z, w){ return {x: x, y: y, z: z, w: w}; };
function Cam(pos, rot){ return {pos: pos, rot: rot}; };
function rotate(q, v){
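The preview ends at `rotate`, so the actual `a`, `b` and `c` are not shown here. Purely as an illustration of the kind of transformation the gist is measuring (my own example, not the gist's functions): the same computation written once through a small helper and an object, and once fully inlined into scalar arithmetic.
// Helper-based version: the vector lives in an object and goes through a call.
function dot(a, b){ return a.x*b.x + a.y*b.y + a.z*b.z; }
function lengthA(v){ return Math.sqrt(dot(v, v)); }
// Manually inlined version: no calls, no object allocation, just locals.
function lengthC(x, y, z){ return Math.sqrt(x*x + y*y + z*z); }
// lengthA(V3(1, 2, 2)) == lengthC(1, 2, 2) == 3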
// (b:Nat), Vector3 -> BoundedNat (b*b)
function packUnitVector(sqrtBase, v){
var f = Math.sqrt(8*v[2]+8);
var u = (v[0]/f+0.5)||0.5;
var w = (v[1]/f+0.5)||1; // renamed from `v` to avoid shadowing the input vector
return Math.round(u*sqrtBase) + Math.round(w*sqrtBase)*sqrtBase;
};
// (b:Nat), BoundedNat (b*b) -> Vector3
function unpackUnitVector(sqrtBase, e){
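The preview cuts off before the body. As a sketch only, here is my reconstruction of what the inverse could look like, assuming the forward function above is the usual sphere-map projection (x and y divided by sqrt(8z+8), shifted to [0,1], then quantized on a sqrtBase by sqrtBase grid); this is not the gist's code.
// Hypothetical inverse of packUnitVector, derived from the forward formula above.
function unpackUnitVectorSketch(sqrtBase, e){
  var u = (e % sqrtBase) / sqrtBase;            // undo the grid quantization
  var w = Math.floor(e / sqrtBase) / sqrtBase;
  var fx = (u - 0.5) * 4;                       // back onto the projection plane
  var fy = (w - 0.5) * 4;
  var f  = fx*fx + fy*fy;
  var g  = Math.sqrt(1 - f/4);
  return [fx*g, fy*g, 1 - f/2];                 // approximately unit length
}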