@VictorTaelin
Last active January 5, 2024 14:55
thoughts
// THOUGHTS
//
// Consider FP: ∃f ∀a ∀b (f (mul a b) b) == a
//
// FP can be proven with a pair:
//
// - F: (a b: Nat) -> Nat
// - P: (a b: Nat) -> (F (mul a b) b) == a
//
// In the proof P, we pattern-match on 'a' and 'b'. Note that EVERY equality can
// be proven with `refl`, as long as all free variables have been consumed! That
// is, with an infinite pattern-match, P is always:
//
// P: ∀a ∀b (f (mul a b) b) == a
//  = λa λb
//    match a {
//      succ: match b {
//        succ: ...
//        zero: match a.pred {
//          succ: ...
//          zero: refl
//        }
//      }
//      zero: match b {
//        succ: ...
//        zero: refl
//      }
//    }
//
// Effectively, this proof generates equalities for every 'a' and 'b', which
// restrict and "sculpt" the shape of 'f'. For example, for a=4 and b=3, we
// would have, within the proof, some "refl" of the type:
//
// (f (mul 4 3) 3) == 4
// --------------------
// (f 12 3) == 4
//
// Now, if we do the same with `F`; that is, if we perform the infinite
// pattern-match on its inputs (which are `(mul a b)` and `b`), then, in the
// branch where the first input is 12 and the second is 3, we would have:
//
//   F 12 3 == 4
//
// Thus, 'F' would be reconstructed as:
//
// F: Nat -> Nat -> Nat
//  = λa λb
//    match a b {
//      0  0: whatever
//      0  1: 0
//      0  2: 0
//      ...
//      1  0: whatever
//      1  1: 1
//      1  2: whatever
//      ...
//      2  0: whatever
//      2  1: 2
//      2  2: 1
//      2  3: whatever
//      ...
//      3  1: 3
//      ...
//      3  3: 1
//      ...
//      4  1: 4
//      4  2: 2
//      ...
//      4  4: 1
//      ...
//      12 3: 4
//      ...
//    }
//
// In other words, `F` becomes a function that pattern-matches on all the
// infinite cases of `a` and `b`, and returns `a/b` whenever `a` is divisible
// by `b` (and "anything" otherwise, because in those cases there is no
// equality to respect).
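//
// The "sculpting" step can be sketched concretely. A minimal sketch in
// Python (hypothetical names, not part of the gist): every instance of the
// spec contributes one equality F (a*b) b == a, and collecting them yields
// exactly a finite prefix of the table above. We restrict to b >= 1, since
// the b = 0 instances all constrain the same input (F 0 0) to different
// outputs — a corner this sketch sidesteps.

```python
def sculpt_F(bound):
    """Collect the equalities F (a*b) b == a for a < bound, 1 <= b < bound."""
    table = {}
    for a in range(bound):
        for b in range(1, bound):
            table[(a * b, b)] = a  # the equality F (a*b) b == a
    return table

table = sculpt_F(13)
# table[(12, 3)] == 4 and table[(4, 2)] == 2; inputs such as (5, 3)
# are never produced by any equality — those are the "whatever" entries.
```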
//
// With this, *every* specifiable function can be automatically generated by the
// proposed algorithm! Just specify the function with a Sigma (∃) and it will be
// generated, in a simple and direct way. The only problem is that the generated
// function would not be very efficient. For example, a list sorting algorithm
// made in this way would basically perform a complete pattern-match of the
// list, returning the entire sorted list for each case:
//
// sort: List -> List
//  = λxs match xs {
//      []    => []
//      [0]   => [0]
//      [1]   => [1]
//      [2]   => [2]
//      ...
//      [0,0] => [0,0]
//      [0,1] => [0,1]
//      [1,0] => [0,1]
//      [1,1] => [1,1]
//      [2,0] => [0,2]
//      [2,1] => [1,2]
//      [2,2] => [2,2]
//      ...
//    }
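//
// A minimal Python sketch (hypothetical names) of this table-like sort:
// enumerate every list up to some bound and map each one to its sorted
// image — the infinite match above, truncated to a finite prefix.

```python
from itertools import product

def sculpt_sort(max_len, max_val):
    """Build the finite prefix of the table-sort: every list of length
    <= max_len over 0..max_val, mapped to its sorted form."""
    table = {}
    for n in range(max_len + 1):
        for xs in product(range(max_val + 1), repeat=n):
            table[xs] = tuple(sorted(xs))
    return table

table = sculpt_sort(2, 2)
# table[(1, 0)] == (0, 1), table[(2, 1)] == (1, 2), table[()] == ()
```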
//
// And so on. Therefore, the next question is: given a self-generated
// function, how do we optimize it into a more efficient algorithm? The
// function `half` is a good example. If generated from the specification
// "half is the inverse of double":
//
// ? half: ∃f. ∀n. (f (double n)) == n
//
// The result would be something like:
//
// half = λn
//   match n {
//     0: 0
//     1: whatever
//     2: 1
//     3: whatever
//     4: 2
//     5: whatever
//     6: 3
//     ...
//   }
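//
// This table can also be derived mechanically. A small Python sketch
// (hypothetical names): each instance of the spec f (double n) == n pins
// f down at one even input; odd inputs are touched by no equality, so they
// stay unconstrained (None standing in for "whatever").

```python
def sculpt_half(bound):
    """Collect the equalities f (double n) == n for n < bound."""
    table = {n: None for n in range(2 * bound)}  # None = "whatever"
    for n in range(bound):
        table[2 * n] = n  # double n = 2 * n, so f (2*n) == n
    return table

table = sculpt_half(4)
# {0: 0, 1: None, 2: 1, 3: None, 4: 2, 5: None, 6: 3, 7: None}
```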
//
// Which, in a more complete expansion, would have the following form:
//
// half = λn
//   match n {
//     zero: zero
//     (succ p): match p {
//       zero: whatever
//       (succ p): match p {
//         zero: (succ zero)
//         (succ p): match p {
//           zero: whatever
//           (succ p): match p {
//             zero: (succ (succ zero))
//             (succ p): match p {
//               zero: whatever
//               (succ p): match p {
//                 zero: (succ (succ (succ zero)))
//                 (succ p): ...
//               }
//             }
//           }
//         }
//       }
//     }
//   }
//
// It is not difficult to see that this function can be optimized by reusing
// shared "succs", as follows:
//
// half = λn
//   match n {
//     zero: zero
//     (succ p): match p {
//       zero: whatever
//       (succ p): succ match p {
//         zero: zero
//         (succ p): match p {
//           zero: whatever
//           (succ p): succ match p {
//             zero: zero
//             (succ p): match p {
//               zero: whatever
//               (succ p): succ match p {
//                 zero: zero
//                 (succ p): ...
//               }
//             }
//           }
//         }
//       }
//     }
//   }
//
// This function, in turn, can be simplified by noticing that it consists of
// a single pattern repeated infinitely. That is, we can write half as:
//
// half = μf λn
//   match n {
//     zero: zero
//     (succ p): match p {
//       zero: whatever
//       (succ p): succ (f p)
//     }
//   }
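//
// The μf form transcribes directly into ordinary recursion. A minimal
// Python rendering (returning 0 for the unconstrained "whatever" case is
// an arbitrary choice, not forced by the spec):

```python
def half(n):
    """Recursive form of the optimized half: strip two succs per step."""
    if n == 0:
        return 0            # zero: zero
    if n == 1:
        return 0            # whatever — unconstrained by the spec
    return 1 + half(n - 2)  # (succ (succ p)): succ (f p)

# half(6) == 3 and half(12) == 6, matching the table derived above.
```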
//
// It is interesting to note that, in an optimal evaluator, the less efficient
// 'half' function would naturally transform into the more efficient one.
@bmorphism
abrupt learning is about finding those compact polynomial weights and representing them succinctly
