
Andreas Stuhlmüller stuhlmueller

@stuhlmueller
stuhlmueller / linear-logic.md
Last active October 13, 2023 19:46
GPT-4 explains linear logic, proof nets, and the geometry of interaction

User: Can you explain proof nets and the geometry of interaction to me in crisp, clear terms?

Assistant: Sure, I'll do my best to explain these concepts in a simple way.

  1. Proof Nets: Proof nets are a graphical representation of proofs in linear logic. They were introduced to overcome the problem of redundancy in the sequent calculus representation of proofs. In simple terms, proof nets are a way to represent logical proofs as graphs, where nodes represent logical operations and edges represent the flow of information. They provide a way to check the correctness of a proof in a more efficient manner than traditional methods.

  2. Geometry of Interaction (GoI): The Geometry of Interaction is a mathematical framework that was introduced to understand the dynamics of computation in linear logic and other computational systems. It provides a geometric interpretation of computation, where computations are seen as movements in a geometric space.
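As a concrete anchor for the notation behind this explanation (standard linear-logic definitions, not part of the original chat): linear implication is definable from negation and "par", and tensor and par are De Morgan duals:

```latex
A \multimap B \;\equiv\; A^{\perp} \parr B
\qquad
(A \otimes B)^{\perp} \;\equiv\; A^{\perp} \parr B^{\perp}
```

Proof nets are graphs over formulas built from these connectives, and correctness criteria (such as Danos–Regnier switching) are stated directly on the graph rather than on a sequent derivation.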

@stuhlmueller
stuhlmueller / codex-refactor.el
Last active February 20, 2022 09:55
OpenAI Codex for refactoring in Emacs
(defun codex-refactor (&optional b e)
  "Pipe the active region (B to E) through the codex-refactor.py script."
  (interactive "r")
  (shell-command-on-region b e "codex-refactor.py" nil nil))
@stuhlmueller
stuhlmueller / codex.el
Last active May 23, 2022 09:40
OpenAI Codex in Emacs
(defun codex (&optional b e)
  "Pipe the active region (B to E) through the codex.py script."
  (interactive "r")
  (shell-command-on-region b e "codex.py" nil nil))
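The codex.py script itself is not included in the gist. A minimal sketch of what such a stdin-to-stdout filter might have looked like at the time, using only the standard library; the endpoint, engine name, and parameters are assumptions based on the legacy OpenAI engines API, not the author's actual script:

```python
import json
import os
import sys
import urllib.request


def build_request(code, api_key, engine="code-davinci-002"):
    """Build an HTTP request for the (legacy) OpenAI engine completions endpoint."""
    body = json.dumps({"prompt": code, "max_tokens": 256, "temperature": 0}).encode()
    return urllib.request.Request(
        "https://api.openai.com/v1/engines/%s/completions" % engine,
        data=body,
        headers={"Authorization": "Bearer " + api_key,
                 "Content-Type": "application/json"},
    )


def main():
    # Read the Emacs region from stdin, print the completion to stdout.
    code = sys.stdin.read()
    req = build_request(code, os.environ["OPENAI_API_KEY"])
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["choices"][0]["text"])

# To use as a shell filter from Emacs, call main() under a __main__ guard.
```

`shell-command-on-region` with trailing `nil nil` arguments displays the filter's output in a separate buffer rather than replacing the region.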
"""
Converts a directory with Rev.com transcripts into the dataset format required by Elicit.
Format for transcripts:
https://www.rev.com/transcript-editor/Edit?token=12312
Jane Doe:
Good morning. Thank you everyone for joining our webinar. My name is Jon Doe.
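The gist preview cuts off after the docstring. A minimal sketch of a parser for the transcript format it describes (function and regex names are my own, not from the gist): a URL header line, speaker lines ending in a colon, and utterance lines that follow each speaker.

```python
import re

# A speaker line is a bare name followed by a colon, e.g. "Jane Doe:".
SPEAKER_RE = re.compile(r"^([A-Za-z .'-]+):$")


def parse_transcript(text):
    """Split a Rev.com-style transcript into (speaker, utterance) pairs."""
    turns = []
    speaker, buf = None, []
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("https://"):
            continue  # skip the transcript-editor URL header
        m = SPEAKER_RE.match(line)
        if m:
            if speaker and buf:
                turns.append((speaker, " ".join(buf)))
            speaker, buf = m.group(1), []
        elif line:
            buf.append(line)
    if speaker and buf:
        turns.append((speaker, " ".join(buf)))
    return turns
```

Running this on the docstring's example yields one turn attributed to "Jane Doe".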

Elicit tasks as fold and unfold

Many of the tasks we care about either expand a tree by one layer (add factors, subquestions, datasets) or summarize a tree layer into one or a few nodes (summarize, cluster, rank).

This raises two questions:

  1. Is there an existing way to characterize these tasks so that we can make an app that supports all and only the ones we want?
  2. Language models have a finite context window. How can we clearly think about tasks that operate on arbitrarily large inputs?

Technically, we can think of expansion tasks as unfolds (creating structure) and summarization-like tasks as folds (reducing structure). They're defined like this:
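A minimal Python sketch of these two shapes over sequences (illustrative only; these are the textbook definitions, not Elicit's actual implementations):

```python
from typing import Callable, Iterable, List, Optional, Tuple, TypeVar

A = TypeVar("A")
B = TypeVar("B")


def fold(f: Callable[[B, A], B], acc: B, xs: Iterable[A]) -> B:
    """Collapse a sequence into one value (summarize, cluster, rank)."""
    for x in xs:
        acc = f(acc, x)
    return acc


def unfold(f: Callable[[B], Optional[Tuple[A, B]]], seed: B) -> List[A]:
    """Grow a sequence from a seed until f returns None (expand by one layer)."""
    out: List[A] = []
    step = f(seed)
    while step is not None:
        x, seed = step
        out.append(x)
        step = f(seed)
    return out
```

The same shapes generalize from lists to trees: an unfold expands a node into children, and a fold summarizes children back into a node, which is what lets tasks over arbitrarily large inputs be decomposed into context-window-sized steps.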

var model = function() {
  var x = uniform(0, 1);
  var yERP = Enumerate(function() {
    return flip(x);
  });
  var y = sample(yERP);
  return x + y;
};

MCMC(model, { kernel: { HMC: { steps: 10, stepSize: 1 }},
              samples: 1000 });  // samples count assumed; the gist is truncated here
@stuhlmueller
stuhlmueller / mcmc.js
Last active September 18, 2015 16:39
function MCMC(s, k, a, wpplFn, options) {
  var options = _.defaults(
      _.clone(options),
      { samples: 100,
        kernel: MHKernel,
        lag: 0,
        burn: 0 });
  var log = function(s) {
Verifying I am +stuhlmueller on my passcard. https://onename.com/stuhlmueller

Keybase proof

I hereby claim:

  • I am stuhlmueller on github.
  • I am stuhlmueller (https://keybase.io/stuhlmueller) on keybase.
  • I have a public key whose fingerprint is E5E1 83AF 5944 BC22 E0D9 65E9 3E9E EE48 AE46 5D41

To claim this, I am signing this object: