Confluence. It's not just a place where you can complain about colleagues to other colleagues.

It's also a property that is quite useful in many areas of CS!

As a reminder, a (non-deterministic) reduction relation --> is confluent if for every (multi-step) "peak" u *<-- t -->* v (I use -->* for the multi-step version of -->) there is a corresponding "valley": u -->* w *<-- v.

This basically says that your computation can't be "too" non-deterministic: there's always a way to reconcile disagreements between different choices.
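If it helps to make the definition concrete, here is a tiny brute-force check for a finite system, with the relation given as a dict of one-step reducts (the representation and names here are my own, just for illustration):

```python
def reachable(system, t):
    """All v with t -->* v: the reflexive-transitive closure, computed by search."""
    seen, todo = {t}, [t]
    while todo:
        x = todo.pop()
        for y in system.get(x, ()):
            if y not in seen:
                seen.add(y)
                todo.append(y)
    return seen

def joinable(system, u, v):
    """Is there a valley, i.e. some w with u -->* w *<-- v?"""
    return bool(reachable(system, u) & reachable(system, v))

def confluent(system):
    """Every multi-step peak u *<-- t -->* v has a valley u -->* w *<-- v."""
    return all(joinable(system, u, v)
               for t in system
               for u in reachable(system, t)
               for v in reachable(system, t))

# The classic diamond a --> b, a --> c, b --> d, c --> d is confluent:
print(confluent({'a': {'b', 'c'}, 'b': {'d'}, 'c': {'d'}, 'd': set()}))  # True
```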

Here are a couple of examples that suggest this might be a useful property:

  • In good ol' equational reasoning, if you have a bunch of equations like x * x = x and x * 1 = x, you're in good shape if you can turn all those equations into a bunch of rewrite rules of the form x * 1 --> x. If, in addition, your system of rules turns out to be confluent and terminating, you've got a decision procedure for your theory! To check whether two expressions (possibly containing variables) are equal, just rewrite them as much as possible and compare the normal forms: if they are identical, then you have equal terms! Otherwise, you have proven that they cannot be equal in general. (There's a small sketch of this after the list.)

  • In type theory, it's extremely useful to have confluence if you have type-level equations (or computation): in general you will lose consistency, or even type safety, if you don't! Here's a fun example where it fails: https://gitlab.haskell.org/ghc/ghc/-/issues/1496.

  • Probably the most intuitive example is in concurrency: you have a bunch of concurrent actions, and they may trigger in some unspecified order. Say you're modifying a database of users. Some actions must come before others: the creation of a user must come before you try to read any data associated with them. But this means you need to synchronize those actions to ensure the "correct" order, and you lose a lot of the advantage of concurrency! Other actions, such as adding distinct users to the database, feel like they can happen in any order, with only mild "corrective" actions needed to make them truly order-independent. Basically, the whole theory of concurrency is about how to define things so that we can have these "confluence-like" properties.

  • Category theory.
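
To make the first bullet concrete, here's a minimal sketch of such a decision procedure for just the two rules x * 1 --> x and x * x --> x (this little system happens to be terminating, and one can check it's confluent; the tuple representation of terms and the function names are my own choices for the example):

```python
# Rules: x * 1 --> x and x * x --> x, hard-coded.
# Terms: a variable/constant is a string, an application of * is ('*', left, right).

def rewrite_root(t):
    """Apply a rule at the root if one matches, otherwise return t unchanged."""
    if isinstance(t, tuple) and t[0] == '*':
        _, a, b = t
        if b == '1':   # x * 1 --> x
            return a
        if a == b:     # x * x --> x
            return a
    return t

def normalize(t):
    """Rewrite as much as possible: normalize subterms, then the root, to a fixpoint."""
    if isinstance(t, tuple) and t[0] == '*':
        t = ('*', normalize(t[1]), normalize(t[2]))
    s = rewrite_root(t)
    return t if s == t else normalize(s)

def equal_in_theory(s, t):
    """Compare normal forms -- a decision procedure only because the rules are confluent and terminating."""
    return normalize(s) == normalize(t)

print(equal_in_theory(('*', ('*', 'x', 'x'), '1'), 'x'))  # True:  (x * x) * 1 = x
print(equal_in_theory(('*', 'x', 'y'), 'x'))              # False: different normal forms
```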

Ok, so confluence is useful. How do we prove that a system is, indeed, confluent? One pretty obvious way is to show that each one-step peak reduces to a one-step valley: if u <-- t --> v then u --> w <-- v for some w.

But that's asking for a lot: even the kindly lambda calculus doesn't have this property! Take

t = (\x. x x)((\y. y) z)
u = ((\y. y) z)((\y. y) z)
v = (\x. x x) z

You can see that the reduction t --> u "injures" the corresponding t --> v reduction: it duplicates the redex that t --> v contracts, so two further reductions are needed from u to reach the common w (in this case z z), where a single step from v suffices.
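
Spelling out the valley:

From u: ((\y. y) z)((\y. y) z) --> z ((\y. y) z) --> z z
From v: (\x. x x) z --> z z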

We have here a weak confluence property: for every one-step peak u <-- t --> v there is a many step valley: u -->* w *<-- v.

But this isn't enough to ensure confluence! Observe this weakly confluent system:

c <-- a <--> b --> d

It is not confluent! Both c and d are reachable from a (c directly, and d via b), but they are distinct normal forms, so the peak c *<-- a -->* d has no valley. Intuitively, "injured" reductions may be further injured in subsequent peaks, to the extent that they may never recover to end up at the same w.
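
You can also feed this four-element system to the toy checker sketched above (reusing joinable and confluent from there; weakly_confluent is the obvious one-step variant):

```python
# c <-- a <--> b --> d, as a dict of one-step reducts.
system = {'a': {'b', 'c'}, 'b': {'a', 'd'}, 'c': set(), 'd': set()}

def weakly_confluent(system):
    """Every ONE-step peak u <-- t --> v has a (possibly many-step) valley."""
    return all(joinable(system, u, v)
               for t in system
               for u in system[t]
               for v in system[t])

print(weakly_confluent(system))  # True:  every one-step peak can still be rejoined
print(confluent(system))         # False: the peak c *<-- a -->* d has no valley
```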

Generally, the way to prove that a weakly confluent system is indeed confluent is to prove that "injuries are bounded", that is, that every peak gives rise to sub-peaks that are somehow "less harmful".

Here is an example of a theorem that exploits this:

Newman's lemma:

If --> is a terminating relation that is also weakly confluent, then it is confluent.

The proof is a fun "diagram chase", of the kind category theorists seem to enjoy as part of a healthy breakfast.

We proceed by well-founded induction on the --> relation: we consider a peak u *<-- t -->* v, and assume that confluence holds for every term reachable from t in at least one step. I adore such proofs, by the way.

Let's split the peak into u *<-- u0 <-- t --> v0 -->* v (we're omitting cases where the -->* reduction takes 0 steps, but those cases are trivial).

Now by weak confluence, applied to the one-step peak u0 <-- t --> v0, we have u0 -->* w0 *<-- v0. This creates two new peaks, u *<-- u0 -->* w0 and w0 *<-- v0 -->* v, but we can apply the IH to these (u0 and v0 are both reachable from t in one step)! That gives us valleys:

u -->* w_u *<-- w0 and w0 -->* w_v *<-- v, and we can apply the IH one last time, to w0, to get w_u -->* w *<-- w_v. Putting the pieces together, u -->* w_u -->* w and v -->* w_v -->* w, which is the valley we wanted between u and v. Whew!
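
If you like pictures, here is the whole chase laid out as a tiling of squares (plain arrows are single steps, starred arrows are -->*; the top-left square is weak confluence at t, the bottom-left and top-right squares are the IH at u0 and v0, and the bottom-right square is the IH at w0):

```
t ------------> v0 ----*----> v
|               |             |
|               *             *
v               v             v
u0 -----*-----> w0 ----*----> w_v
|               |             |
*               *             *
v               v             v
u -----*------> w_u ---*----> w
```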

Ok this is cool, but the lambda-calculus doesn't terminate, and concurrent systems are usually designed to run forever, so this lemma doesn't feel like it helps much.

That's not true, however! If you exclude reductions of so-called created redexes from the lambda-calculus, that is, redexes formed by substituting a lambda for x in a term of the form x u (for example, (\x. x a)(\y. b) --> (\y. b) a creates the redex (\y. b) a), you can show (but it's not trivial!) that reduction is terminating!

But this contains the usual one-step beta, since existing beta redexes are not created (they already exist!). And if a given reduction --> is included in another reduction, say ~~>, that is both confluent and itself contained in -->*, then that automatically implies confluence for -->! The proof of this is quite easy, try it out!

Finding such a terminating "intermediate" relation is a nice trick: a lot of concurrent systems will have such a reduction, where you get to run all actions until a "synchronization point". So the lemma is not so useless after all!

But ok, we want to generalize this idea of "finite injury", to apply in the most general possible setting. Well it turns out someone very clever figured out how to do this in the 90s! The technique (which is really a theorem, but it's called a technique for some reason) is van Oostrom's "Decreasing Diagrams Method". It can be used to generalize almost all known methods to prove confluence!

Maybe I'll explain it at some point.
