Coherentism

I've been thinking a lot about primitives for epistemic justification: in particular Foundationalism vs Coherentism, their relationship to induction vs coinduction, and Productivity as the Coherentist analog of grounded arguments, ensuring that each step (of a potentially cyclic proof) does some useful and necessary work. Foundationalism is the standard for formal logic, and has gotten us quite far.

"According to the foundationalist option, the series of beliefs terminates with special justified beliefs called “basic beliefs”: these beliefs do not owe their justification to any other beliefs from which they are inferred."

It accounts for how truth is preserved under composition, flowing from true assumptions while avoiding circular reasoning. It has deep connections with computation via Curry-Howard, where well-founded constructive proofs correspond to terminating computable functions.

On the other hand, it appears to have limits - raising the obvious question of how to prefer some axioms over others. In practice, too, there seems to be a vast class of knowledge unaccounted for - mostly in the domains of psychology and medicine, but also behind the scenes of every successful visionary, institutional knowledge, artistic muse, and pre-paradigmatic field.

Mathematicians, for example, spend years carrying strong intuitions, letting them solve a host of concrete problems, before being able to write down a general grounded theory of those problems. The foundationalist view is that such intuitions are clever heuristics which must still be grounded to be justified - but why should this be the case when intuition is in some sense prior to propositional knowledge? At its worst, foundationalism leads to Goodharting on legible arguments to the exclusion of obvious intuitive objections.

"Coherentism, however, proposes a 'holistic' view of justification. On this kind of view, the primary bearer of epistemic justification is a system of beliefs."

This is a compelling stance for natural reasoning via self-trust, where our only foundation is embedded coherence by birthright. Care must be taken, however, to avoid circular reasoning, and the difficulty of doing so is part of why coherentism has fallen out of favor.

Just as every foundationalist justification must terminate on basic beliefs, coherentist justifications have an analogous soundness criterion:

  • A mesh of interlocking beliefs, each part of which is both necessary (produces a part of the meaning), and coherent (connects with all other parts without contradicting any other part).

This may seem like an arbitrary criterion, but it follows directly as a complement to foundationalist justification by analogy to mathematical logic, and turns out to be exactly what you need to avoid circular reasoning in a very precise sense.


Coinduction

Foundationalist justifications map via Curry-Howard to inductive proofs, and well-foundedness translates to termination of such proofs. Correspondingly, Coherentist justifications map to coinductive proofs, and well-foundedness translates to productivity of such proofs. Whether a statement is inductive or coinductive is determined by the structure of the statement itself, and so it inherits a natural mode of justification.

A Toy Example:

Consider the inductive type of (Peano) natural numbers:

data Nat = Z | S Nat

(A natural number is either zero or the successor of some natural number.) It inherits a natural induction operation which uniquely characterizes all functions out of it:

induct :: forall r. (r, r -> r) -> (Nat -> r)
induct (init, step) = recurse where
  recurse Z     = init             -- base case: ground out in the basic belief
  recurse (S n) = step (recurse n) -- recurse first, then take the inductive step

We can prove that every function of type Nat -> r can be written as an application of induct to some inductive step function step :: r -> r and initial return value init :: r.
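For a concrete instance (toInt is an illustrative name, not part of the original): collapsing a Nat into an Int is just induct applied to the basic belief 0 and the step (+1).

toInt :: Nat -> Int
toInt = induct (0, (+1))
-- toInt (S (S Z)) == 2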

Notice that we must compute the recursive call recurse n before making the inductive step, relying on the fact that we'll eventually reach the base case Z to ground out in our basic belief init.

On the other hand, consider infinite streams of data:

data [a] = a : [a]

with no base case at all! (A stream over a is a head element and a tail stream.) Nevertheless we can have well-formed data of this form - rather than giving it by cases, we must describe a process to generate the entire stream:

unfold :: (a -> a, a) -> [a]
unfold (step, init) = init : unfold (step, step init) -- emit init before recursing

which uniquely describes all ways of constructing streams, similarly to how induct characterizes all ways of destructing naturals. For example:

numbers :: [Nat]
numbers = unfold (S, Z)

creates the infinite stream 0 : 1 : 2 : 3 : 4 : ....

Notice that, in contrast to induct, unfold produces a part of the output (init :) before the recursive call, so even if the unfold process never terminates (it doesn't), we produce some finite amount of output for every finite number of steps. This property is called productivity, dual to the notion of termination for inductive structures. It is exactly what is needed to rule out circular reasoning (where we spin forever without producing any intermediate results), just as termination is exactly what we need to rule out groundless inductive foundations.
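To see productivity concretely, here is a small demo (firstFive is my name for it): laziness lets us take any finite prefix of the never-terminating numbers in finite time.

firstFive :: [Nat]
firstFive = take 5 numbers
-- [Z, S Z, S (S Z), S (S (S Z)), S (S (S (S Z)))]

Because unfold emits init : before recursing, take always has something to consume; an unproductive definition like loop = loop would instead spin forever.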

On Circular Reasoning

Note that "avoiding circular reasoning" doesn't mean "avoiding loopy proofs" - indeed every coherentist proof necessarily loops back on itself, relying on its internal consistency for foundation. Sometimes looping back yields a contradiction, and the proof fails. The structural form of a coinductive proposition P guarantees that every implication (branch of the data) gets considered (in the Stream case, just the head and the tail), so that any difficulty that would arise causing P not to hold would manifest itself in the attempt to prove the coinductive step.

For example, we might like to prove that every element of the stream 0 : 1 : 0 : 1 : ... is less than 2. To do so, we need only prove that 0 and 1 are less than 2 (trivial), and coinductively rely on the fact that the stream loops back on itself to prove the remainder.
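As a sketch in code (zeroOnes is an illustrative name; I use Int elements for brevity), the stream ties the knot on itself, and the coinductive step only ever has to examine one trip around the loop:

zeroOnes :: [Int]
zeroOnes = 0 : 1 : zeroOnes -- the tail loops back to the stream itself

-- head: 0 < 2 (trivial); next: 1 < 2 (trivial);
-- tail: zeroOnes again, covered by the coinductive hypothesis.

Each unrolling is productive (it discharges a concrete element before looping), which is what makes the loop coherent rather than circular.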

This form of reasoning looks a lot like transfinite induction, but I argue it's naturally more coherentist - and transfinite induction is a foundationalist attempt to squeeze it into that setting - complicating it in the process. Despite its intuitive naturality, formal coinduction is seen as more complex than induction, in part because of a forward bias asymmetrically taking proofs as more fundamental than refutations.

"CoKnowledge "

Consider the slogans for induction and coinduction:

"A property holds by induction if there is good reason for it to hold."
"A property holds by coinduction if there is no good reason for it not to hold."

Seen this way, coinduction is most naturally about the absence of refutation. In polarized linear logic, for an epistemic position P := (P⁺, P⁻) (interpreted as "what you'd accept as evidence" and "what you'd accept as counter-evidence"), there is a distinction between:

immediate (linear) proofs      ---- P triv   := proof of P⁺
immediate (linear) refutations ---- P absurd := proof of P⁻
justified refutations          ---- P false  := ¬ P triv
justified proofs               ---- P true   := ¬ P absurd

where in addition to the usual intuitionistic derived negation ¬ P := P → ⊥, there's a primitive dualization P† := (P⁻, P⁺) swapping the position.
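As a speculative Haskell rendering of these four judgements (entirely my own sketch - Haskell's arrows are not linear, so this only gestures at the structure, with Void standing in for ⊥):

import Data.Void (Void)

type Not a = a -> Void         -- derived negation ¬ a := a → ⊥

-- A position P := (P⁺, P⁻): pos is accepted evidence, neg accepted counter-evidence.
type Triv   pos neg = pos      -- P triv   := proof of P⁺
type Absurd pos neg = neg      -- P absurd := proof of P⁻
type FalseJ pos neg = Not pos  -- P false  := ¬ (P triv)
type TrueJ  pos neg = Not neg  -- P true   := ¬ (P absurd)

-- Dualization P† := (P⁻, P⁺) just swaps the type arguments:
-- Triv neg pos = Absurd pos neg, and TrueJ neg pos = FalseJ pos neg.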

So induction (and foundationalism) is about judgements "P triv" while coinduction (and coherentism) is about judgements "P true", which include many more dynamic inhabitants.

Standard (intuitionistic) logic has gotten by so far by restricting itself to static structure (triv), banishing refutation to second-class status (false). Linear logic, by contrast, is fundamentally about concurrent interacting resources more than platonic absolutes, and so is more natural for a theory of embedded reasoning, which is necessarily dynamic in the world. Similarly, in practice coinduction is most often used to prove properties of automata such as state machines, distributed protocols, and formal languages.

Refutation-based reasoning is still poorly understood, but corresponds to metasystematic proof search, where primitive refutations are computationally interpreted as continuations, describing how to "backtrack" your process in case of contradiction, to root out incoherent assumptions.
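A minimal sketch of that computational reading (the standard continuation/double-negation translation; the names are mine): a primitive refutation of a is a continuation consuming a, and a justified proof is whatever defeats every such refutation.

type Refutation r a = a -> r               -- a refutation: a continuation consuming a
type Justified  r a = Refutation r a -> r  -- a justified proof: defeats every refutation

-- Running a justified proof against a refutation: if the refutation fires,
-- control "backtracks" through the continuation.
runAgainst :: Justified r a -> Refutation r a -> r
runAgainst proof refute = proof refute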


Synthesis

Neither story is complete on its own - my aim is a synthesis which can describe the full epistemic capacity of intellect, including symbol grounding and reflection.

Consider the complementary progress of knowledge:

  • In math, we ride coherent intuition until we can distill a foundational theory.
  • In science, we test hypothetical models until we can falsify them with empirical data (or fail to, so strengthening our beliefs).
  • This empirical data is then used as intuition fuel to distill the next theory, and so on.