(* Sources to "NaturalDeduction_and_SequentCalculus.html". *)
Require Import Coq.Strings.String.
Require Import Coq.Lists.List.
(** * A taste of Curry-Howard
In particular, how do we show that a λ-calculus term is a valid proof of a logical proposition?
** An overview
To be more specific: I assume you've worked before with deduction systems that are given in the assumptions-over-conclusion notation. (If you've seen them stacked on top of each other in proof trees, so much the better.) In particular, if you understand a definition, given in that form, of a relation like the has-type or evaluates-to relation, you should be set.
Throughout we make clear the distinction between metalogic - Coq - and logic.
Our logic is the implicational logic: atomic propositions and the connective of implication, [->]. Two formalisms of proof ("proof systems") for this logic are then defined in the metalogic: natural deduction and the sequent calculus.
An interesting interlude follows: what distinguishes the two? Conversely, what unites the two - how come we aren't comparing apples to oranges? Equally interesting: all our proofs of logical statements - acts of reasoning with the rules that bind the proof system - are given by metalogical proofs, written in Coq's term language. What is a proof on the level of the logic itself?
Introducing the Curry-Howard correspondence answers this question, and restores the propositional logic to full ability: we can now give proofs at the level of the propositional logic, with the λ-calculus. We show that the ability to give logical proofs shouldn't change the metalogical proofs of the same statements.
*)
(** ** Utility definitions
We use lists to represent bunches of assumptions and conclusions; when working on them, we need to add elements to both the front and the back of lists. So we provide a uniform notation for both operations:
*)
Notation "a :+ B" := (cons a B) (at level 57, right associativity).
Notation "B +: c" := (app B (c :: nil)) (at level 58, left associativity).
(**
Note that in the operators, [:] is always on the side of the element, as in [::]; and [+] is on the side of the list, as in [++].
*)
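(** As a quick sanity check of the notations - [:+] conses onto the front, while [+:] appends at the back: *)
Example front_and_back : 0 :+ (1 :: nil) +: 2 = 0 :: 1 :: 2 :: nil.
Proof. reflexivity. Qed.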
(** ** The logic: propositional logic
We deal with implicational logic: atomic propositions, and implications between propositions ([->]). These are the terms of our logic. (It's another thing entirely to say how we can manipulate these terms - that comes next.)
*)
Inductive prop : Set :=
| Atom : string -> prop
| Impl : prop -> prop -> prop.
Notation "A => B" := (Impl A B) (at level 56, right associativity).
(** ** The formalisms of proof - natural deduction and the sequent calculus
We assume that what we want to do with a logic is, given a statement, try to derive a proof for it. "Informal" introductions to logic and proof usually conclude by offering a procedure that takes assumptions to conclusions: to show the conjunction of two statements, show each statement individually; and so on. It should then come as no surprise that both formalisms here distill the bookkeeping of this process to its bare essentials: a turnstile [|-] dividing, on the left, what is known; on the right, what is being concluded.
We should make the distinction between the logic and metalogic clear. The logic is the propositional calculus - it is being implemented or hosted. The metalogic is the type theory Coq encodes, a variant on "minimal logic". (But more on type theories being logics below!) So the metalogical proposition [A, B |- C] states that, by the rules of the proof system, [C] can be derived from [A] and [B]. Note that our proofs remain in the metalogic - we'll see how to lower our proofs down to the level of the logic later.
Two systems are presented: natural deduction and the sequent calculus; we call them both proof systems. Both were defined by Gerhard Gentzen in his classic _Investigations into Logical Deduction_. (Other formalisms of deduction are available - Russell and Whitehead's _Principia_ begins by defining deduction rules; so does Gottlob Frege in his _Begriffsschrift_.)
Can you see how they're reasonable as models of deduction? We'll see them proving the correctness of families of deductions below. We'll then compare the two - but there is no clear-cut duality between them.
*)
Reserved Notation "A ]- B" (at level 90).
Inductive NatDed : list prop -> prop -> Prop :=
| N_Id : forall (L:list prop) (A:prop),
    (A :+ L ]- A)
| N_Impl_intro : forall (L:list prop) (A B:prop),
    (A :+ L ]- B) ->
    ( L ]- A => B)
| N_Impl_elim : forall (L:list prop) (A B:prop),
    (L ]- A => B) -> (L ]- A) ->
    (L ]- B)
| N_Interchange : forall (L M:list prop) (A B C:prop),
    (L +: A ++ B :+ M ]- C) ->
    (L +: B ++ A :+ M ]- C)
| N_Thinning : forall (L:list prop) (A B:prop),
    (L ]- B) ->
    (A :+ L ]- B)
| N_Contraction : forall (L:list prop) (A B:prop),
    (A :+ A :+ L ]- B) ->
    ( A :+ L ]- B)
where "A ]- B" := (NatDed A B).
Reserved Notation "A }- B" (at level 90).
Inductive Sequent : list prop -> list prop -> Prop :=
| S_Id : forall (L M:list prop) (A:prop),
    (A :+ L }- A :+ M)
| S_Impl_L : forall (L M:list prop) (A B:prop),
    (L }- A :+ M) ->
    (B :+ L }- M) ->
    (A => B :+ L }- M)
| S_Impl_R : forall (L M:list prop) (A B:prop),
    (A :+ L }- B :+ M) ->
    (L }- A => B :+ M)
| S_Interchange_L : forall (L M N:list prop) (A B:prop),
    (L +: A ++ B :+ M }- N) ->
    (L +: B ++ A :+ M }- N)
| S_Interchange_R : forall (L M N:list prop) (A B:prop),
    (L }- M ++ A :: B :+ N) ->
    (L }- M ++ B :: A :+ N)
| S_Thinning_L : forall (L M:list prop) (A:prop),
    (L }- M) ->
    (A :+ L }- M)
| S_Thinning_R : forall (L M:list prop) (A:prop),
    (L }- M) ->
    (L }- A :+ M)
| S_Contraction_L : forall (L M:list prop) (A:prop),
    (A :+ A :+ L }- M) ->
    ( A :+ L }- M)
| S_Contraction_R : forall (L M:list prop) (A:prop),
    (L }- A :+ A :+ M) ->
    (L }- A :+ M)
| S_Cut : forall (L M N O:list prop) (A:prop),
    (A :+ L }- M) -> (N }- A :+ O) ->
    (L ++ N }- M ++ O)
where "A }- B" := (Sequent A B).
(** A fine technical point involved here is the representation of lists of assumptions. As data types, what should we be able to do with them? Consider the options available to us:
- The set, the canonical representation. Sets don't work well as terms in types, though: term unification is not equivalence for sets. Unification does not take into account permutation - [{A, B, C}] does not unify with [{C, B, A}].
- The multiset: here, equivalence is _still_ not unification. However, multiple instances of an item may appear in a multiset, which unification does take into account: [{A, B} /= {A, A, B}]. Gallier's reference work, cited below, uses this feature in a more featureful logic, Girard's linear logic, where a class of assumptions can be used once (and only once). We don't necessarily want to deal with duplicates in our logic, but it turns out to make for a conveniently-available datatype.
- The list, where equivalence is unification. But now we need some other way, on a level other than the datatype, of sanctioning the reordering of assumptions. We choose to add them as rules on the level of the proof system: "structural" inference rules (the [Interchange] rules above) encode permutation. What about duplicating or discarding assumptions, which we should also be free to do? We add separate rules ([Contraction] and [Thinning]) for those, too. (These rules are given by Wadler, cited below.)
So we use the last representation.
*)
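(** Here is the cost of that choice in action, in a small demonstration: an assumption that isn't at the head of the list can't be used by [N_Id] until [N_Interchange] moves it there. *)
Example N_second_assumption : forall (A B:prop),
  (A :: B :: nil) ]- B.
Proof.
  intros A B.
  apply N_Interchange with (L:=nil); simpl.
  (* After the swap the goal is B :: A :: nil ]- B, and N_Id applies. *)
  apply N_Id.
Qed.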
(*
- To the right of a sequent's turnstile is a _disjunction_ of possible conclusions. An arbitrary number of terms can now appear to the left *and* to the right of a turnstile, unlike in systems of natural deduction.
How can we use this fact? We might admit the following inference rule for taking apart an "or":
X, Y, ... |- A \/ B
------------------- ? \/ R
X, Y, ... |- A, B
But there's a twist: strangely enough, this rule isn't the one that's usually defined. The usual rule for taking apart an "or" works on the left of the turnstile instead:
A, ... |- Z B, ... |- Z
-------------------------- \/ L
A \/ B, ... |- Z
Why is this?
- Can we turn ? \/ R into a typing rule? This seems problematic: what is ??? in the following?
X, Y, ... |- Left a : A \/ B
---------------------------- ? \/ R
X, Y, ... |- a : A, ??? : B
To which you might object: why not just have a : A, B? But for symmetry's sake we'd have to allow the same thing on the left side of the turnstile. And what would it mean for a term to have both type A and type B? (I'm not saying that this is inherently wrong by inspection; just that it hasn't been worked out yet.)
- Isn't there a good proof-theoretic reason not to admit this?
*)
(** ** What distinguishes natural deduction from the sequent calculus?
- A natural deduction turnstile [|-] can only have one conclusion; a sequent calculus deduction can have a disjunction of conclusions: the logical or of many propositions. Unfortunately the propositional logic doesn't have a logical [\/] to interact with the metalogical [\/], and in fact we could remove this feature from our formalisation. But we won't - this difference counterbalances another. And read Wadler's _Call-by-Value is dual to Call-by-Name_: as both logics grow, this core difference pushes their development in entirely different directions.
- For each connective (and here we only have [->]), each proof system provides qualitatively different rules.
- Rules in natural deduction describe transformations that go strictly from the left to the right of a [|-]. Rules in the sequent calculus occur in pairs, each sticking to one side of the turnstile: we have an [S_Impl_L] rule and an [S_Impl_R] rule that work on the left and the right respectively.
- Rules in natural deduction occur in pairs, corresponding to introduction and elimination (or constructor and accessor). In the sequent calculus, we only have construction rules.
- The sequent calculus has a [S_Cut] rule; natural deduction does not. Gallier (cited below) notes that this isn't required for propositional logic. In more full-featured logics, though, a Cut rule is required to show the consistency of the sequent calculus! (This is due to Wadler.)
We find that each comparison turns out asymmetrically: either natural deduction has a pair of rules and the sequent calculus does not, or vice-versa. And then, curiously enough, once all the comparisons are taken into account, the asymmetries balance out. Are these systems somehow duals - two sides of a symmetry? I don't have a meaningful conclusion there.
Another question to ask: how do these asymmetries end up preserving the "equivalence" of both the systems? We'll delay the answer for a bit.
- These asymmetries end up distinguishing the style of deductions done in each proof system.
- A consequence of the first comparison: a user of natural deduction looks at propositions on the left of the [|-] in order to construct propositions on the right of the [|-]. A user of the sequent calculus must stick to one side: looking at propositions on the left, creating propositions on the left; and vice-versa.
- And of the second. A user of natural deduction can both create and take apart connectives. A user of the sequent calculus can only create them!
The apparent missing features of each system balance each other out: a user of the sequent calculus can only create [->]s, but he can do so in order to create an assumption or a conclusion. A user of natural deduction can either create or destroy a [->], but he can only do so to create a conclusion.
Back to the question: are the systems equivalent in power? Yes - Gentzen has shown it; this result was used to show that natural deduction could be translated into the sequent calculus (and vice-versa), and so that both systems were either consistent or inconsistent together. (See Wadler for more.)
*)
(** ** Using the proof systems
What are some statements in this logic that we can show? These are fairly intuitive - but, as you'll see, they don't come immediately from the definitions. But be careful: we aren't giving proofs in propositional logic, but in Coq's logic; the proof terms are given in Coq's OCaml-like term language. We'll see how to endow propositional logic with its own proof language below.
In fact, the logical statements below have their actual propositions abstracted out. But this abstraction is done by a metalogical quantifier!
*)
Lemma N_cooccurrence_is_implication : forall (A B:prop),
(A :: B :: nil) ]- (A=>B).
Proof.
intros A B.
pose (N_Id (A :: nil) B) as P.
apply N_Interchange with (L:=nil) in P; simpl in P.
apply N_Impl_intro in P.
apply N_Thinning with (A:=A) in P.
apply P.
Qed.
Lemma S_cooccurrence_is_implication : forall (A B:prop),
(A :: B :: nil) }- (A=>B :: nil).
Proof.
intros A B.
pose (S_Id nil nil B) as P.
apply S_Thinning_L with (A:=A) in P.
apply S_Impl_R in P.
apply S_Thinning_L with (A:=A) in P.
apply P.
Qed.
Lemma N_modus_ponens : forall (A B:prop),
(A=>B :: A :: nil) ]- B.
Proof.
intros A B.
pose (N_Id nil A) as P.
pose (N_Id nil (A => B)) as Q.
eapply N_Impl_elim.
- (* We need to show A => B, A ]- A => B *)
apply N_Interchange with (L:=nil); simpl.
apply N_Thinning.
apply Q.
- (* We need to show A => B, A ]- A *)
apply N_Thinning.
apply P.
Qed.
Lemma S_modus_ponens : forall (A B:prop),
(A=>B :: A :: nil) }- (B :: nil).
Proof.
intros A B.
apply S_Impl_L.
- (* We need to show A }- A, B *)
apply S_Interchange_R with (M:=nil); simpl.
apply S_Thinning_R.
apply S_Id with (A:=A).
- (* We need to show B, A }- B *)
apply S_Interchange_L with (L:=nil); simpl.
apply S_Thinning_L.
apply S_Id with (A:=B).
Qed.
(** Note how awkward it is in Coq's tactic language to build a proof term by first giving smaller statements, and then constructing larger ones from them. Simultaneously, note how much easier it is to do the opposite: give a single large statement, and then fill in its arguments: Coq will allow you to give the arguments as late as you like, representing the obligation to do so as subgoals.
The relevant proof-theoretic distinction to make here is between forward and backward proof: forward proof moves - syntactically - from smaller bindings toward the final expression in tail position; backward proof moves from statements to subgoals. We say that Coq assists backward proof more than it does forward proof.
Forward proof in Coq requires working with local bindings. The tactics [pose] and [remember] then become useful - but they require you to give their values all at once. You can't delay giving them by turning them into subgoals; you can't use unification to infer implicit (curly-bracketed) arguments, either. Unless you use the tactic [assert]: but this requires you to immediately give the type of the term instead.*)
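(** To illustrate the contrast, here is [N_modus_ponens] redone in the backward style: no [pose]d bindings - each rule is applied directly against the current goal, and [eapply] leaves the choice of [A] to be filled in by unification. *)
Lemma N_modus_ponens_backwards : forall (A B:prop),
  (A=>B :: A :: nil) ]- B.
Proof.
  intros A B.
  eapply N_Impl_elim.
  - (* A => B, A ]- A => B: the implication heads the assumption list. *)
    apply N_Id.
  - (* A => B, A ]- A: swap the assumptions so that A heads the list. *)
    apply N_Interchange with (L:=nil); simpl.
    apply N_Id.
Qed.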
(**
* Adding a language of proof onto a language of propositions, by Curry-Howard
It's hard to introduce just one link the Curry-Howard correspondence makes, and then use that alone to justify the correspondence. So we'll introduce many of them at once, glossing them as required. We have
- propositions as types ([A => B], the function, is [A => B], [A] implying [B]);
- proof terms as terms in a computational calculus - in this case, the λ-calculus.
When we talk about "terms", we talk about the elements of the one language that acts both as proof language and computational language. "Proof language" and "computational language" will be used interchangeably.
- the type-of relation binding proof terms to their propositions in inference rules (equally, terms to their types in typing rules);
Hang in there - but what is a proof term? It is how we give, in the logic itself, proof for logical statements in the logic. Up until now we have given them in the _metalogic_: a Coq proof is a Coq function. So our propositional logic must rely on the metalogic for its proofs - until we introduce a proof language native to the logic itself.
[NT_cooccurrence_is_implication], defined below, is a metalogical proof (like the rest of the proofs here) for a metalogical proposition. Here is its proof term:
<<
Coq => Print NT_cooccurrence_is_implication.
--------------------------------------------
NT_cooccurrence_is_implication :
forall (A B:prop) (a:id) (b:term),
(Var a:A :: b:B :: nil) ]= (\a->b:A => B)
=
fun (A B : prop) (a : id) (b : term) =>
let P := NT_Id ((Var a : A) :+ nil) (b : B) in
(fun P0 : nil +: Var a : A +: b : B ]= b : B =>
(fun P1 : (b : B) :+ nil ]= \ a -> b : A => B =>
(fun P2 : (Var a : A) :+ (b : B) :+ nil ]= \ a -> b : A => B => P2)
(NT_Thinning ((b : B) :+ nil) (Var a : A) (\ a -> b : A => B) P1))
(NT_Impl_intro ((b : B) :+ nil) A B a b P0))
(NT_Interchange nil nil (b : B) (Var a : A) (b : B) P)
: forall (A B : prop) (a : id) (b : term),
(Var a : A) :+ (b : B) :+ nil ]= \ a -> b : A => B
>>
But read the metalogical proposition more closely: it proposes the correctness of a logical proof. What is the logical proof term? Metalogically assuming [Var a: A] and [b: B], it is [\a->b : A=>B], the logic-level proof term. (More on how a λ-calculus term can be a proof below.) We no longer have to prove things in the metalogic! It's scandalous: up until now, instead of giving (logical) proof, we've just had to give (metalogical) proof that a (logical) proof was possible! But now we're giving (metalogical) proof that a certain (logical) proof is correct.
Now that we know what a proof term is, we further complete the analogy.
- type checking is proof checking, and
- term evaluation corresponds - to what? To a process called proof normalisation, defined so as to simplify proof terms. And isn't evaluation itself a well-defined form of simplification?
A sidenote: note that term evaluation is sometimes formalised as term _normalisation_, a relation mapping terms into a subset of "normal forms". A text like Pierce et al.'s _Software Foundations_ covers this in the gory detail required for Coq.
Also note that proof normalisation turns a statement about the set of all proofs into a statement about a far smaller set: the set of normalised proofs. (Due to Gallier.) This technique helps Gentzen as he shows the consistency of the sequent calculus, and, by equivalence, natural deduction. (Due to Wadler.)
*)
(** ** The term language, and typing the term language - or the proof language, and the proofs of a proposition
Now let's encode the concepts introduced above - proof language, the type-of (or is-proof-of) relation, and so on - in the metalogic that is Coq.
The type-of relation becomes, in the metalogic, a proposition.*)
Inductive typing {proof:Set} : Set :=
| GiveType : proof -> prop -> typing.
Notation "a : A" := (GiveType a A) (at level 57).
(** Variable identifiers are a type of their own. (The opaqueness declaration prevents Coq from evaluating [id] to [string].) *)
Definition id : Set := string.
Opaque id.
(** Now the computational language - or the proof language.
Note that one proof language suffices for both proof systems, natural deduction and the sequent calculus. This language is - when given with its "usual" type system - the simply-typed λ-calculus. It seems the type language heavily determines the term language.
Could it be that the type language is _sufficient_ to determine the term language? This is true when we think of the term language as a proof language, but clearly not as a computational language, or a language to code in the large with. Conversely, it can hint at how to simplify a language: both Haskell and OCaml inherit the features that simplify them (expression-orientation, parametric polymorphism, type inference) from their ancestor language ML - a Meta-Language built to express metalogical proofs about languages.
*)
Inductive term : Set :=
| Var : id -> term
| Lam : id -> term -> term
| App : term -> term -> term.
Notation "\ x -> a" := (Lam x a) (at level 44, x at level 44).
Notation "f @ a" := (App f a) (at level 44, right associativity).
Definition tterm : Set := typing (proof:=term).
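(** A small illustration before we wire terms into the proof systems - the name [identity_claim] is a throwaway, for demonstration only. A [tterm] merely pairs a candidate proof term with a proposition; nothing yet checks that the pairing is justified. That is the job of the proof systems below. ([GiveType] is what the [:] notation denotes.) *)
Definition identity_claim (x:id) (A:prop) : tterm :=
  GiveType (\x -> Var x) (A => A).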
(**
Finally, we give the same proof systems above. But now note how, as they manipulate propositions, they simultaneously manipulate proof terms attached with a [:] to propositions - more on proof term manipulation below...
*)
Reserved Notation "A ]= B" (at level 90).
Inductive NatDedTypings : list tterm -> tterm -> Prop :=
| NT_Id : forall (L:list tterm) (A:tterm),
    (A :+ L ]= A)
| NT_Impl_intro : forall (L:list tterm) (A B:prop) (a:id) (b:term),
    (Var a:A :+ L ]= b : B) ->
    ( L ]= \a->b : A=>B)
| NT_Impl_elim : forall (L:list tterm) (A B:prop) (f a:term),
    (L ]= f : A => B) -> (L ]= a : A) ->
    (L ]= f@a : B)
| NT_Interchange : forall (L M:list tterm) (A B C:tterm),
    (L +: A ++ B :+ M ]= C) ->
    (L +: B ++ A :+ M ]= C)
| NT_Thinning : forall (L:list tterm) (A B:tterm),
    (L ]= B) ->
    (A :+ L ]= B)
| NT_Contraction : forall (L:list tterm) (A B:tterm),
    (A :+ A :+ L ]= B) ->
    ( A :+ L ]= B)
where "A ]= B" := (NatDedTypings A B).
Reserved Notation "A }= B" (at level 90).
Inductive SequentTypings : list tterm -> list tterm -> Prop :=
| ST_Id : forall (L M:list tterm) (A:tterm),
    (A :+ L }= A :+ M)
| ST_Impl_L : forall (L M:list tterm) (A B:prop) (a f:term),
    (L }= a:A :+ M) ->
    (f@a:B :+ L }= M) ->
    (f:A=>B :+ L }= M)
| ST_Impl_R : forall (L M:list tterm) (A B:prop) (a f:term),
    (a:A :+ L }= f@a : B :+ M) ->
    (L }= f : A=>B :+ M)
| ST_Interchange_L : forall (L M N:list tterm) (A B:tterm),
    (L +: A ++ B :+ M }= N) ->
    (L +: B ++ A :+ M }= N)
| ST_Interchange_R : forall (L M N:list tterm) (A B:tterm),
    (L }= M ++ A :: B :+ N) ->
    (L }= M ++ B :: A :+ N)
| ST_Thinning_L : forall (L M:list tterm) (A:tterm),
    (L }= M) ->
    (A :+ L }= M)
| ST_Thinning_R : forall (L M:list tterm) (A:tterm),
    (L }= M) ->
    (L }= A :+ M)
| ST_Contraction_L : forall (L M:list tterm) (A:tterm),
    (A :+ A :+ L }= M) ->
    ( A :+ L }= M)
| ST_Contraction_R : forall (L M:list tterm) (A:tterm),
    (L }= A :+ A :+ M) ->
    (L }= A :+ M)
| ST_Cut : forall (L M N O:list tterm) (A:tterm),
    (A :+ L }= M) -> (N }= A :+ O) ->
    (L ++ N }= M ++ O)
where "A }= B" := (SequentTypings A B).
(** A side note: [ST_Impl_L] was once defined as
<<
| ST_Impl_L : forall (L M:list tterm) (A B:prop) (a:id) (b:term),
    (L }= Var a:A :+ M) ->
    ( b:B :+ L }= M) ->
    (\a->b : A=>B :+ L }= M)
>>
[ST_Impl_R] was defined analogously, too.
This is a plausible rule, but it didn't make the proof of [ST_modus_ponens] below go through without substantial changes: [ST_Impl_L] couldn't be used at the same point it was used when showing [S_modus_ponens]. So we conclude: metalogical proofs of a logical proposition shouldn't change in order to give logical proofs of the same proposition!
*)
(** ** Using the proof systems to give proofs
We promised above to clarify how terms in the λ-calculus can be interpreted as proofs. There is, in fact, a canonical interpretation of this process - the Brouwer-Heyting-Kolmogorov interpretation of a constructive proof of a proposition: for the conjunction of two propositions, give a proof for both of them, and so on...
This interpretation is, more precisely, given for intuitionistic logic. It is a superset of our propositional logic, adding extra connectives (logical [/\], and so on) and extra reasoning principles (a representation of falsity, and an axiom encoding the principle of explosion). See Gallier cited below for further details. But we assume the interpretation doesn't go astray for propositional logic (without giving formal proof of this), since all statements that can be proved in the propositional logic can be proved in intuitionistic logic as well. The contrapositive clearly is true as well - if something can't be proved in intuitionistic logic, it can't be shown in propositional logic, either.
The exact sequence of inference rules we use to deduce these propositions remains the same - not surprising, since the inference rules themselves have remained the same, rule for rule! We've only just changed uses of the type [prop] to the type [tterm].*)
Lemma NT_cooccurrence_is_implication :
forall (A B:prop) (a:id) (b:term),
(Var a:A :: b:B :: nil) ]= (\a->b:A => B).
Proof.
intros A B a b.
pose (NT_Id (Var a:A :: nil) (b:B)) as P.
apply NT_Interchange with (L:=nil) in P; simpl in P.
apply NT_Impl_intro in P.
apply NT_Thinning with (A:=Var a:A) in P.
apply P.
Qed.
(** As we proceed through the steps of the metalogical proof, we find that we're slowly building up a logic-level program to the left of the [:]. By the time we reach the [Qed], the program is not only finished, but type-correct given the assumed typings to the left of the turnstile. So we add one more Curry-Howard consequence:
- proof term derivations are program transformations.
As we've said in an earlier preview: here, the program we've derived in our natural-deduction system is given in the type. It is [\a->b : A=>B], having metalogically assumed [Var a:A] and [b:B].
Here are the rest of the proofs, equivalent to the ones given for the term-free proof systems above.*)
Lemma ST_cooccurrence_is_implication :
forall (A B:prop) (a:id) (b:term),
(Var a:A :: b:B :: nil) }= (\a->b:A=>B :: nil).
Proof.
intros A B a b.
pose (ST_Id nil nil (b:B)) as P.
apply ST_Thinning_L with (A:=Var a:A) in P.
apply ST_Impl_R in P.
apply ST_Thinning_L with (A:=Var a:A) in P.
apply P.
Qed.
Lemma NT_modus_ponens : forall (A B:prop) (f a:term),
(f:A=>B :: a:A :: nil) ]= f@a:B.
Proof.
intros A B f a.
pose (NT_Id nil (a:A)) as P.
pose (NT_Id nil (f: A=>B)) as Q.
eapply NT_Impl_elim.
- (* We need to show, with terms removed, A => B, A ]- A => B *)
apply NT_Interchange with (L:=nil); simpl.
apply NT_Thinning.
apply Q.
- (* We need to show, with terms removed, A => B, A ]- A *)
apply NT_Thinning.
apply P.
Qed.
Lemma ST_modus_ponens : forall (A B:prop) (f a:term),
(f:A=>B :: a:A :: nil) }= f@a:B :: nil.
Proof.
intros A B f a.
apply ST_Impl_L with (a:=a).
- (* We need to show a:A }= a:A, f@a:B *)
apply ST_Interchange_R with (M:=nil); simpl.
apply ST_Thinning_R.
apply ST_Id.
- (* We need to show f@a:B, a:A }= f@a:B *)
apply ST_Interchange_L with (L:=nil); simpl.
apply ST_Thinning_L.
apply ST_Id.
Qed.
(**
That should be the end of the taster - short, but broad and precise.
A final parting note - we take the ability to use the metalogic to check logic-level proofs for granted. But -
- This comes at the cost of spelling out our proofs in painful detail.
- Clearly the vast majority of proofs aren't checked in any metalogic, and can't immediately be converted into a checkable form: they are given at a more abstract level. ("Proposition: if [a:A], [eval a : A]. Proof. By induction on terms of the language; the induction hypothesis is only used in the cases where...")
- And who checks the metalogic's proofs, anyhow?*)
(** ** Credits
I knit together a few observations in this work. None of them are mine.
Some of the ideas used above are considered standard. (The named ones clearly are.) For these ideas the citations given below are references, rather than introductions, to them.
- The sequent calculus as an alternative to natural deduction.
I originally learned of this from Wadler's _Call-by-Value is Dual to Call-by-Name_: it shows what Gentzen "designed" both proof systems for, and what he was able to do with each of the two systems.
- Sometimes you get a language from its typing rules
Staring at the typing rules in order to define your computational language is a process I learned from Gallier's _Constructive Logics Part I: A Tutorial on Proof Systems and Typed Lambda-Calculi_. (I've never found a Part II.) It deals with logics up to intuitionistic logic, and gives the corresponding lambda-calculi as they increase in complexity.
After reaching that point, he changes focus to proof-theoretic results - a pity, since the real excitement starts when you try to identify calculi for classical logic (Peirce's law as [call/cc]), or for linear logic, for example. A series of lecture notes by Frank Pfenning et al. outlines this process, but I've lost the reference.
- If your collections of assumptions are lists, rather than sets, you'll have to give permutation rules in the proof system.
Due to Wadler's paper above, again. Gallier uses multisets without giving a good technical description of them.
*)