Some chapters from the Software Foundations book.
(** * Hoare2: Hoare Logic, Part II *)
Require Export Hoare.
(* ####################################################### *)
(** * Decorated Programs *)
(** The beauty of Hoare Logic is that it is _compositional_ --
the structure of proofs exactly follows the structure of programs.
This suggests that we can record the essential ideas of a proof
informally (leaving out some low-level calculational details) by
decorating programs with appropriate assertions around each
statement. Such a _decorated program_ carries with it
an (informal) proof of its own correctness. *)
(** For example, here is a complete decorated program: *)
(**
{{ True }} ->>
{{ m = m }}
X ::= m;
{{ X = m }} ->>
{{ X = m /\ p = p }}
Z ::= p;
{{ X = m /\ Z = p }} ->>
{{ Z - X = p - m }}
WHILE X <> 0 DO
{{ Z - X = p - m /\ X <> 0 }} ->>
{{ (Z - 1) - (X - 1) = p - m }}
Z ::= Z - 1;
{{ Z - (X - 1) = p - m }}
X ::= X - 1
{{ Z - X = p - m }}
END;
{{ Z - X = p - m /\ ~ (X <> 0) }} ->>
{{ Z = p - m }}
*)
(** Concretely, a decorated program consists of the program text
interleaved with assertions. To check that a decorated program
represents a valid proof, we check that each individual command is
_locally consistent_ with its accompanying assertions in the
following sense: *)
(**
- [SKIP] is locally consistent if its precondition and
postcondition are the same:
{{ P }}
SKIP
{{ P }}
- The sequential composition of [c1] and [c2] is locally
consistent (with respect to assertions [P] and [R]) if [c1] is
locally consistent (with respect to [P] and [Q]) and [c2] is
locally consistent (with respect to [Q] and [R]):
{{ P }}
c1;
{{ Q }}
c2
{{ R }}
- An assignment is locally consistent if its precondition is
the appropriate substitution of its postcondition:
{{ P [X |-> a] }}
X ::= a
{{ P }}
- A conditional is locally consistent (with respect to assertions
[P] and [Q]) if the assertions at the top of its "then" and
"else" branches are exactly [P /\ b] and [P /\ ~b] and if its "then"
branch is locally consistent (with respect to [P /\ b] and [Q])
and its "else" branch is locally consistent (with respect to
[P /\ ~b] and [Q]):
{{ P }}
IFB b THEN
{{ P /\ b }}
c1
{{ Q }}
ELSE
{{ P /\ ~b }}
c2
{{ Q }}
FI
{{ Q }}
- A while loop with precondition [P] is locally consistent if its
postcondition is [P /\ ~b] and if the pre- and postconditions of
its body are exactly [P /\ b] and [P]:
{{ P }}
WHILE b DO
{{ P /\ b }}
c1
{{ P }}
END
{{ P /\ ~b }}
- A pair of assertions separated by [->>] is locally consistent if
the first implies the second (in all states):
{{ P }} ->>
{{ P' }}
This corresponds to the application of [hoare_consequence] and
is the only place in a decorated program where checking if
decorations are correct is not fully mechanical and syntactic,
but involves logical and/or arithmetic reasoning.
*)
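(** For reference, the formal counterparts of these local-consistency
rules are just the Hoare rules we already have from the exported
[Hoare] chapter. This is only an illustrative cross-reference, using
the rule names that appear later in this file: *)
Check hoare_skip.
Check hoare_seq.
Check hoare_asgn.
Check hoare_if.
Check hoare_while.
Check hoare_consequence.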
(** We have seen above how _verifying_ the correctness of a
given proof involves checking that every single command is locally
consistent with the accompanying assertions. If we are instead
interested in _finding_ a proof for a given specification we need
to discover the right assertions. This can be done in an almost
automatic way, with the exception of finding loop invariants,
which is the subject of the next section. In the remainder of
this section we explain in detail how to construct decorations for
several simple programs that don't involve non-trivial loop
invariants. *)
(* ####################################################### *)
(** ** Example: Swapping Using Addition and Subtraction *)
(** Here is a program that swaps the values of two variables using
addition and subtraction (instead of by assigning to a temporary
variable).
X ::= X + Y;
Y ::= X - Y;
X ::= X - Y
We can prove using decorations that this program is correct --
i.e., it always swaps the values of variables [X] and [Y]. *)
(**
(1) {{ X = m /\ Y = n }} ->>
(2) {{ (X + Y) - ((X + Y) - Y) = n /\ (X + Y) - Y = m }}
X ::= X + Y;
(3) {{ X - (X - Y) = n /\ X - Y = m }}
Y ::= X - Y;
(4) {{ X - Y = n /\ Y = m }}
X ::= X - Y
(5) {{ X = n /\ Y = m }}
The decorations were constructed as follows:
- We begin with the undecorated program (the unnumbered lines).
- We then add the specification -- i.e., the outer
precondition (1) and postcondition (5). In the precondition we
use auxiliary variables (parameters) [m] and [n] to remember
the initial values of variables [X] and respectively [Y], so
that we can refer to them in the postcondition (5).
- We work backwards mechanically starting from (5) all the way
to (2). At each step, we obtain the precondition of the
assignment from its postcondition by substituting the assigned
variable with the right-hand-side of the assignment. For
instance, we obtain (4) by substituting [X] with [X - Y]
in (5), and (3) by substituting [Y] with [X - Y] in (4).
- Finally, we verify that (1) logically implies (2) -- i.e.,
that the step from (1) to (2) is a valid use of the law of
consequence. For this we substitute [X] by [m] and [Y] by [n]
and calculate as follows:
(m + n) - ((m + n) - n) = n /\ (m + n) - n = m
(m + n) - m = n /\ m = m
n = n /\ m = m
(Note that, since we are working with natural numbers, not
fixed-size machine integers, we don't need to worry about the
possibility of arithmetic overflow anywhere in this argument.)
*)
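(** As a sanity check (this lemma is an illustrative addition, not
part of the original development), the arithmetic used in this
consequence step can be discharged directly in Coq with the [omega]
tactic, which is available here via the exported [Hoare] chapter: *)
Lemma swap_consequence_check : forall m n : nat,
  (m + n) - ((m + n) - n) = n /\ (m + n) - n = m.
Proof. intros m n. split; omega. Qed.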
(* ####################################################### *)
(** ** Example: Simple Conditionals *)
(** Here is a simple decorated program using conditionals:
(1) {{True}}
IFB X <= Y THEN
(2) {{True /\ X <= Y}} ->>
(3) {{(Y - X) + X = Y \/ (Y - X) + Y = X}}
Z ::= Y - X
(4) {{Z + X = Y \/ Z + Y = X}}
ELSE
(5) {{True /\ ~(X <= Y) }} ->>
(6) {{(X - Y) + X = Y \/ (X - Y) + Y = X}}
Z ::= X - Y
(7) {{Z + X = Y \/ Z + Y = X}}
FI
(8) {{Z + X = Y \/ Z + Y = X}}
These decorations were constructed as follows:
- We start with the outer precondition (1) and postcondition (8).
- We follow the format dictated by the [hoare_if] rule and copy the
postcondition (8) to (4) and (7). We conjoin the precondition (1)
with the guard of the conditional to obtain (2). We conjoin (1)
with the negated guard of the conditional to obtain (5).
- In order to use the assignment rule and obtain (3), we substitute
[Z] by [Y - X] in (4). To obtain (6) we substitute [Z] by [X - Y]
in (7).
- Finally, we verify that (2) implies (3) and (5) implies (6). Both
of these implications crucially depend on the ordering of [X] and
[Y] obtained from the guard. For instance, knowing that [X <= Y]
ensures that subtracting [X] from [Y] and then adding back [X]
produces [Y], as required by the first disjunct of (3). Similarly,
knowing that [~(X <= Y)] ensures that subtracting [Y] from [X] and
then adding back [Y] produces [X], as needed by the second
disjunct of (6). Note that [n - m + m = n] does _not_ hold for
arbitrary natural numbers [n] and [m] (for example, [3 - 5 + 5 =
5]). *)
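(** The two subtraction facts used in these implications can likewise
be checked in Coq (illustrative lemmas, not part of the original
text); [omega] handles both: *)
Lemma sub_add_when_le : forall x y : nat,
  x <= y -> (y - x) + x = y.
Proof. intros x y H. omega. Qed.

Lemma sub_add_when_gt : forall x y : nat,
  ~ (x <= y) -> (x - y) + y = x.
Proof. intros x y H. omega. Qed.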
(** **** Exercise: 2 stars (if_minus_plus_reloaded) *)
(** Fill in valid decorations for the following program: *)
(* ####################################################### *)
(** ** Example: Reduce to Zero (Trivial Loop) *)
(** Here is a [WHILE] loop that is so simple it needs no
invariant (i.e., the invariant [True] will do the job).
(1) {{ True }}
WHILE X <> 0 DO
(2) {{ True /\ X <> 0 }} ->>
(3) {{ True }}
X ::= X - 1
(4) {{ True }}
END
(5) {{ True /\ X = 0 }} ->>
(6) {{ X = 0 }}
The decorations can be constructed as follows:
- Start with the outer precondition (1) and postcondition (6).
- Following the format dictated by the [hoare_while] rule, we copy
(1) to (4). We conjoin (1) with the guard to obtain (2) and with
the negation of the guard to obtain (5). Note that, because the
outer postcondition (6) does not syntactically match (5), we need a
trivial use of the consequence rule from (5) to (6).
- Assertion (3) is the same as (4), because [X] does not appear in
(4), so the substitution in the assignment rule is trivial.
- Finally, the implication between (2) and (3) is also trivial.
*)
(** From this informal proof, it is easy to read off a formal proof
using the Coq versions of the Hoare rules. Note that we do _not_
unfold the definition of [hoare_triple] anywhere in this proof --
the idea is to use the Hoare rules as a "self-contained" logic for
reasoning about programs. *)
Definition reduce_to_zero' : com :=
WHILE BNot (BEq (AId X) (ANum 0)) DO
X ::= AMinus (AId X) (ANum 1)
END.
Theorem reduce_to_zero_correct' :
{{fun st => True}}
reduce_to_zero'
{{fun st => st X = 0}}.
Proof.
unfold reduce_to_zero'.
(* First we need to transform the postcondition so
that hoare_while will apply. *)
eapply hoare_consequence_post.
apply hoare_while.
Case "Loop body preserves invariant".
(* Need to massage precondition before [hoare_asgn] applies *)
eapply hoare_consequence_pre. apply hoare_asgn.
(* Proving trivial implication (2) ->> (3) *)
intros st [HT Hbp]. unfold assn_sub. apply I.
Case "Invariant and negated guard imply postcondition".
intros st [Inv GuardFalse].
unfold bassn in GuardFalse. simpl in GuardFalse.
(* SearchAbout helps to find the right lemmas *)
SearchAbout [not true].
rewrite not_true_iff_false in GuardFalse.
SearchAbout [negb false].
rewrite negb_false_iff in GuardFalse.
SearchAbout [beq_nat true].
apply beq_nat_true in GuardFalse.
apply GuardFalse. Qed.
(* ####################################################### *)
(** ** Example: Division *)
(** The following Imp program calculates the integer division and
remainder of two numbers [m] and [n] that are arbitrary constants
in the program.
X ::= m;
Y ::= 0;
WHILE n <= X DO
X ::= X - n;
Y ::= Y + 1
END;
In other words, if we replace [m] and [n] by concrete numbers and
execute the program, it will terminate with the variable [X] set
to the remainder when [m] is divided by [n] and [Y] set to the
quotient. *)
(** In order to give a specification to this program we need to
remember that dividing [m] by [n] produces a remainder [X] and a
quotient [Y] such that [n * Y + X = m /\ X < n].
It turns out that we get lucky with this program and don't have to
think very hard about the loop invariant: the invariant is just the
first conjunct [n * Y + X = m], so we use that to decorate
the program.
(1) {{ True }} ->>
(2) {{ n * 0 + m = m }}
X ::= m;
(3) {{ n * 0 + X = m }}
Y ::= 0;
(4) {{ n * Y + X = m }}
WHILE n <= X DO
(5) {{ n * Y + X = m /\ n <= X }} ->>
(6) {{ n * (Y + 1) + (X - n) = m }}
X ::= X - n;
(7) {{ n * (Y + 1) + X = m }}
Y ::= Y + 1
(8) {{ n * Y + X = m }}
END
(9) {{ n * Y + X = m /\ X < n }}
Assertions (4), (5), (8), and (9) are derived mechanically from
the invariant and the loop's guard. Assertions (8), (7), and (6)
are derived using the assignment rule going backwards from (8) to
(6). Assertions (4), (3), and (2) are again backwards applications
of the assignment rule.
Now that we've decorated the program it only remains to check that
the two uses of the consequence rule are correct -- i.e., that (1)
implies (2) and that (5) implies (6). This is indeed the case, so
we have a valid decorated program.
*)
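(** Both consequence steps can be confirmed by small Coq lemmas
(illustrative additions, not part of the original development).
Step (1) ->> (2) is just [n * 0 + m = m]; for step (5) ->> (6), once
[n * (Y + 1)] is rewritten to [n * Y + n], what remains is the purely
linear fact below, with [q] standing for [n * Y]: *)
Lemma div_pre_check : forall n m : nat, n * 0 + m = m.
Proof. intros n m. rewrite mult_0_r. reflexivity. Qed.

Lemma div_body_check : forall q n x m : nat,
  q + x = m -> n <= x -> (q + n) + (x - n) = m.
Proof. intros. omega. Qed.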
(* ####################################################### *)
(** * Finding Loop Invariants *)
(** Once the outermost precondition and postcondition are chosen, the
only creative part in verifying programs with Hoare Logic is
finding the right loop invariants. The reason this is difficult
is the same as the reason that doing inductive mathematical proofs
requires creativity: strengthening the loop invariant (or the
induction hypothesis) means that you have a stronger assumption to
work with when trying to establish the postcondition of the loop
body (complete the induction step of the proof), but it also means
that the loop body postcondition itself is harder to prove!
This section is dedicated to teaching you how to approach the
challenge of finding loop invariants using a series of examples
and exercises. *)
(** ** Example: Slow Subtraction *)
(** The following program subtracts the value of [X] from the value of
[Y] by repeatedly decrementing both [X] and [Y]. We want to verify its
correctness with respect to the following specification:
{{ X = m /\ Y = n }}
WHILE X <> 0 DO
Y ::= Y - 1;
X ::= X - 1
END
{{ Y = n - m }}
To verify this program we need to find an invariant [I] for the
loop. As a first step we can leave [I] as an unknown and build a
_skeleton_ for the proof by applying backward the rules for local
consistency. This process leads to the following skeleton:
(1) {{ X = m /\ Y = n }} ->> (a)
(2) {{ I }}
WHILE X <> 0 DO
(3) {{ I /\ X <> 0 }} ->> (c)
(4) {{ I[X |-> X-1][Y |-> Y-1] }}
Y ::= Y - 1;
(5) {{ I[X |-> X-1] }}
X ::= X - 1
(6) {{ I }}
END
(7) {{ I /\ ~(X <> 0) }} ->> (b)
(8) {{ Y = n - m }}
By examining this skeleton, we can see that any valid [I] will
have to respect three conditions:
- (a) it must be weak enough to be implied by the loop's
precondition, i.e. (1) must imply (2);
- (b) it must be strong enough to imply the loop's postcondition,
i.e. (7) must imply (8);
- (c) it must be preserved by one iteration of the loop, i.e. (3)
must imply (4). *)
(** These conditions are actually independent of the particular
program and specification we are considering. Indeed, every loop
invariant has to satisfy them. One way to find an invariant that
simultaneously satisfies these three conditions is by using an
iterative process: start with a "candidate" invariant (e.g. a
guess or a heuristic choice) and check the three conditions above;
if any of the checks fails, try to use the information that we get
from the failure to produce another (hopefully better) candidate
invariant, and repeat the process.
For instance, in the reduce-to-zero example above, we saw that,
for a very simple loop, choosing [True] as an invariant did the
job. So let's try it again here! I.e., let's instantiate [I] with
[True] in the skeleton above and see what we get...
(1) {{ X = m /\ Y = n }} ->> (a - OK)
(2) {{ True }}
WHILE X <> 0 DO
(3) {{ True /\ X <> 0 }} ->> (c - OK)
(4) {{ True }}
Y ::= Y - 1;
(5) {{ True }}
X ::= X - 1
(6) {{ True }}
END
(7) {{ True /\ X = 0 }} ->> (b - WRONG!)
(8) {{ Y = n - m }}
While conditions (a) and (c) are trivially satisfied,
condition (b) is wrong, i.e. it is not the case that (7) [True /\
X = 0] implies (8) [Y = n - m]. In fact, the two assertions are
completely unrelated and it is easy to find a counterexample (say,
[Y = X = m = 0] and [n = 1]).
If we want (b) to hold, we need to strengthen the invariant so
that it implies the postcondition (8). One very simple way to do
this is to let the invariant _be_ the postcondition. So let's
return to our skeleton, instantiate [I] with [Y = n - m], and
check conditions (a) to (c) again.
(1) {{ X = m /\ Y = n }} ->> (a - WRONG!)
(2) {{ Y = n - m }}
WHILE X <> 0 DO
(3) {{ Y = n - m /\ X <> 0 }} ->> (c - WRONG!)
(4) {{ Y - 1 = n - m }}
Y ::= Y - 1;
(5) {{ Y = n - m }}
X ::= X - 1
(6) {{ Y = n - m }}
END
(7) {{ Y = n - m /\ X = 0 }} ->> (b - OK)
(8) {{ Y = n - m }}
This time, condition (b) holds trivially, but (a) and (c) are
broken. Condition (a) requires that (1) [X = m /\ Y = n]
implies (2) [Y = n - m]. If we substitute [X] by [m] we have to
show that [m = n - m] for arbitrary [m] and [n], which does not
hold (for instance, when [m = n = 1]). Condition (c) requires that
[n - m - 1 = n - m], which fails, for instance, for [n = 1] and [m =
0]. So, although [Y = n - m] holds at the end of the loop, it does
not hold from the start, and it doesn't hold on each iteration;
it is not a correct invariant.
This failure is not very surprising: the variable [Y] changes
during the loop, while [m] and [n] are constant, so the assertion
we chose didn't have much chance of being an invariant!
To do better, we need to generalize (8) to some statement that is
equivalent to (8) when [X] is [0], since this will be the case
when the loop terminates, and that "fills the gap" in some
appropriate way when [X] is nonzero. Looking at how the loop
works, we can observe that [X] and [Y] are decremented together
until [X] reaches [0]. So, if [X = 2] and [Y = 5] initially,
after one iteration of the loop we obtain [X = 1] and [Y = 4];
after two iterations [X = 0] and [Y = 3]; and then the loop stops.
Notice that the difference between [Y] and [X] stays constant
between iterations; initially, [Y = n] and [X = m], so this
difference is always [n - m]. So let's try instantiating [I] in
the skeleton above with [Y - X = n - m].
(1) {{ X = m /\ Y = n }} ->> (a - OK)
(2) {{ Y - X = n - m }}
WHILE X <> 0 DO
(3) {{ Y - X = n - m /\ X <> 0 }} ->> (c - OK)
(4) {{ (Y - 1) - (X - 1) = n - m }}
Y ::= Y - 1;
(5) {{ Y - (X - 1) = n - m }}
X ::= X - 1
(6) {{ Y - X = n - m }}
END
(7) {{ Y - X = n - m /\ X = 0 }} ->> (b - OK)
(8) {{ Y = n - m }}
Success! Conditions (a), (b) and (c) all hold now. (To
verify (c), we need to check that, under the assumption that [X <>
0], we have [Y - X = (Y - 1) - (X - 1)]; this holds for all
natural numbers [X] and [Y].) *)
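(** The only interesting check, condition (c), is a small piece of
arithmetic that Coq can confirm directly (an illustrative lemma,
again relying on [omega]). The hypothesis [x <> 0] is genuinely
needed: for [x = 0] and [y = 1] the conclusion fails. *)
Lemma subtract_slowly_inv_check : forall x y n m : nat,
  y - x = n - m -> x <> 0 -> (y - 1) - (x - 1) = n - m.
Proof. intros x y n m H Hx. omega. Qed.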
(* ####################################################### *)
(** ** Exercise: Slow Assignment *)
(** **** Exercise: 2 stars (slow_assignment) *)
(** A roundabout way of assigning a number currently stored in [X] to
the variable [Y] is to start [Y] at [0], then decrement [X] until
it hits [0], incrementing [Y] at each step. Here is a program that
implements this idea:
{{ X = m }}
Y ::= 0;
WHILE X <> 0 DO
X ::= X - 1;
Y ::= Y + 1;
END
{{ Y = m }}
Write an informal decorated program showing that this is correct. *)
(* FILL IN HERE *)
(** [] *)
(* ####################################################### *)
(** ** Exercise: Slow Addition *)
(** **** Exercise: 3 stars, optional (add_slowly_decoration) *)
(** The following program adds the variable X into the variable Z
by repeatedly decrementing X and incrementing Z.
WHILE X <> 0 DO
Z ::= Z + 1;
X ::= X - 1
END
Following the pattern of the [subtract_slowly] example above, pick
a precondition and postcondition that give an appropriate
specification of [add_slowly]; then (informally) decorate the
program accordingly. *)
(* FILL IN HERE *)
(** [] *)
(* ####################################################### *)
(** ** Example: Parity *)
(** Here is a cute little program for computing the parity of the
value initially stored in [X] (due to Daniel Cristofani).
{{ X = m }}
WHILE 2 <= X DO
X ::= X - 2
END
{{ X = parity m }}
The mathematical [parity] function used in the specification is
defined in Coq as follows: *)
Fixpoint parity x :=
match x with
| 0 => 0
| 1 => 1
| S (S x') => parity x'
end.
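(** A couple of quick sanity checks on this definition (illustrative
examples only): *)
Example parity_4 : parity 4 = 0.
Proof. reflexivity. Qed.
Example parity_7 : parity 7 = 1.
Proof. reflexivity. Qed.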
(** The postcondition does not hold at the beginning of the loop,
since [m = parity m] does not hold for an arbitrary [m], so we
cannot use that as an invariant. To find an invariant that works,
let's think a bit about what this loop does. On each iteration it
decrements [X] by [2], which preserves the parity of [X]. So the
parity of [X] does not change, i.e. it is invariant. The initial
value of [X] is [m], so the parity of [X] is always equal to the
parity of [m]. Using [parity X = parity m] as an invariant we
obtain the following decorated program:
{{ X = m }} ->> (a - OK)
{{ parity X = parity m }}
WHILE 2 <= X DO
{{ parity X = parity m /\ 2 <= X }} ->> (c - OK)
{{ parity (X-2) = parity m }}
X ::= X - 2
{{ parity X = parity m }}
END
{{ parity X = parity m /\ X < 2 }} ->> (b - OK)
{{ X = parity m }}
With this invariant, conditions (a), (b), and (c) are all
satisfied. For verifying (b), we observe that, when [X < 2], we
have [parity X = X] (we can easily see this in the definition of
[parity]). For verifying (c), we observe that, when [2 <= X], we
have [parity X = parity (X-2)]. *)
(** **** Exercise: 3 stars, optional (parity_formal) *)
(** Translate this proof to Coq. Refer to the reduce-to-zero example
for ideas. You may find the following two lemmas useful: *)
Lemma parity_ge_2 : forall x,
2 <= x ->
parity (x - 2) = parity x.
Proof.
induction x; intro. reflexivity.
destruct x. inversion H. inversion H1.
simpl. rewrite <- minus_n_O. reflexivity.
Qed.
Lemma parity_lt_2 : forall x,
~ 2 <= x ->
parity (x) = x.
Proof.
intros. induction x. reflexivity. destruct x. reflexivity.
apply ex_falso_quodlibet. apply H. omega.
Qed.
Theorem parity_correct : forall m,
{{ fun st => st X = m }}
WHILE BLe (ANum 2) (AId X) DO
X ::= AMinus (AId X) (ANum 2)
END
{{ fun st => st X = parity m }}.
Proof.
intros m. eapply hoare_consequence.
apply (hoare_while (fun st => parity (st X) = parity m)). eapply hoare_consequence_pre.
apply hoare_asgn.
unfold assert_implies, assn_sub, bassn, update. simpl. intros. inversion H.
rewrite <- H0. apply parity_ge_2. apply ble_nat_true. assumption.
unfold assert_implies. intros. subst. reflexivity.
unfold assert_implies, bassn. simpl. intros. inversion H. apply not_true_is_false in H1.
rewrite <- H0. symmetry. apply parity_lt_2. apply ble_nat_false. assumption.
Qed.
(** [] *)
(* ####################################################### *)
(** ** Example: Finding Square Roots *)
(** The following program computes the square root of [X]
by naive iteration:
{{ X=m }}
Z ::= 0;
WHILE (Z+1)*(Z+1) <= X DO
Z ::= Z+1
END
{{ Z*Z<=m /\ m<(Z+1)*(Z+1) }}
*)
(** As above, we can try to use the postcondition as a candidate
invariant, obtaining the following decorated program:
(1) {{ X=m }} ->> (a - second conjunct of (2) WRONG!)
(2) {{ 0*0 <= m /\ m<1*1 }}
Z ::= 0;
(3) {{ Z*Z <= m /\ m<(Z+1)*(Z+1) }}
WHILE (Z+1)*(Z+1) <= X DO
(4) {{ Z*Z<=m /\ (Z+1)*(Z+1)<=X }} ->> (c - WRONG!)
(5) {{ (Z+1)*(Z+1)<=m /\ m<(Z+2)*(Z+2) }}
Z ::= Z+1
(6) {{ Z*Z<=m /\ m<(Z+1)*(Z+1) }}
END
(7) {{ Z*Z<=m /\ m<(Z+1)*(Z+1) /\ X<(Z+1)*(Z+1) }} ->> (b - OK)
(8) {{ Z*Z<=m /\ m<(Z+1)*(Z+1) }}
This didn't work very well: both conditions (a) and (c) failed.
Looking at condition (c), we see that the second conjunct of (4)
is almost the same as the first conjunct of (5), except that (4)
mentions [X] while (5) mentions [m]. But note that [X] is never
assigned in this program, so we should have [X=m], but we didn't
propagate this information from (1) into the loop invariant.
Also, looking at the second conjunct of (8), it seems quite
hopeless as an invariant -- and we don't even need it, since we
can obtain it from the negation of the guard (third conjunct
in (7)), again under the assumption that [X=m].
So we now try [X=m /\ Z*Z <= m] as the loop invariant:
{{ X=m }} ->> (a - OK)
{{ X=m /\ 0*0 <= m }}
Z ::= 0;
{{ X=m /\ Z*Z <= m }}
WHILE (Z+1)*(Z+1) <= X DO
{{ X=m /\ Z*Z<=m /\ (Z+1)*(Z+1)<=X }} ->> (c - OK)
{{ X=m /\ (Z+1)*(Z+1)<=m }}
Z ::= Z+1
{{ X=m /\ Z*Z<=m }}
END
{{ X=m /\ Z*Z<=m /\ X<(Z+1)*(Z+1) }} ->> (b - OK)
{{ Z*Z<=m /\ m<(Z+1)*(Z+1) }}
This works, since conditions (a), (b), and (c) are now all
trivially satisfied.
Very often, if a variable is used in a loop in a read-only
fashion (i.e., it is referred to by the program or by the
specification and it is not changed by the loop) it is necessary
to add the fact that it doesn't change to the loop invariant. *)
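(** With the squared expression treated as an opaque quantity [s],
condition (c) here involves no arithmetic at all; it is just the
following trivial fact (an illustrative check, with [s] standing for
[(Z+1)*(Z+1)]): *)
Lemma sqrt_inv_check : forall x m s : nat,
  x = m -> s <= x -> s <= m.
Proof. intros x m s Hxm Hs. subst. assumption. Qed.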
(* ####################################################### *)
(** ** Example: Squaring *)
(** Here is a program that squares [X] by repeated addition:
{{ X = m }}
Y ::= 0;
Z ::= 0;
WHILE Y <> X DO
Z ::= Z + X;
Y ::= Y + 1
END
{{ Z = m*m }}
*)
(** The first thing to note is that the loop reads [X] but doesn't
change its value. As we saw in the previous example, in such cases
it is a good idea to add [X = m] to the invariant. The other thing
we often use in the invariant is the postcondition, so let's add
that too, leading to the invariant candidate [Z = m * m /\ X = m].
{{ X = m }} ->> (a - WRONG)
{{ 0 = m*m /\ X = m }}
Y ::= 0;
{{ 0 = m*m /\ X = m }}
Z ::= 0;
{{ Z = m*m /\ X = m }}
WHILE Y <> X DO
{{ Z = m*m /\ X = m /\ Y <> X }} ->> (c - WRONG)
{{ Z+X = m*m /\ X = m }}
Z ::= Z + X;
{{ Z = m*m /\ X = m }}
Y ::= Y + 1
{{ Z = m*m /\ X = m }}
END
{{ Z = m*m /\ X = m /\ Y = X }} ->> (b - OK)
{{ Z = m*m }}
Conditions (a) and (c) fail because of the [Z = m*m] part. While
[Z] starts at [0] and works itself up to [m*m], we can't expect
[Z] to be [m*m] from the start. If we look at how [Z] progresses
in the loop, after the 1st iteration [Z = m], after the 2nd
iteration [Z = 2*m], and at the end [Z = m*m]. Since the variable
[Y] tracks how many times we go through the loop, we derive the
new invariant candidate [Z = Y*m /\ X = m].
{{ X = m }} ->> (a - OK)
{{ 0 = 0*m /\ X = m }}
Y ::= 0;
{{ 0 = Y*m /\ X = m }}
Z ::= 0;
{{ Z = Y*m /\ X = m }}
WHILE Y <> X DO
{{ Z = Y*m /\ X = m /\ Y <> X }} ->> (c - OK)
{{ Z+X = (Y+1)*m /\ X = m }}
Z ::= Z + X;
{{ Z = (Y+1)*m /\ X = m }}
Y ::= Y + 1
{{ Z = Y*m /\ X = m }}
END
{{ Z = Y*m /\ X = m /\ Y = X }} ->> (b - OK)
{{ Z = m*m }}
This new invariant makes the proof go through: all three
conditions are easy to check.
It is worth comparing the postcondition [Z = m*m] and the [Z =
Y*m] conjunct of the invariant. It is often the case that one has
to replace auxiliary variables (parameters) with variables -- or
with expressions involving both variables and parameters (like
[m - Y]) -- when going from postconditions to invariants. *)
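(** The essential algebra behind consequence step (c) for the final
invariant can be checked as follows (an illustrative lemma; [ring]
is also used later in this file): *)
Lemma square_inv_check : forall y m z : nat,
  z = y * m -> z + m = (y + 1) * m.
Proof. intros y m z H. subst. ring. Qed.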
(* ####################################################### *)
(** ** Exercise: Factorial *)
(** **** Exercise: 3 stars (factorial) *)
(** Recall that [n!] denotes the factorial of [n] (i.e. [n! =
1*2*...*n]). Here is an Imp program that calculates the factorial
of the number initially stored in the variable [X] and puts it in
the variable [Y]:
{{ X = m }}
Y ::= 1;
WHILE X <> 0
DO
Y ::= Y * X;
X ::= X - 1
END
{{ Y = m! }}
Fill in the blanks in following decorated program:
{{ X = m }} ->>
{{ }}
Y ::= 1;
{{ }}
WHILE X <> 0
DO {{ }} ->>
{{ }}
Y ::= Y * X;
{{ }}
X ::= X - 1
{{ }}
END
{{ }} ->>
{{ Y = m! }}
*)
(** [] *)
(* ####################################################### *)
(** ** Exercise: Min *)
(** **** Exercise: 3 stars (Min_Hoare) *)
(** Fill in valid decorations for the following program.
For the [->>] steps in your annotations, you may rely (silently) on the
following facts about min
Lemma lemma1 : forall x y,
(x=0 \/ y=0) -> min x y = 0.
Lemma lemma2 : forall x y,
min (x-1) (y-1) = (min x y) - 1.
plus, as usual, standard high-school algebra.
{{ True }} ->>
{{ }}
X ::= a;
{{ }}
Y ::= b;
{{ }}
Z ::= 0;
{{ }}
WHILE (X <> 0 /\ Y <> 0) DO
{{ }} ->>
{{ }}
X ::= X - 1;
{{ }}
Y ::= Y - 1;
{{ }}
Z ::= Z + 1
{{ }}
END
{{ }} ->>
{{ Z = min a b }}
*)
(** **** Exercise: 3 stars (two_loops) *)
(** Here is a very inefficient way of adding 3 numbers:
X ::= 0;
Y ::= 0;
Z ::= c;
WHILE X <> a DO
X ::= X + 1;
Z ::= Z + 1
END;
WHILE Y <> b DO
Y ::= Y + 1;
Z ::= Z + 1
END
Show that it does what it should by filling in the blanks in the
following decorated program.
{{ True }} ->>
{{ }}
X ::= 0;
{{ }}
Y ::= 0;
{{ }}
Z ::= c;
{{ }}
WHILE X <> a DO
{{ }} ->>
{{ }}
X ::= X + 1;
{{ }}
Z ::= Z + 1
{{ }}
END;
{{ }} ->>
{{ }}
WHILE Y <> b DO
{{ }} ->>
{{ }}
Y ::= Y + 1;
{{ }}
Z ::= Z + 1
{{ }}
END
{{ }} ->>
{{ Z = a + b + c }}
*)
(* ####################################################### *)
(** ** Exercise: Power Series *)
(** **** Exercise: 4 stars, optional (dpow2_down) *)
(** Here is a program that computes the series:
[1 + 2 + 2^2 + ... + 2^m = 2^(m+1) - 1]
X ::= 0;
Y ::= 1;
Z ::= 1;
WHILE X <> m DO
Z ::= 2 * Z;
Y ::= Y + Z;
X ::= X + 1;
END
Write a decorated program for this. *)
(* FILL IN HERE *)
(* ####################################################### *)
(** * Weakest Preconditions (Advanced) *)
(** Some Hoare triples are more interesting than others.
For example,
{{ False }} X ::= Y + 1 {{ X <= 5 }}
is _not_ very interesting: although it is perfectly valid, it
tells us nothing useful. Since the precondition isn't satisfied
by any state, it doesn't describe any situations where we can use
the command [X ::= Y + 1] to achieve the postcondition [X <= 5].
By contrast,
{{ Y <= 4 /\ Z = 0 }} X ::= Y + 1 {{ X <= 5 }}
is useful: it tells us that, if we can somehow create a situation
in which we know that [Y <= 4 /\ Z = 0], then running this command
will produce a state satisfying the postcondition. However, this
triple is still not as useful as it could be, because the [Z = 0]
clause in the precondition actually has nothing to do with the
postcondition [X <= 5]. The _most_ useful triple (for a given
command and postcondition) is this one:
{{ Y <= 4 }} X ::= Y + 1 {{ X <= 5 }}
In other words, [Y <= 4] is the _weakest_ valid precondition of
the command [X ::= Y + 1] for the postcondition [X <= 5]. *)
(** In general, we say that "[P] is the weakest precondition of
command [c] for postcondition [Q]" if [{{P}} c {{Q}}] and if,
whenever [P'] is an assertion such that [{{P'}} c {{Q}}], we have
[P' st] implies [P st] for all states [st]. *)
Definition is_wp P c Q :=
{{P}} c {{Q}} /\
forall P', {{P'}} c {{Q}} -> (P' ->> P).
(** That is, [P] is the weakest precondition of [c] for [Q]
if (a) [P] _is_ a precondition for [Q] and [c], and (b) [P] is the
_weakest_ (easiest to satisfy) assertion that guarantees [Q] after
executing [c]. *)
(** **** Exercise: 1 star, optional (wp) *)
(** What are the weakest preconditions of the following commands
for the following postconditions?
1) {{ ? }} SKIP {{ X = 5 }}
2) {{ ? }} X ::= Y + Z {{ X = 5 }}
3) {{ ? }} X ::= Y {{ X = Y }}
4) {{ ? }}
IFB X == 0 THEN Y ::= Z + 1 ELSE Y ::= W + 2 FI
{{ Y = 5 }}
5) {{ ? }}
X ::= 5
{{ X = 0 }}
6) {{ ? }}
WHILE True DO X ::= 0 END
{{ X = 0 }}
*)
(* FILL IN HERE *)
(** [] *)
(** **** Exercise: 3 stars, advanced, optional (is_wp_formal) *)
(** Prove formally using the definition of [hoare_triple] that [Y <= 4]
is indeed the weakest precondition of [X ::= Y + 1] with respect to
postcondition [X <= 5]. *)
Theorem is_wp_example :
is_wp
(fun st => st Y <= 4)
(X ::= APlus (AId Y) (ANum 1))
(fun st => st X <= 5).
Proof.
unfold is_wp. split.
eapply hoare_consequence_pre.
apply hoare_asgn.
unfold assert_implies, assn_sub, update. simpl. intros. omega.
unfold hoare_triple, assert_implies. intros. apply (H _ (update st X (st Y + 1))) in H0.
unfold update in H0. simpl in H0. omega.
constructor. reflexivity.
Qed.
(** [] *)
(** **** Exercise: 2 stars, advanced (hoare_asgn_weakest) *)
(** Show that the precondition in the rule [hoare_asgn] is in fact the
weakest precondition. *)
Theorem hoare_asgn_weakest : forall Q X a,
is_wp (Q [X |-> a]) (X ::= a) Q.
Proof.
unfold is_wp. split.
apply hoare_asgn.
unfold hoare_triple, assert_implies, assn_sub, update. intros. eapply H.
econstructor. reflexivity.
assumption.
Qed.
(** [] *)
(** **** Exercise: 2 stars, advanced, optional (hoare_havoc_weakest) *)
(** Show that your [havoc_pre] rule from the [himp_hoare] exercise
in the [Hoare] chapter returns the weakest precondition. *)
Module Himp2.
Import Himp.
Lemma hoare_havoc_weakest : forall (P Q : Assertion) (X : id),
{{ P }} HAVOC X {{ Q }} ->
P ->> havoc_pre X Q.
Proof.
unfold hoare_triple, assert_implies, havoc_pre. intros. eapply H.
econstructor.
assumption.
Qed.
End Himp2.
(** [] *)
(* ####################################################### *)
(** * Formal Decorated Programs (Advanced) *)
(** The informal conventions for decorated programs amount to a way of
displaying Hoare triples in which commands are annotated with
enough embedded assertions that checking the validity of the
triple is reduced to simple logical and algebraic calculations
showing that some assertions imply others. In this section, we
show that this informal presentation style can actually be made
completely formal and indeed that checking the validity of
decorated programs can mostly be automated. *)
(** ** Syntax *)
(** The first thing we need to do is to formalize a variant of the
syntax of commands with embedded assertions. We call the new
commands _decorated commands_, or [dcom]s. *)
Inductive dcom : Type :=
| DCSkip : Assertion -> dcom
| DCSeq : dcom -> dcom -> dcom
| DCAsgn : id -> aexp -> Assertion -> dcom
| DCIf : bexp -> Assertion -> dcom -> dcom -> Assertion -> dcom
| DCWhile : bexp -> Assertion -> dcom -> Assertion -> dcom
| DCPre : Assertion -> dcom -> dcom
| DCPost : dcom -> Assertion -> dcom.
Tactic Notation "dcom_cases" tactic(first) ident(c) :=
first;
[ Case_aux c "Skip" | Case_aux c "Seq" | Case_aux c "Asgn"
| Case_aux c "If" | Case_aux c "While"
| Case_aux c "Pre" | Case_aux c "Post" ].
Fixpoint post (d:dcom) : Assertion :=
match d with
| DCSkip P => P
| DCSeq d1 d2 => post d2
| DCAsgn X a Q => Q
| DCIf _ _ d1 d2 Q => Q
| DCWhile b Pbody c Ppost => Ppost
| DCPre _ d => post d
| DCPost c Q => Q
end.
Fixpoint move_back (d : dcom) : Assertion :=
match d with
| DCSkip P => P
| DCSeq d1 _ => move_back d1
| DCAsgn X a P => assn_sub X a P
| DCIf _ P _ _ _ => P (*it was (fun st => P1 st \/ P2 st)*)
| DCWhile _ _ d _ => post d
| DCPre P _ => P
| DCPost d _ => move_back d
end.
Notation "'SKIP' {{ P }}"
:= (DCSkip P)
(at level 10) : dcom_scope.
Notation "l '::=' a {{ P }}"
:= (DCAsgn l a P)
(at level 60, a at next level) : dcom_scope.
Notation "'WHILE' b 'DO' d 'END'"
:= (DCWhile b (fun st => post d st /\ bassn b st) d
(fun st => post d st /\ ~ bassn b st))
(at level 80, right associativity) : dcom_scope.
Notation "'IFB' b 'THEN' d 'ELSE' d' 'FI' "
:= (DCIf b (fun st => move_back d st \/ move_back d' st) d d'
(fun st => post d st /\ post d' st))
(at level 80, right associativity) : dcom_scope.
Notation "'->>' {{ P }} d"
:= (DCPre P d)
(at level 90, right associativity) : dcom_scope.
Notation "{{ P }} d"
:= (DCPre P d)
(at level 90) : dcom_scope.
Notation "d '->>' {{ P }}"
:= (DCPost d P)
(at level 80, right associativity) : dcom_scope.
Notation " d ;; d' "
:= (DCSeq d d')
(at level 80, right associativity) : dcom_scope.
Delimit Scope dcom_scope with dcom.
Definition nocond (_ : state) := True.
Fixpoint unextract (P : Assertion) (c : com) : dcom :=
match c with
| SKIP => SKIP {{ P }} % dcom
| c1 ;; c2 =>
match unextract P c2 with
| d2 => (unextract (move_back d2) c1 ;; d2)
end % dcom
| X ::= a => X ::= a {{ P }} % dcom
| IFB b THEN c1 ELSE c2 FI =>
match (unextract P c1, unextract P c2) with
| (d1, d2) => DCIf b (fun st => move_back d1 st \/ move_back d2 st) d1 d2 P
end
(*IFB b THEN (unextract P c1) ELSE (unextract P c2) FI % dcom*)
| WHILE b DO c END => DCWhile b (fun st => bassn b st) (unextract nocond c)
(fun st => ~bassn b st /\ P st)
end.
Notation " c ;;; d "
:= (DCSeq (unextract (move_back d) c) d)
(at level 80, right associativity).
Notation " c ->>> P "
:= (c ;;; unextract P SKIP)
(at level 80, right associativity).
(** To avoid clashing with the existing [Notation] definitions
for ordinary [com]mands, we introduce these notations in a special
scope called [dcom_scope], and we wrap examples with the
declaration [% dcom] to signal that we want the notations to be
interpreted in this scope.
Careful readers will note that we've defined two notations for the
[DCPre] constructor, one with and one without a [->>]. The
"without" version is intended to be used to supply the initial
precondition at the very top of the program. *)
Example dec_while :=
WHILE (BNot (BEq (AId X) (ANum 0))) DO
X ::= (AMinus (AId X) (ANum 1))
END ->>>
(( fun st => st X = 0 )).
(** It is easy to go from a [dcom] to a [com] by erasing all
annotations. *)
Fixpoint extract (d:dcom) : com :=
match d with
| DCSkip _ => SKIP
| DCSeq d1 d2 => (extract d1 ;; extract d2)
| DCAsgn X a _ => X ::= a
| DCIf b _ d1 d2 _ => IFB b THEN extract d1 ELSE extract d2 FI
| DCWhile b _ d _ => WHILE b DO extract d END
| DCPre _ d => extract d
| DCPost d _ => extract d
end.
(** The choice of exactly where to put assertions in the definition of
[dcom] is a bit subtle. The simplest thing to do would be to
annotate every [dcom] with a precondition and postcondition. But
this would result in very verbose programs with a lot of repeated
annotations: for example, a program like [SKIP;SKIP] would have to
be annotated as
{{P}} ({{P}} SKIP {{P}}) ;; ({{P}} SKIP {{P}}) {{P}},
with pre- and post-conditions on each [SKIP], plus identical pre-
and post-conditions on the semicolon!
Instead, the rule we've followed is this:
- The _post_-condition expected by each [dcom] [d] is embedded in [d]
- The _pre_-condition is supplied by the context. *)
(** In other words, the invariant of the representation is that a
[dcom] [d] together with a precondition [P] determines a Hoare
triple [{{P}} (extract d) {{post d}}], where [post] is the function
defined above. *)
(** Similarly, we can extract the "initial precondition" from a
decorated program. *)
Fixpoint pre (d:dcom) : Assertion :=
match d with
| DCSkip P => fun st => True
| DCSeq c1 c2 => pre c1
| DCAsgn X a Q => fun st => True
| DCIf _ _ t e _ => fun st => True
| DCWhile b Pbody c Ppost => fun st => True
| DCPre P c => P
| DCPost c Q => pre c
end.
(** This function is not doing anything sophisticated like calculating
a weakest precondition; it just recursively searches for an
explicit annotation at the very beginning of the program,
returning default answers for programs that lack an explicit
precondition (like a bare assignment or [SKIP]). *)
(** Using [pre] and [post], and assuming that we adopt the convention
of always supplying an explicit precondition annotation at the
very beginning of our decorated programs, we can express what it
means for a decorated program to be correct as follows: *)
Definition dec_correct (d:dcom) :=
{{pre d}} (extract d) {{post d}}.
(** To check whether this Hoare triple is _valid_, we need a way to
extract the "proof obligations" from a decorated program. These
obligations are often called _verification conditions_, because
they are the facts that must be verified to see that the
decorations are logically consistent and thus add up to a complete
proof of correctness. *)
(** ** Extracting Verification Conditions *)
(** The function [verification_conditions] takes a [dcom] [d] together
with a precondition [P] and returns a _proposition_ that, if it
can be proved, implies that the triple [{{P}} (extract d) {{post d}}]
is valid. *)
(** It does this by walking over [d] and generating a big
conjunction including all the "local checks" that we listed when
we described the informal rules for decorated programs. (Strictly
speaking, we need to massage the informal rules a little bit to
add some uses of the rule of consequence, but the correspondence
should be clear.) *)
Fixpoint verification_conditions (P : Assertion) (d:dcom) : Prop :=
match d with
| DCSkip Q =>
(P ->> Q)
| DCSeq d1 d2 =>
verification_conditions P d1
/\ verification_conditions (post d1) d2
| DCAsgn X a Q =>
(P ->> Q [X |-> a])
| DCIf b P' d1 d2 Q =>
(P ->> P')
/\ (Q <<->> post d1) /\ (Q <<->> post d2)
/\ verification_conditions (fun st => P' st /\ bassn b st) d1
/\ verification_conditions (fun st => P' st /\ ~ bassn b st) d2
| DCWhile b Pbody d Ppost =>
(* post d is the loop invariant and the initial precondition *)
(P ->> post d)
/\ (Pbody <<->> (fun st => post d st /\ bassn b st))
/\ (Ppost <<->> (fun st => post d st /\ ~ bassn b st))
/\ verification_conditions Pbody d
| DCPre P' d =>
(P ->> P') /\ verification_conditions P' d
| DCPost d Q =>
verification_conditions P d /\ (post d ->> Q)
end.
(** And now, the key theorem, which states that
[verification_conditions] does its job correctly. Not
surprisingly, we need to use each of the Hoare Logic rules at some
point in the proof. *)
(** We have used _in_ variants of several tactics before to
apply them to values in the context rather than the goal. An
extension of this idea is the syntax [tactic in *], which applies
[tactic] in the goal and every hypothesis in the context. We most
commonly use this facility in conjunction with the [simpl] tactic,
as below. *)
Theorem verification_correct : forall d P,
verification_conditions P d -> {{P}} (extract d) {{post d}}.
Proof.
dcom_cases (induction d) Case; intros P H; simpl in *.
Case "Skip".
eapply hoare_consequence_pre.
apply hoare_skip.
assumption.
Case "Seq".
inversion H as [H1 H2]. clear H.
eapply hoare_seq.
apply IHd2. apply H2.
apply IHd1. apply H1.
Case "Asgn".
eapply hoare_consequence_pre.
apply hoare_asgn.
assumption.
Case "If".
inversion H as [HPre [[Hd11 Hd12]
[[Hd21 Hd22] [HThen HElse]]]].
clear H.
apply IHd1 in HThen. clear IHd1.
apply IHd2 in HElse. clear IHd2.
apply hoare_if; eapply hoare_consequence; try eauto;
unfold assert_implies; intuition.
Case "While".
inversion H as [Hpre [[Hbody1 Hbody2] [[Hpost1 Hpost2] Hd]]];
subst; clear H.
eapply hoare_consequence_pre; eauto.
eapply hoare_consequence_post; eauto.
apply hoare_while.
eapply hoare_consequence_pre; eauto.
Case "Pre".
inversion H as [HP Hd]; clear H.
eapply hoare_consequence_pre. apply IHd. apply Hd. assumption.
Case "Post".
inversion H as [Hd HQ]; clear H.
eapply hoare_consequence_post. apply IHd. apply Hd. assumption.
Qed.
(** ** Examples *)
(** The propositions generated by [verification_conditions] are fairly
big, and they contain many conjuncts that are essentially trivial. *)
Eval simpl in (verification_conditions (fun st => True) dec_while).
(**
==>
(((fun _ : state => True) ->> (fun _ : state => True)) /\
((fun _ : state => True) ->> (fun _ : state => True)) /\
(fun st : state => True /\ bassn (BNot (BEq (AId X) (ANum 0))) st) =
(fun st : state => True /\ bassn (BNot (BEq (AId X) (ANum 0))) st) /\
(fun st : state => True /\ ~ bassn (BNot (BEq (AId X) (ANum 0))) st) =
(fun st : state => True /\ ~ bassn (BNot (BEq (AId X) (ANum 0))) st) /\
(fun st : state => True /\ bassn (BNot (BEq (AId X) (ANum 0))) st) ->>
(fun _ : state => True) [X |-> AMinus (AId X) (ANum 1)]) /\
(fun st : state => True /\ ~ bassn (BNot (BEq (AId X) (ANum 0))) st) ->>
(fun st : state => st X = 0)
*)
(** In principle, we could certainly work with them using just the
tactics we have so far, but we can make things much smoother with
a bit of automation. We first define a custom [verify] tactic
that applies splitting repeatedly to turn all the conjunctions
into separate subgoals and then uses [omega] and [eauto] (a handy
general-purpose automation tactic that we'll discuss in detail
later) to deal with as many of them as possible. *)
Lemma ble_nat_true_iff : forall n m : nat,
ble_nat n m = true <-> n <= m.
Proof.
intros n m. split. apply ble_nat_true.
generalize dependent m. induction n; intros m H. reflexivity.
simpl. destruct m. inversion H.
apply le_S_n in H. apply IHn. assumption.
Qed.
Lemma ble_nat_false_iff : forall n m : nat,
ble_nat n m = false <-> ~(n <= m).
Proof.
intros n m. split. apply ble_nat_false.
generalize dependent m. induction n; intros m H.
apply ex_falso_quodlibet. apply H. apply le_0_n.
simpl. destruct m. reflexivity.
apply IHn. intro Hc. apply H. apply le_n_S. assumption.
Qed.
Tactic Notation "verify" :=
intros; simpl; unfold assn_sub, update; simpl;
try apply verification_correct;
repeat split;
simpl; unfold assert_implies;
unfold bassn in *; unfold beval in *; unfold aeval in *;
unfold assn_sub; intros;
repeat rewrite update_eq;
repeat (rewrite update_neq; [| (intro X; inversion X)]);
simpl in *;
repeat match goal with [H : _ /\ _ |- _] => destruct H end;
repeat rewrite not_true_iff_false in *;
repeat rewrite not_false_iff_true in *;
repeat rewrite negb_true_iff in *;
repeat rewrite negb_false_iff in *;
repeat rewrite beq_nat_true_iff in *;
repeat rewrite beq_nat_false_iff in *;
repeat rewrite ble_nat_true_iff in *;
repeat rewrite ble_nat_false_iff in *;
repeat rewrite NPeano.Nat.add_1_r in *;
try subst; intuition;
repeat
match goal with
[st : state |- _] =>
match goal with
[H : st _ = _ |- _] => rewrite -> H in *; clear H
| [H : _ = st _ |- _] => rewrite <- H in *; clear H
end
end;
simpl; eauto; try omega.
(** What's left after [verify] does its thing is "just the interesting
parts" of checking that the decorations are correct. For very
simple examples [verify] immediately solves the goal (provided
that the annotations are correct). *)
Theorem dec_while_correct :
dec_correct dec_while.
Proof. verify. Qed.
(** Another example (formalizing a decorated program we've seen
before): *)
Example subtract_slowly_dec (m:nat) (p:nat) : dcom := (
{{ fun st => st X = m /\ st Z = p }}
WHILE BNot (BEq (AId X) (ANum 0)) DO
Z ::= AMinus (AId Z) (ANum 1) ;;;
X ::= AMinus (AId X) (ANum 1)
{{ fun st => st Z - st X = p - m }}
END ->>
{{ fun st => st Z = p - m }}
) % dcom.
Theorem subtract_slowly_dec_correct : forall m p,
dec_correct (subtract_slowly_dec m p).
Proof. unfold subtract_slowly_dec. verify. Qed.
Definition ainc x := APlus (AId x) (ANum 1).
Definition adec x := AMinus (AId x) (ANum 1).
Definition aplus x1 x2 := APlus (AId x1) (AId x2).
Definition aminus x1 x2 := AMinus (AId x1) (AId x2).
Definition bnoteqn x n := BNot (BEq (AId x) (ANum n)).
Theorem if_minus_plus_reloaded : dec_correct (
IFB BLe (AId X) (AId Y) THEN
Z ::= aminus Y X
ELSE
Y ::= aplus X Z
FI ->>>
(( fun st => st Y = st X + st Z ))
).
Proof. verify. Qed.
Theorem min_hoare : forall a b, dec_correct (
X ::= ANum a ;;;
Y ::= ANum b ;;;
Z ::= ANum 0 ;;;
WHILE BAnd (bnoteqn X 0) (bnoteqn Y 0) DO
X ::= adec X ;;;
Y ::= adec Y ;;;
Z ::= ainc Z
{{ fun st => st Z + st X = a /\ st Z + st Y = b }}
END ->>
{{ fun st => st Z = min a b }}
) % dcom.
Proof.
verify; destruct (st X); destruct (st Y);
try rewrite NPeano.Nat.add_min_distr_l; verify; inversion H0.
Qed.
Theorem two_loops : forall a b c, dec_correct (
X ::= ANum 0 ;;;
Y ::= ANum 0 ;;;
Z ::= ANum c ;;;
WHILE BNot (BEq (AId X) (ANum a)) DO
X ::= ainc X ;;;
Z ::= ainc Z
{{ fun st => st Z = st X + c /\ st Y = 0}}
END ;;
WHILE BNot (BEq (AId Y) (ANum b)) DO
Y ::= ainc Y ;;;
Z ::= ainc Z
{{ fun st => st Z = a + st Y + c }}
END ->>
{{ fun st => st Z = a + b + c }}
) % dcom.
Proof. verify. Qed.
Fixpoint power (x n : nat) : nat :=
match n with
| 0 => 1
| S n' => x * power x n'
end.
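(** A quick sanity check of [power] (illustrative only): *)
Example power_2_4 : power 2 4 = 16.
Proof. reflexivity. Qed.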
Theorem dpow2_down : forall m, dec_correct (
X ::= ANum 0 ;;;
Y ::= ANum 1 ;;;
Z ::= ANum 1 ;;;
WHILE BNot (BEq (AId X) (ANum m)) DO
Z ::= AMult (ANum 2) (AId Z) ;;;
Y ::= APlus (AId Y) (AId Z) ;;;
X ::= APlus (AId X) (ANum 1)
{{ fun st => st Y = power 2 (st X + 1) - 1 /\ st Z = power 2 (st X) }}
END ->>
{{ fun st => st Y = power 2 (m + 1) - 1 }}
) % dcom.
Proof. verify. Qed.
(** **** Exercise: 3 stars, advanced (slow_assignment_dec) *)
(** In the [slow_assignment] exercise above, we saw a roundabout way
of assigning a number currently stored in [X] to the variable [Y]:
start [Y] at [0], then decrement [X] until it hits [0],
incrementing [Y] at each step.
Write a _formal_ version of this decorated program and prove it
correct. *)
Example slow_assignment_dec (m : nat) : dcom := (
{{ fun st => st X = m }}
Y ::= ANum 0 ;;;
WHILE BNot (BEq (AId X) (ANum 0)) DO
X ::= adec X ;;;
Y ::= ainc Y
{{ fun st => st X + st Y = m }}
END ->>
{{ fun st => st Y = m }}
) % dcom.
Theorem slow_assignment_dec_correct : forall m,
dec_correct (slow_assignment_dec m).
Proof. unfold slow_assignment_dec. verify. Qed.
(** [] *)
(** **** Exercise: 4 stars, advanced (factorial_dec) *)
(** Remember the factorial function we worked with before: *)
Fixpoint real_fact (n : nat) : nat :=
match n with
| 0 => 1
| S n' => n * real_fact n'
end.
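(** Again, a quick sanity check (illustrative only): *)
Example real_fact_4 : real_fact 4 = 24.
Proof. reflexivity. Qed.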
(** Following the pattern of [subtract_slowly_dec], write a decorated
program that implements the factorial function and prove it
correct. *)
Theorem factorial_dec : forall n, dec_correct (
X ::= ANum 0 ;;;
Z ::= ANum 1 ;;;
WHILE BNot (BEq (AId X) (ANum n)) DO
X ::= APlus (AId X) (ANum 1) ;;;
Z ::= AMult (AId Z) (AId X)
{{ fun st => st Z = real_fact (st X) }}
END ->>
{{ fun st => st Z = real_fact n }}
) % dcom.
Proof. verify. ring. Qed.
(** [] *)
Theorem max_in_all : forall n m p, dec_correct ((
X ::= ANum n ;;
Y ::= ANum m ;;
Z ::= ANum p ;;
IFB BLe (AId X) (AId Y) THEN
IFB BLe (AId Y) (AId Z) THEN
X ::= AId Z ;;
Y ::= AId Z
ELSE
X ::= AId Y ;;
Z ::= AId Y
FI
ELSE
IFB BLe (AId X) (AId Z) THEN
X ::= AId Z ;;
Y ::= AId Z
ELSE
Y ::= AId X ;;
Z ::= AId X
FI
FI
) ->>> (( fun st => n <= st X /\ m <= st X /\ p <= st X /\ st X = st Y /\ st Y = st Z ))
).
Proof. verify. Qed.
(* $Date: 2013-07-17 16:19:11 -0400 (Wed, 17 Jul 2013) $ *)
(** * Norm: Normalization of STLC *)
(* $Date: 2013-07-17 16:19:11 -0400 (Wed, 17 Jul 2013) $ *)
(* Chapter maintained by Andrew Tolmach *)
(* (Based on TAPL Ch. 12.) *)
Require Import Stlc.
(**
(This chapter is optional.)
In this chapter, we consider another fundamental theoretical property
of the simply typed lambda-calculus: the fact that the evaluation of a
well-typed program is guaranteed to halt in a finite number of
steps---i.e., every well-typed term is _normalizable_.
Unlike the type-safety properties we have considered so far, the
normalization property does not extend to full-blown programming
languages, because these languages nearly always extend the simply
typed lambda-calculus with constructs, such as general recursion
(as we discussed in the MoreStlc chapter) or recursive types, that can
be used to write nonterminating programs. However, the issue of
normalization reappears at the level of _types_ when we consider the
metatheory of polymorphic versions of the lambda calculus such as
F_omega: in this system, the language of types effectively contains a
copy of the simply typed lambda-calculus, and the termination of the
typechecking algorithm will hinge on the fact that a ``normalization''
operation on type expressions is guaranteed to terminate.
Another reason for studying normalization proofs is that they are some
of the most beautiful---and mind-blowing---mathematics to be found in
the type theory literature, often (as here) involving the fundamental
proof technique of _logical relations_.
The calculus we shall consider here is the simply typed
lambda-calculus over a single base type [bool] and with pairs. We'll
give full details of the development for the basic lambda-calculus
terms treating [bool] as an uninterpreted base type, and leave the
extension to the boolean operators and pairs to the reader. Even for
the base calculus, normalization is not entirely trivial to prove,
since each reduction of a term can duplicate redexes in subterms. *)
(** **** Exercise: 1 star *)
(** Where do we fail if we attempt to prove normalization by a
straightforward induction on the size of a well-typed term? *)
(* FILL IN HERE *)
(** [] *)
(* ###################################################################### *)
(** * Language *)
(** We begin by repeating the relevant language definition, which is
similar to those in the MoreStlc chapter, and supporting results
including type preservation and step determinism. (We won't need
progress.) You may just wish to skip down to the Normalization
section... *)
(* ###################################################################### *)
(** *** Syntax and Operational Semantics *)
Inductive ty : Type :=
| TBool : ty
| TArrow : ty -> ty -> ty
| TProd : ty -> ty -> ty
.
Tactic Notation "T_cases" tactic(first) ident(c) :=
first;
[ Case_aux c "TBool" | Case_aux c "TArrow" | Case_aux c "TProd" ].
Inductive tm : Type :=
(* pure STLC *)
| tvar : id -> tm
| tapp : tm -> tm -> tm
| tabs : id -> ty -> tm -> tm
(* pairs *)
| tpair : tm -> tm -> tm
| tfst : tm -> tm
| tsnd : tm -> tm
(* booleans *)
| ttrue : tm
| tfalse : tm
| tif : tm -> tm -> tm -> tm.
(* i.e., [if t0 then t1 else t2] *)
Tactic Notation "t_cases" tactic(first) ident(c) :=
first;
[ Case_aux c "tvar" | Case_aux c "tapp" | Case_aux c "tabs"
| Case_aux c "tpair" | Case_aux c "tfst" | Case_aux c "tsnd"
| Case_aux c "ttrue" | Case_aux c "tfalse" | Case_aux c "tif" ].
(* ###################################################################### *)
(** *** Substitution *)
Fixpoint subst (x:id) (s:tm) (t:tm) : tm :=
match t with
| tvar y => if eq_id_dec x y then s else t
| tabs y T t1 => tabs y T (if eq_id_dec x y then t1 else (subst x s t1))
| tapp t1 t2 => tapp (subst x s t1) (subst x s t2)
| tpair t1 t2 => tpair (subst x s t1) (subst x s t2)
| tfst t1 => tfst (subst x s t1)
| tsnd t1 => tsnd (subst x s t1)
| ttrue => ttrue
| tfalse => tfalse
| tif t0 t1 t2 => tif (subst x s t0) (subst x s t1) (subst x s t2)
end.
Notation "'[' x ':=' s ']' t" := (subst x s t) (at level 20).
(* ###################################################################### *)
(** *** Reduction *)
Inductive value : tm -> Prop :=
| v_abs : forall x T11 t12,
value (tabs x T11 t12)
| v_pair : forall v1 v2,
value v1 ->
value v2 ->
value (tpair v1 v2)
| v_true : value ttrue
| v_false : value tfalse
.
Hint Constructors value.
Reserved Notation "t1 '==>' t2" (at level 40).
Inductive step : tm -> tm -> Prop :=
| ST_AppAbs : forall x T11 t12 v2,
value v2 ->
(tapp (tabs x T11 t12) v2) ==> [x:=v2]t12
| ST_App1 : forall t1 t1' t2,
t1 ==> t1' ->
(tapp t1 t2) ==> (tapp t1' t2)
| ST_App2 : forall v1 t2 t2',
value v1 ->
t2 ==> t2' ->
(tapp v1 t2) ==> (tapp v1 t2')
(* pairs *)
| ST_Pair1 : forall t1 t1' t2,
t1 ==> t1' ->
(tpair t1 t2) ==> (tpair t1' t2)
| ST_Pair2 : forall v1 t2 t2',
value v1 ->
t2 ==> t2' ->
(tpair v1 t2) ==> (tpair v1 t2')
| ST_Fst : forall t1 t1',
t1 ==> t1' ->
(tfst t1) ==> (tfst t1')
| ST_FstPair : forall v1 v2,
value v1 ->
value v2 ->
(tfst (tpair v1 v2)) ==> v1
| ST_Snd : forall t1 t1',
t1 ==> t1' ->
(tsnd t1) ==> (tsnd t1')
| ST_SndPair : forall v1 v2,
value v1 ->
value v2 ->
(tsnd (tpair v1 v2)) ==> v2
(* booleans *)
| ST_IfTrue : forall t1 t2,
(tif ttrue t1 t2) ==> t1
| ST_IfFalse : forall t1 t2,
(tif tfalse t1 t2) ==> t2
| ST_If : forall t0 t0' t1 t2,
t0 ==> t0' ->
(tif t0 t1 t2) ==> (tif t0' t1 t2)
where "t1 '==>' t2" := (step t1 t2).
Tactic Notation "step_cases" tactic(first) ident(c) :=
first;
[ Case_aux c "ST_AppAbs" | Case_aux c "ST_App1" | Case_aux c "ST_App2"
| Case_aux c "ST_Pair1" | Case_aux c "ST_Pair2"
| Case_aux c "ST_Fst" | Case_aux c "ST_FstPair"
| Case_aux c "ST_Snd" | Case_aux c "ST_SndPair"
| Case_aux c "ST_IfTrue" | Case_aux c "ST_IfFalse" | Case_aux c "ST_If" ].
Notation multistep := (multi step).
Notation "t1 '==>*' t2" := (multistep t1 t2) (at level 40).
Hint Constructors step.
Notation step_normal_form := (normal_form step).
Lemma value__normal : forall t, value t -> step_normal_form t.
Proof with eauto.
intros t H; induction H; intros [t' ST]; inversion ST...
Qed.
(* ###################################################################### *)
(** *** Typing *)
Definition context := partial_map ty.
Inductive has_type : context -> tm -> ty -> Prop :=
(* Typing rules for proper terms *)
| T_Var : forall Gamma x T,
Gamma x = Some T ->
has_type Gamma (tvar x) T
| T_Abs : forall Gamma x T11 T12 t12,
has_type (extend Gamma x T11) t12 T12 ->
has_type Gamma (tabs x T11 t12) (TArrow T11 T12)
| T_App : forall T1 T2 Gamma t1 t2,
has_type Gamma t1 (TArrow T1 T2) ->
has_type Gamma t2 T1 ->
has_type Gamma (tapp t1 t2) T2
(* pairs *)
| T_Pair : forall Gamma t1 t2 T1 T2,
has_type Gamma t1 T1 ->
has_type Gamma t2 T2 ->
has_type Gamma (tpair t1 t2) (TProd T1 T2)
| T_Fst : forall Gamma t T1 T2,
has_type Gamma t (TProd T1 T2) ->
has_type Gamma (tfst t) T1
| T_Snd : forall Gamma t T1 T2,
has_type Gamma t (TProd T1 T2) ->
has_type Gamma (tsnd t) T2
(* booleans *)
| T_True : forall Gamma,
has_type Gamma ttrue TBool
| T_False : forall Gamma,
has_type Gamma tfalse TBool
| T_If : forall Gamma t0 t1 t2 T,
has_type Gamma t0 TBool ->
has_type Gamma t1 T ->
has_type Gamma t2 T ->
has_type Gamma (tif t0 t1 t2) T
.
Hint Constructors has_type.
Tactic Notation "has_type_cases" tactic(first) ident(c) :=
first;
[ Case_aux c "T_Var" | Case_aux c "T_Abs" | Case_aux c "T_App"
| Case_aux c "T_Pair" | Case_aux c "T_Fst" | Case_aux c "T_Snd"
| Case_aux c "T_True" | Case_aux c "T_False" | Case_aux c "T_If" ].
Hint Extern 2 (has_type _ (tapp _ _) _) => eapply T_App; auto.
Hint Extern 2 (_ = _) => compute; reflexivity.
(* ###################################################################### *)
(** *** Context Invariance *)
Inductive appears_free_in : id -> tm -> Prop :=
| afi_var : forall x,
appears_free_in x (tvar x)
| afi_app1 : forall x t1 t2,
appears_free_in x t1 -> appears_free_in x (tapp t1 t2)
| afi_app2 : forall x t1 t2,
appears_free_in x t2 -> appears_free_in x (tapp t1 t2)
| afi_abs : forall x y T11 t12,
y <> x ->
appears_free_in x t12 ->
appears_free_in x (tabs y T11 t12)
(* pairs *)
| afi_pair1 : forall x t1 t2,
appears_free_in x t1 ->
appears_free_in x (tpair t1 t2)
| afi_pair2 : forall x t1 t2,
appears_free_in x t2 ->
appears_free_in x (tpair t1 t2)
| afi_fst : forall x t,
appears_free_in x t ->
appears_free_in x (tfst t)
| afi_snd : forall x t,
appears_free_in x t ->
appears_free_in x (tsnd t)
(* booleans *)
| afi_if0 : forall x t0 t1 t2,
appears_free_in x t0 ->
appears_free_in x (tif t0 t1 t2)
| afi_if1 : forall x t0 t1 t2,
appears_free_in x t1 ->
appears_free_in x (tif t0 t1 t2)
| afi_if2 : forall x t0 t1 t2,
appears_free_in x t2 ->
appears_free_in x (tif t0 t1 t2)
.
Hint Constructors appears_free_in.
Definition closed (t:tm) :=
forall x, ~ appears_free_in x t.
Lemma context_invariance : forall Gamma Gamma' t S,
has_type Gamma t S ->
(forall x, appears_free_in x t -> Gamma x = Gamma' x) ->
has_type Gamma' t S.
Proof with eauto.
intros. generalize dependent Gamma'.
has_type_cases (induction H) Case;
intros Gamma' Heqv...
Case "T_Var".
apply T_Var... rewrite <- Heqv...
Case "T_Abs".
apply T_Abs... apply IHhas_type. intros y Hafi.
unfold extend. destruct (eq_id_dec x y)...
Case "T_Pair".
apply T_Pair...
Case "T_If".
eapply T_If...
Qed.
Lemma free_in_context : forall x t T Gamma,
appears_free_in x t ->
has_type Gamma t T ->
exists T', Gamma x = Some T'.
Proof with eauto.
intros x t T Gamma Hafi Htyp.
has_type_cases (induction Htyp) Case; inversion Hafi; subst...
Case "T_Abs".
destruct IHHtyp as [T' Hctx]... exists T'.
unfold extend in Hctx.
rewrite neq_id in Hctx...
Qed.
Corollary typable_empty__closed : forall t T,
has_type empty t T ->
closed t.
Proof.
intros. unfold closed. intros x H1.
destruct (free_in_context _ _ _ _ H1 H) as [T' C].
inversion C. Qed.
(* ###################################################################### *)
(** *** Preservation *)
Lemma substitution_preserves_typing : forall Gamma x U v t S,
has_type (extend Gamma x U) t S ->
has_type empty v U ->
has_type Gamma ([x:=v]t) S.
Proof with eauto.
(* Theorem: If Gamma,x:U |- t : S and empty |- v : U, then
Gamma |- [x:=v]t : S. *)
intros Gamma x U v t S Htypt Htypv.
generalize dependent Gamma. generalize dependent S.
(* Proof: By induction on the term t. Most cases follow directly
from the IH, with the exception of tvar and tabs.
These aren't automatic because we must reason about how the
variables interact. *)
t_cases (induction t) Case;
intros S Gamma Htypt; simpl; inversion Htypt; subst...
Case "tvar".
simpl. rename i into y.
(* If t = y, we know that
[empty |- v : U] and
[Gamma,x:U |- y : S]
and, by inversion, [extend Gamma x U y = Some S]. We want to
show that [Gamma |- [x:=v]y : S].
There are two cases to consider: either [x=y] or [x<>y]. *)
destruct (eq_id_dec x y).
SCase "x=y".
(* If [x = y], then we know that [U = S], and that [[x:=v]y = v].
So what we really must show is that if [empty |- v : U] then
[Gamma |- v : U]. We have already proven a more general version
of this theorem, called context invariance. *)
subst.
unfold extend in H1. rewrite eq_id in H1.
inversion H1; subst. clear H1.
eapply context_invariance...
intros x Hcontra.
destruct (free_in_context _ _ S empty Hcontra) as [T' HT']...
inversion HT'.
SCase "x<>y".
(* If [x <> y], then [Gamma y = Some S] and the substitution has no
effect. We can show that [Gamma |- y : S] by [T_Var]. *)
apply T_Var... unfold extend in H1. rewrite neq_id in H1...
Case "tabs".
rename i into y. rename t into T11.
(* If [t = tabs y T11 t0], then we know that
[Gamma,x:U |- tabs y T11 t0 : T11->T12]
[Gamma,x:U,y:T11 |- t0 : T12]
[empty |- v : U]
As our IH, we know that forall S Gamma,
[Gamma,x:U |- t0 : S -> Gamma |- [x:=v]t0 : S].
We can calculate that
[x:=v]t = tabs y T11 (if beq_id x y then t0 else [x:=v]t0)
And we must show that [Gamma |- [x:=v]t : T11->T12]. We know
we will do so using [T_Abs], so it remains to be shown that:
[Gamma,y:T11 |- if beq_id x y then t0 else [x:=v]t0 : T12]
We consider two cases: [x = y] and [x <> y].
*)
apply T_Abs...
destruct (eq_id_dec x y).
SCase "x=y".
(* If [x = y], then the substitution has no effect. Context
invariance shows that [Gamma,y:U,y:T11] and [Gamma,y:T11] are
equivalent. Since the former context shows that [t0 : T12], so
does the latter. *)
eapply context_invariance...
subst.
intros x Hafi. unfold extend.
destruct (eq_id_dec y x)...
SCase "x<>y".
(* If [x <> y], then the IH and context invariance allow us to show that
[Gamma,x:U,y:T11 |- t0 : T12] =>
[Gamma,y:T11,x:U |- t0 : T12] =>
[Gamma,y:T11 |- [x:=v]t0 : T12] *)
apply IHt. eapply context_invariance...
intros z Hafi. unfold extend.
destruct (eq_id_dec y z)...
subst. rewrite neq_id...
Qed.
Theorem preservation : forall t t' T,
has_type empty t T ->
t ==> t' ->
has_type empty t' T.
Proof with eauto.
intros t t' T HT.
(* Theorem: If [empty |- t : T] and [t ==> t'], then [empty |- t' : T]. *)
remember (@empty ty) as Gamma. generalize dependent HeqGamma.
generalize dependent t'.
(* Proof: By induction on the given typing derivation. Many cases are
contradictory ([T_Var], [T_Abs]). We show just the interesting ones. *)
has_type_cases (induction HT) Case;
intros t' HeqGamma HE; subst; inversion HE; subst...
Case "T_App".
(* If the last rule used was [T_App], then [t = t1 t2], and three rules
could have been used to show [t ==> t']: [ST_App1], [ST_App2], and
[ST_AppAbs]. In the first two cases, the result follows directly from
the IH. *)
inversion HE; subst...
SCase "ST_AppAbs".
(* For the third case, suppose
[t1 = tabs x T11 t12]
and
[t2 = v2].
We must show that [empty |- [x:=v2]t12 : T2].
We know by assumption that
[empty |- tabs x T11 t12 : T1->T2]
and by inversion
[x:T1 |- t12 : T2]
We have already proven [substitution_preserves_typing], and
[empty |- v2 : T1]
holds by assumption, so we are done. *)
apply substitution_preserves_typing with T1...
inversion HT1...
Case "T_Fst".
inversion HT...
Case "T_Snd".
inversion HT...
Qed.
(** [] *)
(* ###################################################################### *)
(** *** Determinism *)
Lemma novalue : forall t t',
t ==> t' -> ~ value t.
Proof. unfold not. intros. induction H; inversion H0; eauto. Qed.
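(* The [find_step] tactic below looks for a hypothesis of the form [_ ==> _]
   and turns it into a [~ value _] fact via [novalue]; this lets the
   determinism proof dismiss the cases in which one derivation steps a
   subterm that the other derivation requires to be a value. *)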
Ltac find_step :=
match goal with
| H: _ ==> _ |- _ => apply novalue in H
end.
Lemma step_deterministic :
deterministic step.
Proof.
unfold deterministic. intros. generalize dependent y2.
induction H; intros y2 H'; inversion H'; subst;
eauto using f_equal, f_equal2, f_equal3; repeat find_step; exfalso; eauto.
Qed.
(* ###################################################################### *)
(** * Normalization *)
(** Now for the actual normalization proof.
Our goal is to prove that every well-typed term evaluates to a
normal form. In fact, it turns out to be convenient to prove
something slightly stronger, namely that every well-typed term
evaluates to a _value_. This follows from the weaker property
anyway via the Progress lemma (why?) but otherwise we don't need
Progress, and we didn't bother re-proving it above.
Here's the key definition: *)
Definition halts (t:tm) : Prop := exists t', t ==>* t' /\ value t'.
(** A trivial fact: *)
Lemma value_halts : forall v, value v -> halts v.
Proof.
intros v H. unfold halts.
exists v. split.
apply multi_refl.
assumption.
Qed.
(** The key issue in the normalization proof (as in many proofs by
induction) is finding a strong enough induction hypothesis. To this
end, we begin by defining, for each type [T], a set [R_T] of closed
terms of type [T]. We will specify these sets using a relation [R]
and write [R T t] when [t] is in [R_T]. (The sets [R_T] are sometimes
called _saturated sets_ or _reducibility candidates_.)
Here is the definition of [R] for the base language:
- [R bool t] iff [t] is a closed term of type [bool] and [t] halts in a value
- [R (T1 -> T2) t] iff [t] is a closed term of type [T1 -> T2] and [t] halts
in a value _and_ for any term [s] such that [R T1 s], we have [R
T2 (t s)]. *)
(** This definition gives us the strengthened induction hypothesis that we
need. Our primary goal is to show that all _programs_ ---i.e., all
closed terms of base type---halt. But closed terms of base type can
contain subterms of functional type, so we need to know something
about these as well. Moreover, it is not enough to know that these
subterms halt, because the application of a normalized function to a
normalized argument involves a substitution, which may enable more
evaluation steps. So we need a stronger condition for terms of
functional type: not only should they halt themselves, but, when
applied to halting arguments, they should yield halting results.
The form of [R] is characteristic of the _logical relations_ proof
technique. (Since we are just dealing with unary relations here, we
could perhaps more properly say _logical predicates_.) If we want to
prove some property [P] of all closed terms of type [A], we proceed by
proving, by induction on types, that all terms of type [A] _possess_
property [P], all terms of type [A->A] _preserve_ property [P], all
terms of type [(A->A)->(A->A)] _preserve the property of preserving_
property [P], and so on. We do this by defining a family of
predicates, indexed by types. For the base type [A], the predicate is
just [P]. For functional types, it says that the function should map
values satisfying the predicate at the input type to values satisfying
the predicate at the output type.
When we come to formalize the definition of [R] in Coq, we hit a
problem. The most obvious formulation would be as a parameterized
Inductive proposition like this:
Inductive R : ty -> tm -> Prop :=
| R_bool : forall b t, has_type empty t TBool ->
halts t ->
R TBool t
| R_arrow : forall T1 T2 t, has_type empty t (TArrow T1 T2) ->
halts t ->
(forall s, R T1 s -> R T2 (tapp t s)) ->
R (TArrow T1 T2) t.
Unfortunately, Coq rejects this definition because it violates the
_strict positivity requirement_ for inductive definitions, which says
that the type being defined must not occur to the left of an arrow in
the type of a constructor argument. Here, it is the third argument to
[R_arrow], namely [(forall s, R T1 s -> R T2 (tapp t s))], and
specifically the [R T1 s] part, that violates this rule. (The
outermost arrows separating the constructor arguments don't count when
applying this rule; otherwise we could never have genuinely inductive
predicates at all!) The reason for the rule is that types defined
with non-positive recursion can be used to build non-terminating
functions, which as we know would be a disaster for Coq's logical
soundness. Even though the relation we want in this case might be
perfectly innocent, Coq still rejects it because it fails the
positivity test.
Fortunately, it turns out that we _can_ define [R] using a
[Fixpoint]: *)
Fixpoint R (T:ty) (t:tm) {struct T} : Prop :=
has_type empty t T /\ halts t /\
(match T with
| TBool => True
| TArrow T1 T2 => (forall s, R T1 s -> R T2 (tapp t s))
| TProd T1 T2 => R T1 (tfst t) /\ R T2 (tsnd t)
end).
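(** For example, unfolding this [Fixpoint] at an arrow type (this is just a
    computation, not an additional definition):
      R (TArrow TBool TBool) t
        = has_type empty t (TArrow TBool TBool)
          /\ halts t
          /\ (forall s, R TBool s -> R TBool (tapp t s))
    and [R TBool s] in turn unfolds to
      has_type empty s TBool /\ halts s /\ True. *)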
(** As immediate consequences of this definition, we have that every
element of every set [R_T] halts in a value and is well typed (in the
empty context) with type [T]: *)
Lemma R_halts : forall {T} {t}, R T t -> halts t.
Proof.
intros. destruct T; unfold R in H; inversion H; inversion H1; assumption.
Qed.
Lemma R_typable_empty : forall {T} {t}, R T t -> has_type empty t T.
Proof.
intros. destruct T; unfold R in H; inversion H; inversion H1; assumption.
Qed.
(** Now we proceed to show the main result, which is that every
well-typed term of type [T] is an element of [R_T]. Together with
[R_halts], that will show that every well-typed term halts in a
value. *)
(* ###################################################################### *)
(** ** Membership in [R_T] is invariant under evaluation *)
(** We start with a preliminary lemma that shows a kind of strong
preservation property, namely that membership in [R_T] is _invariant_
under evaluation. We will need this property in both directions,
i.e. both to show that a term in [R_T] stays in [R_T] when it takes a
forward step, and to show that any term that ends up in [R_T] after a
step must have been in [R_T] to begin with.
First of all, an easy preliminary lemma. Note that in the forward
direction the proof depends on the fact that our language is
deterministic. This lemma might still be true for non-deterministic
languages, but the proof would be harder! *)
Lemma step_preserves_halting : forall t t', (t ==> t') -> (halts t <-> halts t').
Proof.
intros t t' ST. unfold halts.
split.
Case "->".
intros [t'' [STM V]].
inversion STM; subst.
apply ex_falso_quodlibet. apply value__normal in V. unfold normal_form in V. apply V. exists t'. auto.
rewrite (step_deterministic _ _ _ ST H). exists t''. split; assumption.
Case "<-".
intros [t'0 [STM V]].
exists t'0. split; eauto.
Qed.
(** Now the main lemma, which comes in two parts, one for each
direction. Each proceeds by induction on the structure of the type
[T]. In fact, this is where we make fundamental use of the
finiteness of types.
One requirement for staying in [R_T] is to stay in type [T]. In the
forward direction, we get this from ordinary type Preservation. *)
Lemma step_preserves_R : forall T t t', (t ==> t') -> R T t -> R T t'.
Proof.
induction T; intros t t' E Rt; unfold R; fold R; unfold R in Rt; fold R in Rt;
destruct Rt as [typable_empty_t [halts_t RRt]].
(* TBool *)
split. eapply preservation; eauto.
split. apply (step_preserves_halting _ _ E); eauto.
auto.
(* TArrow *)
split. eapply preservation; eauto.
split. apply (step_preserves_halting _ _ E); eauto.
intros.
eapply IHT2.
apply ST_App1. apply E.
apply RRt; auto.
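(* TProd *)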
repeat split; eauto using preservation.
rewrite <- step_preserves_halting; eassumption.
inversion RRt. eauto.
inversion RRt. eauto.
Qed.
(** The generalization to multiple steps is trivial: *)
Lemma multistep_preserves_R : forall T t t',
(t ==>* t') -> R T t -> R T t'.
Proof.
intros T t t' STM; induction STM; intros.
assumption.
apply IHSTM. eapply step_preserves_R. apply H. assumption.
Qed.
(** In the reverse direction, we must add the fact that [t] has type
[T] before stepping as an additional hypothesis. *)
Lemma step_preserves_R' : forall T t t',
has_type empty t T -> (t ==> t') -> R T t' -> R T t.
Proof.
intros T. induction T; simpl; intuition; try rewrite step_preserves_halting;
eauto 6 using R_typable_empty.
Qed.
Lemma multistep_preserves_R' : forall T t t',
has_type empty t T -> (t ==>* t') -> R T t' -> R T t.
Proof.
intros T t t' HT STM.
induction STM; intros.
assumption.
eapply step_preserves_R'. assumption. apply H. apply IHSTM.
eapply preservation; eauto. auto.
Qed.
(* ###################################################################### *)
(** ** Closed instances of terms of type [T] belong to [R_T] *)
(** Now we proceed to show that every term of type [T] belongs to
[R_T]. Here, the induction will be on typing derivations (it would be
surprising to see a proof about well-typed terms that did not
somewhere involve induction on typing derivations!). The only
technical difficulty here is in dealing with the abstraction case.
Since we are arguing by induction, the demonstration that a term
[tabs x T1 t2] belongs to [R_(T1->T2)] should involve applying the
induction hypothesis to show that [t2] belongs to [R_(T2)]. But
[R_(T2)] is defined to be a set of _closed_ terms, while [t2] may
contain [x] free, so this does not make sense.
This problem is resolved by using a standard trick to suitably
generalize the induction hypothesis: instead of proving a statement
involving a closed term, we generalize it to cover all closed
_instances_ of an open term [t]. Informally, the statement of the
lemma will look like this:
If [x1:T1,..xn:Tn |- t : T] and [v1,...,vn] are values such that
[R T1 v1], [R T2 v2], ..., [R Tn vn], then
[R T ([x1:=v1][x2:=v2]...[xn:=vn]t)].
The proof will proceed by induction on the typing derivation
[x1:T1,..xn:Tn |- t : T]; the most interesting case will be the one
for abstraction. *)
(* ###################################################################### *)
(** *** Multisubstitutions, multi-extensions, and instantiations *)
(** However, before we can proceed to formalize the statement and
proof of the lemma, we'll need to build some (rather tedious)
machinery to deal with the fact that we are performing _multiple_
substitutions on term [t] and _multiple_ extensions of the typing
context. In particular, we must be precise about the order in which
the substitutions occur and how they act on each other. Often these
details are simply elided in informal paper proofs, but of course Coq
won't let us do that. Since here we are substituting closed terms, we
don't need to worry about how one substitution might affect the term
put in place by another. But we still do need to worry about the
_order_ of substitutions, because it is quite possible for the same
identifier to appear multiple times among the [x1,...xn] with
different associated [vi] and [Ti].
To make everything precise, we will assume that environments are
extended from left to right, and multiple substitutions are performed
from right to left. To see that this is consistent, suppose we have
an environment written as [...,y:bool,...,y:nat,...] and a
corresponding term substitution written as [...[y:=(tbool
true)]...[y:=(tnat 3)]...t]. Since environments are extended from
left to right, the binding [y:nat] hides the binding [y:bool]; since
substitutions are performed right to left, we do the substitution
[y:=(tnat 3)] first, so that the substitution [y:=(tbool true)] has
no effect. Substitution thus correctly preserves the type of the term.
With these points in mind, the following definitions should make sense.
A _multisubstitution_ is the result of applying a list of
substitutions, which we call an _environment_. *)
Definition env := list (id * tm).
Fixpoint msubst (ss:env) (t:tm) {struct ss} : tm :=
match ss with
| nil => t
| ((x,s)::ss') => msubst ss' ([x:=s]t)
end.
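(** For example, for any identifier [x], the head binding is applied first,
    so it hides any later bindings of the same name (a small worked example,
    not part of the development):
      msubst ((x,ttrue)::(x,tfalse)::nil) (tvar x)
      = msubst ((x,tfalse)::nil) ttrue
      = ttrue
    since substituting into the closed term [ttrue] has no effect. *)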
(** We need similar machinery to talk about repeated extension of a
typing context using a list of (identifier, type) pairs, which we
call a _type assignment_. *)
Definition tass := list (id * ty).
Fixpoint mextend (Gamma : context) (xts : tass) :=
match xts with
| nil => Gamma
| ((x,v)::xts') => extend (mextend Gamma xts') x v
end.
(** We will need some simple operations that work uniformly on
environments and type assignments. *)
Fixpoint lookup {X:Set} (k : id) (l : list (id * X)) {struct l} : option X :=
match l with
| nil => None
| (j,x) :: l' =>
if eq_id_dec j k then Some x else lookup k l'
end.
Fixpoint drop {X:Set} (n:id) (nxs:list (id * X)) {struct nxs} : list (id * X) :=
match nxs with
| nil => nil
| ((n',x)::nxs') => if eq_id_dec n' n then drop n nxs' else (n',x)::(drop n nxs')
end.
(** An _instantiation_ combines a type assignment and a value
environment with the same domains, where corresponding elements are
in [R]. *)
Inductive instantiation : tass -> env -> Prop :=
| V_nil : instantiation nil nil
| V_cons : forall x T v c e, value v -> R T v -> instantiation c e -> instantiation ((x,T)::c) ((x,v)::e).
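(** For example, for any identifier [x] (a sketch, not used below):
      instantiation ((x,TBool)::nil) ((x,ttrue)::nil)
    holds by [V_cons] and [V_nil], since [ttrue] is a value and
    [R TBool ttrue] holds: it is well typed in the empty context, it halts,
    and the [TBool] case of [R] imposes no further condition. *)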
(** We now proceed to prove various properties of these definitions. *)
(* ###################################################################### *)
(** *** More Substitution Facts *)
(** First we need some additional lemmas on (ordinary) substitution. *)
Lemma vacuous_substitution : forall t x,
~ appears_free_in x t ->
forall t', [x:=t']t = t.
Proof.
unfold not. intros. induction t; simpl; intros; try apply f_equal3;
auto using f_equal, f_equal2; destruct (eq_id_dec x i); auto.
subst. exfalso. auto.
Qed.
Lemma subst_closed: forall t,
closed t ->
forall x t', [x:=t']t = t.
Proof.
intros. apply vacuous_substitution. apply H. Qed.
Lemma subst_not_afi : forall t x v, closed v -> ~ appears_free_in x ([x:=v]t).
Proof with eauto. (* rather slow this way *)
unfold closed, not.
t_cases (induction t) Case; intros x v P A; simpl in A.
Case "tvar".
destruct (eq_id_dec x i)...
inversion A; subst. auto.
Case "tapp".
inversion A; subst...
Case "tabs".
destruct (eq_id_dec x i)...
inversion A; subst...
inversion A; subst...
Case "tpair".
inversion A; subst...
Case "tfst".
inversion A; subst...
Case "tsnd".
inversion A; subst...
Case "ttrue".
inversion A.
Case "tfalse".
inversion A.
Case "tif".
inversion A; subst...
Qed.
Lemma duplicate_subst : forall t' x t v,
closed v -> [x:=t]([x:=v]t') = [x:=v]t'.
Proof.
intros. eapply vacuous_substitution. apply subst_not_afi. auto.
Qed.
Lemma swap_subst : forall t x x1 v v1, x <> x1 -> closed v -> closed v1 ->
[x1:=v1]([x:=v]t) = [x:=v]([x1:=v1]t).
Proof with eauto.
t_cases (induction t) Case; intros; simpl; eauto using f_equal, f_equal2, f_equal3.
Case "tvar".
destruct (eq_id_dec x i); destruct (eq_id_dec x1 i).
subst. apply ex_falso_quodlibet...
subst. simpl. rewrite eq_id. apply subst_closed...
subst. simpl. rewrite eq_id. rewrite subst_closed...
simpl. rewrite neq_id... rewrite neq_id...
apply f_equal3; eauto. destruct (eq_id_dec x i); eauto. destruct (eq_id_dec x1 i); eauto.
Qed.
(* ###################################################################### *)
(** *** Properties of multi-substitutions *)
Lemma msubst_closed: forall t, closed t -> forall ss, msubst ss t = t.
Proof.
induction ss.
reflexivity.
destruct a. simpl. rewrite subst_closed; assumption.
Qed.
(** Closed environments are those that contain only closed terms. *)
Fixpoint closed_env (env:env) {struct env} :=
match env with
| nil => True
| (x,t)::env' => closed t /\ closed_env env'
end.
(** Next come a series of lemmas characterizing how [msubst] of closed terms
distributes over [subst] and over each term form. *)
Lemma subst_msubst: forall env x v t, closed v -> closed_env env ->
msubst env ([x:=v]t) = [x:=v](msubst (drop x env) t).
Proof.
induction env0; intros.
auto.
destruct a. simpl.
inversion H0. fold closed_env in H2.
destruct (eq_id_dec i x).
subst. rewrite duplicate_subst; auto.
simpl. rewrite swap_subst; eauto.
Qed.
Lemma msubst_var: forall ss x, closed_env ss ->
msubst ss (tvar x) =
match lookup x ss with
| Some t => t
| None => tvar x
end.
Proof.
induction ss; intros.
reflexivity.
destruct a.
simpl. destruct (eq_id_dec i x).
apply msubst_closed. inversion H; auto.
apply IHss. inversion H; auto.
Qed.
Lemma msubst_abs: forall ss x T t,
msubst ss (tabs x T t) = tabs x T (msubst (drop x ss) t).
Proof.
induction ss; intros.
reflexivity.
destruct a.
simpl. destruct (eq_id_dec i x); simpl; auto.
Qed.
Lemma msubst_app : forall ss t1 t2, msubst ss (tapp t1 t2) = tapp (msubst ss t1) (msubst ss t2).
Proof.
induction ss; intros.
reflexivity.
destruct a.
simpl. rewrite <- IHss. auto.
Qed.
(** You'll need similar functions for the other term constructors. *)
(* FILL IN HERE *)
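(* In this file, the corresponding lemmas ([msubst_true_false], [msubst_pair],
   [msubst_fst_snd], and [msubst_if]) are proved further below, just before
   the main [msubst_R] lemma. *)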
(* ###################################################################### *)
(** *** Properties of multi-extensions *)
(** We need to connect the behavior of type assignments with that of their
corresponding contexts. *)
Lemma mextend_lookup : forall (c : tass) (x:id), lookup x c = (mextend empty c) x.
Proof.
induction c; intros.
auto.
destruct a. unfold lookup, mextend, extend. destruct (eq_id_dec i x); auto.
Qed.
Lemma mextend_drop : forall (c: tass) Gamma x x',
mextend Gamma (drop x c) x' = if eq_id_dec x x' then Gamma x' else mextend Gamma c x'.
induction c; intros.
destruct (eq_id_dec x x'); auto.
destruct a. simpl.
destruct (eq_id_dec i x).
subst. rewrite IHc.
destruct (eq_id_dec x x'). auto. unfold extend. rewrite neq_id; auto.
simpl. unfold extend. destruct (eq_id_dec i x').
subst.
destruct (eq_id_dec x x').
subst. exfalso. auto.
auto.
auto.
Qed.
(* ###################################################################### *)
(** *** Properties of Instantiations *)
(** These are straightforward. *)
Lemma instantiation_domains_match: forall {c} {e},
instantiation c e -> forall {x} {T}, lookup x c = Some T -> exists t, lookup x e = Some t.
Proof.
intros c e V. induction V; intros x0 T0 C.
solve by inversion.
simpl in *.
destruct (eq_id_dec x x0); eauto.
Qed.
Lemma instantiation_env_closed : forall c e, instantiation c e -> closed_env e.
Proof.
intros c e V; induction V; intros.
econstructor.
unfold closed_env. fold closed_env.
split. eapply typable_empty__closed. eapply R_typable_empty. eauto.
auto.
Qed.
Lemma instantiation_R : forall c e, instantiation c e ->
forall x t T, lookup x c = Some T ->
lookup x e = Some t -> R T t.
Proof.
intros c e V. induction V; intros x' t' T' G E.
solve by inversion.
unfold lookup in *. destruct (eq_id_dec x x').
inversion G; inversion E; subst. auto.
eauto.
Qed.
Lemma instantiation_drop : forall c env,
instantiation c env -> forall x, instantiation (drop x c) (drop x env).
Proof.
intros c e V. induction V.
intros. simpl. constructor.
intros. unfold drop. destruct (eq_id_dec x x0); auto. constructor; eauto.
Qed.
(* ###################################################################### *)
(** *** Congruence lemmas on multistep *)
(** We'll need just a few of these; add them as the demand arises. *)
Lemma multistep_App2 : forall v t t',
value v -> (t ==>* t') -> (tapp v t) ==>* (tapp v t').
Proof.
intros v t t' V STM. induction STM.
apply multi_refl.
eapply multi_step.
apply ST_App2; eauto. auto.
Qed.
(* FILL IN HERE *)
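(* The remaining congruence lemmas used in this file ([mstep_t1_tp_pair],
   [mstep_t2_tp_pair], and [mstep_t0_tif]) are proved further below, just
   before the main [msubst_R] lemma. *)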
(* ###################################################################### *)
(** *** The R Lemma. *)
(** We finally put everything together.
The key lemma about preservation of typing under substitution can
be lifted to multi-substitutions: *)
Lemma msubst_preserves_typing : forall c e,
instantiation c e ->
forall Gamma t S, has_type (mextend Gamma c) t S ->
has_type Gamma (msubst e t) S.
Proof.
induction 1; intros.
simpl in H. simpl. auto.
simpl in H2. simpl.
apply IHinstantiation.
eapply substitution_preserves_typing; eauto.
apply (R_typable_empty H0).
Qed.
(** And at long last, we work toward the main lemma, [msubst_R]; first, a few more helper lemmas. *)
Lemma multi_preservation : forall t t' T,
has_type empty t T ->
t ==>* t' ->
has_type empty t' T.
Proof. intros. induction H0; eauto using preservation. Qed.
Lemma msubst_true_false : forall env t,
t = ttrue \/ t = tfalse -> msubst env t = t.
Proof.
intros env. induction env; intros.
eauto.
inversion H; subst; simpl; destruct a; rewrite IHenv; eauto.
Qed.
Lemma msubst_pair : forall env t1 t2,
msubst env (tpair t1 t2) = tpair (msubst env t1) (msubst env t2).
Proof.
intros env. induction env; intros.
eauto.
simpl. destruct a. rewrite IHenv. eauto.
Qed.
Lemma msubst_fst_snd : forall env tp t,
tp = tfst \/ tp = tsnd -> msubst env (tp t) = tp (msubst env t).
Proof.
intros env. induction env; intros.
eauto.
inversion H; subst; simpl; destruct a; rewrite IHenv; eauto.
Qed.
Lemma msubst_if : forall env t0 t1 t2,
msubst env (tif t0 t1 t2) = tif (msubst env t0) (msubst env t1) (msubst env t2).
Proof.
intros env. induction env; intros.
eauto.
simpl. destruct a. rewrite IHenv. eauto.
Qed.
Lemma R_typable_empty_halts : forall {T t},
R T t -> has_type empty t T /\ halts t.
Proof. eauto using R_typable_empty, R_halts. Qed.
Lemma halts_v_pair : forall v1 t2,
value v1 -> halts t2 -> halts (tpair v1 t2).
Proof.
unfold halts. intros. inversion H0 as [t2' [step_t2' val_t2']]. clear H0. induction step_t2'.
eauto.
destruct (IHstep_t2' val_t2') as [t' [step_t' val_t']]. eauto.
Qed.
Lemma halts_pair : forall t1 t2,
halts t1 -> halts t2 -> halts (tpair t1 t2).
Proof.
intros. inversion H as [t1' [step_t1' val_t1']]. clear H. induction step_t1'.
apply halts_v_pair; auto.
unfold halts in *. destruct (IHstep_t1' val_t1') as [t' [step_t' val_t']]. eauto.
Qed.
Lemma mstep_t1_tp_pair : forall tp t1 t1' t2,
tp = tfst \/ tp = tsnd ->
t1 ==>* t1' ->
tp (tpair t1 t2) ==>* tp (tpair t1' t2).
Proof. intros. induction H0; eauto; inversion H; subst; eauto. Qed.
Lemma mstep_t2_tp_pair : forall tp v1 t2 t2',
tp = tfst \/ tp = tsnd ->
value v1 ->
t2 ==>* t2' ->
tp (tpair v1 t2) ==>* tp (tpair v1 t2').
Proof. intros. induction H1; eauto; inversion H; subst; eauto. Qed.
Lemma mstep_t0_tif : forall t0 t0' t1 t2,
t0 ==>* t0' ->
tif t0 t1 t2 ==>* tif t0' t1 t2.
Proof. intros. induction H; eauto. Qed.
Lemma pair_preserves_R : forall t1 t2 T1 T2,
R T1 t1 ->
R T2 t2 ->
R (TProd T1 T2) (tpair t1 t2).
Proof.
intros t1 t2 T1 T2 HR1 HR2. simpl.
( destruct (R_typable_empty_halts HR1) as [HT1 [t1' [mstep_t1' val_t1']]]
; destruct (R_typable_empty_halts HR2) as [HT2 [t2' [mstep_t2' val_t2']]]
). repeat split; eauto using R_halts, halts_pair;
[ apply multistep_preserves_R' with t1'
| apply multistep_preserves_R' with t2'
]; eauto using multistep_preserves_R;
( eapply multi_trans; try apply mstep_t1_tp_pair; eauto
; eapply multi_trans; try apply mstep_t2_tp_pair; eauto
).
Qed.
Lemma if_preserves_R : forall t0 t1 t2 T,
R TBool t0 ->
R T t1 ->
R T t2 ->
R T (tif t0 t1 t2).
Proof.
intros t0 t1 t2 T HR0 HR1 HR2.
( destruct (R_typable_empty_halts HR0) as [HT0 [t0' [mstep_t0' val_t0']]]
; destruct (R_typable_empty_halts HR1) as [HT1 [t1' [mstep_t1' val_t1']]]
; destruct (R_typable_empty_halts HR2) as [HT2 [t2' [mstep_t2' val_t2']]]
). inversion val_t0'; subst;
try solve [apply (multi_preservation _ _ TBool) in mstep_t0'; eauto; inversion mstep_t0'];
[ apply multistep_preserves_R' with t1'
| apply multistep_preserves_R' with t2'
]; eauto using multistep_preserves_R, multi_trans, mstep_t0_tif.
Qed.
Lemma msubst_R : forall c env t T,
has_type (mextend empty c) t T -> instantiation c env -> R T (msubst env t).
Proof.
intros c env0 t T HT V.
generalize dependent env0.
(* We need to generalize the hypothesis a bit before setting up the induction. *)
remember (mextend empty c) as Gamma.
assert (forall x, Gamma x = lookup x c).
intros. rewrite HeqGamma. rewrite mextend_lookup. auto.
clear HeqGamma.
generalize dependent c.
has_type_cases (induction HT) Case; intros.
Case "T_Var".
rewrite H0 in H. destruct (instantiation_domains_match V H) as [t P].
eapply instantiation_R; eauto.
rewrite msubst_var. rewrite P. auto. eapply instantiation_env_closed; eauto.
Case "T_Abs".
rewrite msubst_abs.
(* We'll need variants of the following fact several times, so it's simplest to
establish it just once. *)
assert (WT: has_type empty (tabs x T11 (msubst (drop x env0) t12)) (TArrow T11 T12)).
eapply T_Abs. eapply msubst_preserves_typing. eapply instantiation_drop; eauto.
eapply context_invariance. apply HT.
intros.
unfold extend. rewrite mextend_drop. destruct (eq_id_dec x x0). auto.
rewrite H.
clear - c n. induction c.
simpl. rewrite neq_id; auto.
simpl. destruct a. unfold extend. destruct (eq_id_dec i x0); auto.
unfold R. fold R. split.
auto.
split. apply value_halts. apply v_abs.
intros.
destruct (R_halts H0) as [v [P Q]].
pose proof (multistep_preserves_R _ _ _ P H0).
apply multistep_preserves_R' with (msubst ((x,v)::env0) t12).
eapply T_App. eauto.
apply R_typable_empty; auto.
eapply multi_trans. eapply multistep_App2; eauto.
eapply multi_R.
simpl. rewrite subst_msubst.
eapply ST_AppAbs; eauto.
eapply typable_empty__closed.
apply (R_typable_empty H1).
eapply instantiation_env_closed; eauto.
eapply (IHHT ((x,T11)::c)).
intros. unfold extend, lookup. destruct (eq_id_dec x x0); auto.
constructor; auto.
Case "T_App".
rewrite msubst_app.
destruct (IHHT1 c H env0 V) as [_ [_ P1]].
pose proof (IHHT2 c H env0 V) as P2. fold R in P1. auto.
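(* T_Pair *)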
rewrite msubst_pair.
( assert (P1 := IHHT1 c H env0 V);
assert (P2 := IHHT2 c H env0 V)
). eauto using pair_preserves_R.
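(* T_Fst *)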
assert (H' := IHHT c H env0 V). simpl in H'. rewrite msubst_fst_snd; intuition.
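(* T_Snd *)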
assert (H' := IHHT c H env0 V). simpl in H'. rewrite msubst_fst_snd; intuition.
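(* T_True *)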
unfold R. rewrite msubst_true_false; intuition. eauto using value_halts.
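(* T_False *)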
unfold R. rewrite msubst_true_false; intuition. eauto using value_halts.
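(* T_If *)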
rewrite msubst_if.
( assert (P1 := IHHT1 c H env0 V);
assert (P2 := IHHT2 c H env0 V);
assert (P3 := IHHT3 c H env0 V)
). eauto using if_preserves_R.
Qed.
(* ###################################################################### *)
(** *** Normalization Theorem *)
Theorem normalization : forall t T, has_type empty t T -> halts t.
Proof.
intros.
replace t with (msubst nil t).
eapply R_halts.
eapply msubst_R; eauto. instantiate (2:= nil). eauto.
eapply V_nil.
auto.
Qed.
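(** For instance, [normalization] immediately gives that any closed,
    well-typed term halts: for any identifier [x], the term
    [tapp (tabs x TBool (tvar x)) ttrue] has type [TBool] in the empty
    context, so it halts. *)
(* ###################################################################### *)
(** * Palindromes *)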
Inductive pal {X} : list X -> Prop :=
| pnil : pal nil
| pone : forall (v : X), pal [v]
| pind : forall (v : X) (l : list X), pal l -> pal (v :: snoc l v).
Theorem pal_rev : forall X (l : list X),
pal (l ++ rev l).
Proof.
intros X l. induction l as [| v' l'].
apply pnil.
simpl. rewrite <- snoc_with_append. apply pind. apply IHl'.
Qed.
Theorem pal_impl_eq_rev : forall X (l : list X),
pal l -> l = rev l.
Proof.
intros X l H. induction H as [| v' | v' l'].
reflexivity.
reflexivity.
simpl. rewrite rev_snoc. rewrite <- IHpal. reflexivity.
Qed.
Theorem plus_to_minus : forall (n m p: nat),
n = m + p -> n - m = p.
Proof.
intros n. induction n as [| n'].
destruct m as [| m'].
intros p H. apply H.
intros p H. inversion H.
destruct m as [| m'].
intros p H. apply H.
intros p H. apply IHn'. inversion H. reflexivity.
Qed.
Theorem minus0 : forall n : nat,
n - 0 = n.
Proof. intros n. apply plus_to_minus. reflexivity. Qed.
Theorem minus_eq_S : forall n m p,
n - m = S p -> n - S m = p.
Proof.
intros n. induction n as [| n'].
intros m p H. inversion H.
intros m. destruct m as [| m'] eqn : d.
intros p H. inversion H. apply minus0.
intros p H. apply IHn'. inversion H. reflexivity.
Qed.
Theorem nat_halves : forall (n : nat),
div2 n + div2 (S n) = n.
Proof.
intros n. induction n as [| n'].
reflexivity.
rewrite plus_comm. simpl. apply f_equal. apply IHn'.
Qed.
Theorem minus_half : forall (n : nat),
n - div2 n = div2 (S n).
Proof. intros n. apply plus_to_minus. rewrite nat_halves. reflexivity. Qed.
Theorem length_snoc : forall X (v : X) (l : list X),
length (snoc l v) = S (length l).
Proof.
intros X v l. induction l as [| v' l'].
reflexivity.
simpl. rewrite IHl'. reflexivity.
Qed.
Theorem length_rev : forall X (l : list X),
length (rev l) = length l.
Proof.
intros X l. induction l as [| v' l'].
reflexivity.
simpl. rewrite length_snoc. rewrite IHl'. reflexivity.
Qed.
Fixpoint take (X : Type) (n : nat) (l : list X) : (list X) :=
match n with
| O => []
| S n' => match l with
| [] => []
| v' :: l' => v' :: take X n' l'
end
end.
Theorem take_nil : forall X (n : nat),
take X n [] = [].
Proof. intros X n. destruct n as [| n']. reflexivity. reflexivity. Qed.
Theorem take_length : forall X (l : list X),
take X (length l) l = l.
Proof. intros X l. induction l as [| v' l']. reflexivity. simpl. rewrite IHl'. reflexivity. Qed.
Theorem take_less_snoc : forall X (l : list X) (v : X) (n : nat),
take X (length l - n) (snoc l v) = take X (length l - n) l.
Proof.
intros X l. induction l as [| v' l'].
intros v n. reflexivity.
intros v n. destruct (length (v' :: l') - n) as [| m'] eqn : d.
reflexivity.
apply minus_eq_S in d. rewrite <- d. simpl. rewrite IHl'. reflexivity.
Qed.
Theorem take_rev_take_rev : forall X (n : nat) (l : list X),
take X n l ++ rev (take X (length l - n) (rev l)) = l.
Proof.
intros X n. induction n as [| n'].
intros l. simpl. rewrite minus0. rewrite <- length_rev.
rewrite take_length. rewrite rev_involutive. reflexivity.
intros l. destruct l as [| v' l'].
reflexivity.
simpl. apply f_equal. rewrite <- length_rev. rewrite take_less_snoc.
rewrite length_rev. apply IHn'.
Qed.
Lemma pal_halves : forall X (l : list X) (n : nat),
pal (take X (div2 n) l ++ rev (take X (div2 (S n)) l)).
Proof.
intros X l. induction l as [| v' l'].
intros n. rewrite take_nil. rewrite take_nil. apply pnil.
intros n. destruct n as [| n'].
apply pnil.
destruct n' as [| n''].
apply pone.
simpl. rewrite <- snoc_with_append. apply pind. apply IHl'.
Qed.
Theorem id_eq_rev_imp_pal : forall X (l : list X),
l = rev l -> pal l.
Proof.
intros X l H. rewrite <- take_rev_take_rev with (n := div2 (length l)). rewrite minus_half.
apply f_equal with (f := take X (div2 (length l))) in H. rewrite H. apply pal_halves.
Qed.