Library Hoare2
Decorated Programs
- The sequential composition of c1 and c2 is locally consistent (with respect to assertions P and R) if c1 is locally consistent (with respect to P and Q) and c2 is locally consistent (with respect to Q and R):
      P
      c1;
      Q
      c2
      R
- An assignment is locally consistent if its precondition is the appropriate substitution of its postcondition:
      P [X |-> a]
      X ::= a
      P
- A conditional is locally consistent (with respect to assertions P and Q) if the assertions at the top of its "then" and "else" branches are exactly P ∧ b and P ∧ ¬b and if its "then" branch is locally consistent (with respect to P ∧ b and Q) and its "else" branch is locally consistent (with respect to P ∧ ¬b and Q):
      P
      IFB b THEN
        P /\ b
        c1
        Q
      ELSE
        P /\ ~b
        c2
        Q
      FI
      Q
- A while loop with precondition P is locally consistent if its postcondition is P ∧ ¬b and if the pre- and postconditions of its body are exactly P ∧ b and P:
      P
      WHILE b DO
        P /\ b
        c1
        P
      END
      P /\ ~b
- A pair of assertions separated by ->> is locally consistent if the first implies the second (in all states):
      P ->>
      P'
Example: Swapping Using Addition and Subtraction
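Here, for reference, is the decorated program that the numbered steps below describe (reconstructed from those steps; the numbers on the right label the assertions):
      X = m /\ Y = n  ->>                               (1)
      (X + Y) - ((X + Y) - Y) = n /\ (X + Y) - Y = m    (2)
    X ::= X + Y;
      X - (X - Y) = n /\ X - Y = m                      (3)
    Y ::= X - Y;
      X - Y = n /\ Y = m                                (4)
    X ::= X - Y
      X = n /\ Y = m                                    (5)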
- We begin with the undecorated program (the unnumbered lines).
- We then add the specification -- i.e., the outer precondition (1) and postcondition (5). In the precondition we use auxiliary variables (parameters) m and n to remember the initial values of variables X and Y, respectively, so that we can refer to them in the postcondition (5).
- We work backwards mechanically starting from (5) all the way to (2). At each step, we obtain the precondition of the assignment from its postcondition by substituting the assigned variable with the right-hand-side of the assignment. For instance, we obtain (4) by substituting X with X - Y in (5), and (3) by substituting Y with X - Y in (4).
- Finally, we verify that (1) logically implies (2) -- i.e., that the step from (1) to (2) is a valid use of the law of consequence. For this we substitute X by m and Y by n and calculate as follows:
        (m + n) - ((m + n) - n) = n /\ (m + n) - n = m
  <->   (m + n) - m = n /\ m = m
  <->   n = n /\ m = m
Example: Simple Conditionals
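The decorated program in question (reconstructed here from the numbered steps below) is:
      True                                        (1)
    IFB X <= Y THEN
        True /\ X <= Y  ->>                       (2)
        (Y - X) + X = Y \/ (Y - X) + Y = X        (3)
      Z ::= Y - X
        Z + X = Y \/ Z + Y = X                    (4)
    ELSE
        True /\ ~(X <= Y)  ->>                    (5)
        (X - Y) + X = Y \/ (X - Y) + Y = X        (6)
      Z ::= X - Y
        Z + X = Y \/ Z + Y = X                    (7)
    FI
      Z + X = Y \/ Z + Y = X                      (8)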
- We start with the outer precondition (1) and postcondition (8).
- We follow the format dictated by the hoare_if rule and copy the postcondition (8) to (4) and (7). We conjoin the precondition (1) with the guard of the conditional to obtain (2). We conjoin (1) with the negated guard of the conditional to obtain (5).
- In order to use the assignment rule and obtain (3), we substitute Z by Y - X in (4). To obtain (6) we substitute Z by X - Y in (7).
- Finally, we verify that (2) implies (3) and (5) implies (6). Both of these implications crucially depend on the ordering of X and Y obtained from the guard. For instance, knowing that X ≤ Y ensures that subtracting X from Y and then adding back X produces Y, as required by the first disjunct of (3). Similarly, knowing that ~(X ≤ Y) ensures that subtracting Y from X and then adding back Y produces X, as needed by the second disjunct of (6). Note that n - m + m = n does not hold for arbitrary natural numbers n and m (for example, 3 - 5 + 5 = 5).
Exercise: 2 stars (if_minus_plus_reloaded)
Fill in valid decorations for the following program:
       True
    IFB X <= Y THEN
                       ->>
      Z ::= Y - X
    ELSE
                       ->>
      Y ::= X + Z
    FI
       Y = X + Z
Example: Reduce to Zero (Trivial Loop)
- Start with the outer precondition (1) and postcondition (6).
- Following the format dictated by the hoare_while rule, we copy (1) to (4). We conjoin (1) with the guard to obtain (2) and with the negation of the guard to obtain (5). Note that, because the outer postcondition (6) does not syntactically match (5), we need a trivial use of the consequence rule from (5) to (6).
- Assertion (3) is the same as (4), because X does not appear in (4), so the substitution in the assignment rule is trivial.
- Finally, the implication between (2) and (3) is also trivial.
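Putting these observations together, the decorated program (reconstructed from the steps above; the formal version follows) is:
      True                       (1)
    WHILE X <> 0 DO
        True /\ X <> 0  ->>      (2)
        True                     (3)
      X ::= X - 1
        True                     (4)
    END
      True /\ X = 0  ->>         (5)
      X = 0                      (6)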
Definition reduce_to_zero' : com :=
WHILE BNot (BEq (AId X) (ANum 0)) DO
X ::= AMinus (AId X) (ANum 1)
END.
Theorem reduce_to_zero_correct' :
{{fun st ⇒ True}}
reduce_to_zero'
{{fun st ⇒ st X = 0}}.
Proof.
unfold reduce_to_zero'.
eapply hoare_consequence_post.
apply hoare_while.
Case "Loop body preserves invariant".
eapply hoare_consequence_pre. apply hoare_asgn.
intros st [HT Hbp]. unfold assn_sub. apply I.
Case "Invariant and negated guard imply postcondition".
intros st [Inv GuardFalse].
unfold bassn in GuardFalse. simpl in GuardFalse.
SearchAbout [not true].
rewrite not_true_iff_false in GuardFalse.
SearchAbout [negb false].
rewrite negb_false_iff in GuardFalse.
SearchAbout [beq_nat true].
apply beq_nat_true in GuardFalse.
apply GuardFalse. Qed.
Example: Division
Finding Loop Invariants
Example: Slow Subtraction
- (a) it must be weak enough to be implied by the loop's precondition, i.e. (1) must imply (2);
- (b) it must be strong enough to imply the loop's postcondition, i.e. (7) must imply (8);
- (c) it must be preserved by one iteration of the loop, i.e. (3) must imply (4).
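For reference, the decorated program that these three conditions refer to (it matches the formal subtract_slowly_dec given near the end of this chapter) is:
      X = m /\ Z = p  ->>                 (1)
      Z - X = p - m                       (2)
    WHILE X <> 0 DO
        Z - X = p - m /\ X <> 0  ->>      (3)
        (Z - 1) - (X - 1) = p - m         (4)
      Z ::= Z - 1;
        Z - (X - 1) = p - m               (5)
      X ::= X - 1
        Z - X = p - m                     (6)
    END
      Z - X = p - m /\ X = 0  ->>         (7)
      Z = p - m                           (8)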
Exercise: Slow Assignment
Exercise: 2 stars (slow_assignment)
A roundabout way of assigning a number currently stored in X to the variable Y is to start Y at 0, then decrement X until it hits 0, incrementing Y at each step. Here is a program that implements this idea:
       X = m
    Y ::= 0;
    WHILE X <> 0 DO
      X ::= X - 1;
      Y ::= Y + 1
    END
       Y = m
Write an informal decorated program showing that this is correct.
☐
Exercise: Slow Addition
Exercise: 3 stars, optional (add_slowly_decoration)
The following program adds the variable X into the variable Z by repeatedly decrementing X and incrementing Z.
    WHILE X <> 0 DO
      Z ::= Z + 1;
      X ::= X - 1
    END
☐
Example: Parity
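(The parity function used below is not shown in this excerpt; presumably it is the standard two-step recursion, mapping even numbers to 0 and odd numbers to 1, which is what the lemmas in the exercise below rely on:)
Fixpoint parity x :=
  match x with
  | 0 ⇒ 0
  | 1 ⇒ 1
  | S (S x') ⇒ parity x'
  end.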
The postcondition does not hold at the beginning of the loop,
since m = parity m does not hold for an arbitrary m, so we
cannot use that as an invariant. To find an invariant that works,
let's think a bit about what this loop does. On each iteration it
decrements X by 2, which preserves the parity of X. So the
parity of X does not change, i.e. it is invariant. The initial
value of X is m, so the parity of X is always equal to the
parity of m. Using parity X = parity m as an invariant we
obtain the following decorated program:
    X = m ->>                               (a - OK)
    parity X = parity m
  WHILE 2 <= X DO
      parity X = parity m /\ 2 <= X  ->>    (c - OK)
      parity (X-2) = parity m
    X ::= X - 2
      parity X = parity m
  END
    parity X = parity m /\ X < 2  ->>       (b - OK)
    X = parity m
With this invariant, conditions (a), (b), and (c) are all
satisfied. For verifying (b), we observe that, when X < 2, we
have parity X = X (we can easily see this in the definition of
parity). For verifying (c), we observe that, when 2 ≤ X, we
have parity X = parity (X-2).
Exercise: 3 stars, optional (parity_formal)
Translate this proof to Coq. Refer to the reduce-to-zero example for ideas. You may find the following two lemmas useful:
Lemma parity_ge_2 : ∀ x,
2 ≤ x →
parity (x - 2) = parity x.
Proof.
induction x; intro. reflexivity.
destruct x. inversion H. inversion H1.
simpl. rewrite <- minus_n_O. reflexivity.
Qed.
Lemma parity_lt_2 : ∀ x,
¬ 2 ≤ x →
parity (x) = x.
Proof.
intros. induction x. reflexivity. destruct x. reflexivity.
apply ex_falso_quodlibet. apply H. omega.
Qed.
Theorem parity_correct : ∀ m,
{{ fun st ⇒ st X = m }}
WHILE BLe (ANum 2) (AId X) DO
X ::= AMinus (AId X) (ANum 2)
END
{{ fun st ⇒ st X = parity m }}.
Proof.
Admitted.
☐
Example: Finding Square Roots
Example: Squaring
Exercise: Factorial
Exercise: 3 stars (factorial)
Recall that n! denotes the factorial of n (i.e. n! = 1*2*...*n). Here is an Imp program that calculates the factorial of the number initially stored in the variable X and puts it in the variable Y:
       X = m
    Y ::= 1;
    WHILE X <> 0 DO
      Y ::= Y * X;
      X ::= X - 1
    END
       Y = m!
Exercise: Min
Exercise: 3 stars (Min_Hoare)
Fill in valid decorations for the following program. For the => steps in your annotations, you may rely (silently) on the following facts about min.
Exercise: 3 stars (two_loops)
Here is a very inefficient way of adding 3 numbers:
    X ::= 0;
    Y ::= 0;
    Z ::= c;
    WHILE X <> a DO
      X ::= X + 1;
      Z ::= Z + 1
    END;
    WHILE Y <> b DO
      Y ::= Y + 1;
      Z ::= Z + 1
    END
Exercise: Power Series
Exercise: 4 stars, optional (dpow2_down)
Here is a program that computes the series: 1 + 2 + 2^2 + ... + 2^m = 2^(m+1) - 1
    X ::= 0;
    Y ::= 1;
    Z ::= 1;
    WHILE X <> m DO
      Z ::= 2 * Z;
      Y ::= Y + Z;
      X ::= X + 1
    END
Write a decorated program for this.
Weakest Preconditions (Advanced)
That is, P is the weakest precondition of c for Q
if (a) P is a precondition for Q and c, and (b) P is the
weakest (easiest to satisfy) assertion that guarantees Q after
executing c.
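Formally, this can be captured by a definition along the following lines (a sketch of the is_wp predicate used in the exercises below, following (a) and (b) above):
Definition is_wp P c Q :=
  {{P}} c {{Q}} ∧
  ∀ P', {{P'}} c {{Q}} → (P' ->> P).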
Exercise: 1 star, optional (wp)
What are the weakest preconditions of the following commands for the following postconditions?
  1) ?
     SKIP
     X = 5
  2) ?
     X ::= Y + Z
     X = 5
  3) ?
     X ::= Y
     X = Y
  4) ?
     IFB X == 0 THEN Y ::= Z + 1 ELSE Y ::= W + 2 FI
     Y = 5
  5) ?
     X ::= 5
     X = 0
  6) ?
     WHILE True DO X ::= 0 END
     X = 0
☐
Exercise: 3 stars, advanced, optional (is_wp_formal)
Prove formally using the definition of hoare_triple that Y ≤ 4 is indeed the weakest precondition of X ::= Y + 1 with respect to postcondition X ≤ 5.
Theorem is_wp_example :
is_wp (fun st ⇒ st Y ≤ 4)
(X ::= APlus (AId Y) (ANum 1)) (fun st ⇒ st X ≤ 5).
Proof.
Admitted.
☐
Exercise: 2 stars, advanced (hoare_asgn_weakest)
Show that the precondition in the rule hoare_asgn is in fact the weakest precondition.
☐
Exercise: 2 stars, advanced, optional (hoare_havoc_weakest)
Show that your havoc_pre rule from the himp_hoare exercise in the Hoare chapter returns the weakest precondition.
Module Himp2.
Import Himp.
Lemma hoare_havoc_weakest : ∀ (P Q : Assertion) (X : id),
{{ P }} HAVOC X {{ Q }} →
P ->> havoc_pre X Q.
Proof.
Admitted.
End Himp2.
☐
Formal Decorated Programs (Advanced)
Syntax
Inductive dcom : Type :=
| DCSkip : Assertion → dcom
| DCSeq : dcom → dcom → dcom
| DCAsgn : id → aexp → Assertion → dcom
| DCIf : bexp → Assertion → dcom → Assertion → dcom
→ Assertion → dcom
| DCWhile : bexp → Assertion → dcom → Assertion → dcom
| DCPre : Assertion → dcom → dcom
| DCPost : dcom → Assertion → dcom.
Tactic Notation "dcom_cases" tactic(first) ident(c) :=
first;
[ Case_aux c "Skip" | Case_aux c "Seq" | Case_aux c "Asgn"
| Case_aux c "If" | Case_aux c "While"
| Case_aux c "Pre" | Case_aux c "Post" ].
Notation "'SKIP' {{ P }}"
:= (DCSkip P)
(at level 10) : dcom_scope.
Notation "l '::=' a {{ P }}"
:= (DCAsgn l a P)
(at level 60, a at next level) : dcom_scope.
Notation "'WHILE' b 'DO' {{ Pbody }} d 'END' {{ Ppost }}"
:= (DCWhile b Pbody d Ppost)
(at level 80, right associativity) : dcom_scope.
Notation "'IFB' b 'THEN' {{ P }} d 'ELSE' {{ P' }} d' 'FI' {{ Q }}"
:= (DCIf b P d P' d' Q)
(at level 80, right associativity) : dcom_scope.
Notation "'->>' {{ P }} d"
:= (DCPre P d)
(at level 90, right associativity) : dcom_scope.
Notation "{{ P }} d"
:= (DCPre P d)
(at level 90) : dcom_scope.
Notation "d '->>' {{ P }}"
:= (DCPost d P)
(at level 80, right associativity) : dcom_scope.
Notation " d ;; d' "
:= (DCSeq d d')
(at level 80, right associativity) : dcom_scope.
Delimit Scope dcom_scope with dcom.
To avoid clashing with the existing Notation definitions
for ordinary commands, we introduce these notations in a special
scope called dcom_scope, and we wrap examples with the
declaration % dcom to signal that we want the notations to be
interpreted in this scope.
Careful readers will note that we've defined two notations for the
DCPre constructor, one with and one without a ->>. The
"without" version is intended to be used to supply the initial
precondition at the very top of the program.
Example dec_while : dcom := (
{{ fun st ⇒ True }}
WHILE (BNot (BEq (AId X) (ANum 0)))
DO
{{ fun st ⇒ True ∧ st X ≠ 0}}
X ::= (AMinus (AId X) (ANum 1))
{{ fun _ ⇒ True }}
END
{{ fun st ⇒ True ∧ st X = 0}} ->>
{{ fun st ⇒ st X = 0 }}
) % dcom.
Fixpoint extract (d:dcom) : com :=
match d with
| DCSkip _ ⇒ SKIP
| DCSeq d1 d2 ⇒ (extract d1 ;; extract d2)
| DCAsgn X a _ ⇒ X ::= a
| DCIf b _ d1 _ d2 _ ⇒ IFB b THEN extract d1 ELSE extract d2 FI
| DCWhile b _ d _ ⇒ WHILE b DO extract d END
| DCPre _ d ⇒ extract d
| DCPost d _ ⇒ extract d
end.
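For instance, erasing the annotations from dec_while (defined above) should give back the plain loop. A small sanity check (a sketch, assuming the usual command notations from the Imp chapter are in scope):
Example extract_dec_while :
  extract dec_while =
  (WHILE BNot (BEq (AId X) (ANum 0)) DO
     X ::= AMinus (AId X) (ANum 1)
   END).
Proof. reflexivity. Qed.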
The choice of exactly where to put assertions in the definition of
dcom is a bit subtle. The simplest thing to do would be to
annotate every dcom with a precondition and postcondition. But
this would result in very verbose programs with a lot of repeated
annotations: for example, a program like SKIP;SKIP would have to
be annotated as
{{P}} ({{P}} SKIP {{P}}) ;; ({{P}} SKIP {{P}}) {{P}},
with pre- and post-conditions on each SKIP, plus identical pre-
and post-conditions on the semicolon!
Instead, the rule we've followed is this:
- The post-condition expected by each dcom d is embedded in d.
- The pre-condition is supplied by the context.
In other words, the invariant of the representation is that a
dcom d together with a precondition P determines a Hoare
triple {{P}} (extract d) {{post d}}, where post is defined as
follows:
Fixpoint post (d:dcom) : Assertion :=
match d with
| DCSkip P ⇒ P
| DCSeq d1 d2 ⇒ post d2
| DCAsgn X a Q ⇒ Q
| DCIf _ _ d1 _ d2 Q ⇒ Q
| DCWhile b Pbody c Ppost ⇒ Ppost
| DCPre _ d ⇒ post d
| DCPost c Q ⇒ Q
end.
Similarly, we can extract the "initial precondition" from a
decorated program.
Fixpoint pre (d:dcom) : Assertion :=
match d with
| DCSkip P ⇒ fun st ⇒ True
| DCSeq c1 c2 ⇒ pre c1
| DCAsgn X a Q ⇒ fun st ⇒ True
| DCIf _ _ t _ e _ ⇒ fun st ⇒ True
| DCWhile b Pbody c Ppost ⇒ fun st ⇒ True
| DCPre P c ⇒ P
| DCPost c Q ⇒ pre c
end.
This function is not doing anything sophisticated like calculating
a weakest precondition; it just recursively searches for an
explicit annotation at the very beginning of the program,
returning default answers for programs that lack an explicit
precondition (like a bare assignment or SKIP).
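On dec_while, for example, these two functions just pick out the outermost annotations (again a small sanity check, under the same assumptions as before):
Example pre_post_dec_while :
  pre dec_while = (fun st ⇒ True) ∧
  post dec_while = (fun st ⇒ st X = 0).
Proof. split; reflexivity. Qed.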
Using pre and post, and assuming that we adopt the convention
of always supplying an explicit precondition annotation at the
very beginning of our decorated programs, we can express what it
means for a decorated program to be correct as follows:
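The natural statement (presumably the dec_correct definition used in the examples at the end of this chapter) is:
Definition dec_correct (d:dcom) :=
  {{pre d}} (extract d) {{post d}}.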
To check whether this Hoare triple is valid, we need a way to
extract the "proof obligations" from a decorated program. These
obligations are often called verification conditions, because
they are the facts that must be verified to see that the
decorations are logically consistent and thus add up to a complete
proof of correctness.
The function verification_conditions takes a dcom d together
with a precondition P and returns a proposition that, if it
can be proved, implies that the triple {{P}} (extract d) {{post d}}
is valid.
It does this by walking over d and generating a big
conjunction including all the "local checks" that we listed when
we described the informal rules for decorated programs. (Strictly
speaking, we need to massage the informal rules a little bit to
add some uses of the rule of consequence, but the correspondence
should be clear.)
Extracting Verification Conditions
Fixpoint verification_conditions (P : Assertion) (d:dcom) : Prop :=
match d with
| DCSkip Q ⇒
(P ->> Q)
| DCSeq d1 d2 ⇒
verification_conditions P d1
∧ verification_conditions (post d1) d2
| DCAsgn X a Q ⇒
(P ->> Q [X |-> a])
| DCIf b P1 d1 P2 d2 Q ⇒
((fun st ⇒ P st ∧ bassn b st) ->> P1)
∧ ((fun st ⇒ P st ∧ ¬ (bassn b st)) ->> P2)
∧ (Q <<->> post d1) ∧ (Q <<->> post d2)
∧ verification_conditions P1 d1
∧ verification_conditions P2 d2
| DCWhile b Pbody d Ppost ⇒
(P ->> post d)
∧ (Pbody <<->> (fun st ⇒ post d st ∧ bassn b st))
∧ (Ppost <<->> (fun st ⇒ post d st ∧ ~(bassn b st)))
∧ verification_conditions Pbody d
| DCPre P' d ⇒
(P ->> P') ∧ verification_conditions P' d
| DCPost d Q ⇒
verification_conditions P d ∧ (post d ->> Q)
end.
And now, the key theorem, which states that verification_conditions
does its job correctly. Not surprisingly, we need to use each of
the Hoare Logic rules at some point in the proof. We have used the
in variants of several tactics before to apply them to values in
the context rather than the goal. An extension of this idea is the
syntax tactic in *, which applies tactic in the goal and every
hypothesis in the context. We most commonly use this facility in
conjunction with the simpl tactic, as below.
Theorem verification_correct : ∀ d P,
verification_conditions P d → {{P}} (extract d) {{post d}}.
Proof.
dcom_cases (induction d) Case; intros P H; simpl in *.
Case "Skip".
eapply hoare_consequence_pre.
apply hoare_skip.
assumption.
Case "Seq".
inversion H as [H1 H2]. clear H.
eapply hoare_seq.
apply IHd2. apply H2.
apply IHd1. apply H1.
Case "Asgn".
eapply hoare_consequence_pre.
apply hoare_asgn.
assumption.
Case "If".
inversion H as [HPre1 [HPre2 [[Hd11 Hd12]
[[Hd21 Hd22] [HThen HElse]]]]].
clear H.
apply IHd1 in HThen. clear IHd1.
apply IHd2 in HElse. clear IHd2.
apply hoare_if.
eapply hoare_consequence_pre; eauto.
eapply hoare_consequence_post; eauto.
eapply hoare_consequence_pre; eauto.
eapply hoare_consequence_post; eauto.
Case "While".
inversion H as [Hpre [[Hbody1 Hbody2] [[Hpost1 Hpost2] Hd]]];
subst; clear H.
eapply hoare_consequence_pre; eauto.
eapply hoare_consequence_post; eauto.
apply hoare_while.
eapply hoare_consequence_pre; eauto.
Case "Pre".
inversion H as [HP Hd]; clear H.
eapply hoare_consequence_pre. apply IHd. apply Hd. assumption.
Case "Post".
inversion H as [Hd HQ]; clear H.
eapply hoare_consequence_post. apply IHd. apply Hd. assumption.
Qed.
Examples
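The verification conditions for a decorated program can simply be computed. For dec_while with the trivial precondition, the output below is presumably the result of something like
  Eval simpl in (verification_conditions (fun st ⇒ True) dec_while).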
==>
(((fun _ : state => True) ->> (fun _ : state => True)) /\
((fun _ : state => True) ->> (fun _ : state => True)) /\
(fun st : state => True /\ bassn (BNot (BEq (AId X) (ANum 0))) st) =
(fun st : state => True /\ bassn (BNot (BEq (AId X) (ANum 0))) st) /\
(fun st : state => True /\ ~ bassn (BNot (BEq (AId X) (ANum 0))) st) =
(fun st : state => True /\ ~ bassn (BNot (BEq (AId X) (ANum 0))) st) /\
(fun st : state => True /\ bassn (BNot (BEq (AId X) (ANum 0))) st) ->>
(fun _ : state => True) [X |-> AMinus (AId X) (ANum 1)]) /\
(fun st : state => True /\ ~ bassn (BNot (BEq (AId X) (ANum 0))) st) ->>
(fun st : state => st X = 0)
In principle, we could certainly work with them using just the
tactics we have so far, but we can make things much smoother with
a bit of automation. We first define a custom verify tactic
that applies splitting repeatedly to turn all the conjunctions
into separate subgoals and then uses omega and eauto (a handy
general-purpose automation tactic that we'll discuss in detail
later) to deal with as many of them as possible.
Lemma ble_nat_true_iff : ∀ n m : nat,
ble_nat n m = true ↔ n ≤ m.
Proof.
intros n m. split. apply ble_nat_true.
generalize dependent m. induction n; intros m H. reflexivity.
simpl. destruct m. inversion H.
apply le_S_n in H. apply IHn. assumption.
Qed.
Lemma ble_nat_false_iff : ∀ n m : nat,
ble_nat n m = false ↔ ~(n ≤ m).
Proof.
intros n m. split. apply ble_nat_false.
generalize dependent m. induction n; intros m H.
apply ex_falso_quodlibet. apply H. apply le_0_n.
simpl. destruct m. reflexivity.
apply IHn. intro Hc. apply H. apply le_n_S. assumption.
Qed.
Tactic Notation "verify" :=
apply verification_correct;
repeat split;
simpl; unfold assert_implies;
unfold bassn in *; unfold beval in *; unfold aeval in *;
unfold assn_sub; intros;
repeat rewrite update_eq;
repeat (rewrite update_neq; [| (intro X; inversion X)]);
simpl in *;
repeat match goal with [H : _ ∧ _ |- _] ⇒ destruct H end;
repeat rewrite not_true_iff_false in *;
repeat rewrite not_false_iff_true in *;
repeat rewrite negb_true_iff in *;
repeat rewrite negb_false_iff in *;
repeat rewrite beq_nat_true_iff in *;
repeat rewrite beq_nat_false_iff in *;
repeat rewrite ble_nat_true_iff in *;
repeat rewrite ble_nat_false_iff in *;
try subst;
repeat
match goal with
[st : state |- _] ⇒
match goal with
[H : st _ = _ |- _] ⇒ rewrite → H in *; clear H
| [H : _ = st _ |- _] ⇒ rewrite <- H in *; clear H
end
end;
try eauto; try omega.
What's left after verify does its thing is "just the interesting
parts" of checking that the decorations are correct. For very
simple examples verify immediately solves the goal (provided
that the annotations are correct).
Another example (formalizing a decorated program we've seen
before):
Example subtract_slowly_dec (m:nat) (p:nat) : dcom := (
{{ fun st ⇒ st X = m ∧ st Z = p }} ->>
{{ fun st ⇒ st Z - st X = p - m }}
WHILE BNot (BEq (AId X) (ANum 0))
DO {{ fun st ⇒ st Z - st X = p - m ∧ st X ≠ 0 }} ->>
{{ fun st ⇒ (st Z - 1) - (st X - 1) = p - m }}
Z ::= AMinus (AId Z) (ANum 1)
{{ fun st ⇒ st Z - (st X - 1) = p - m }} ;;
X ::= AMinus (AId X) (ANum 1)
{{ fun st ⇒ st Z - st X = p - m }}
END
{{ fun st ⇒ st Z - st X = p - m ∧ st X = 0 }} ->>
{{ fun st ⇒ st Z = p - m }}
) % dcom.
Theorem subtract_slowly_dec_correct : ∀ m p,
dec_correct (subtract_slowly_dec m p).
Proof. intros m p. verify. Qed.
Exercise: 3 stars, advanced (slow_assignment_dec)
Example slow_assignment_dec (m:nat) : dcom :=
admit.
Theorem slow_assignment_dec_correct : ∀ m,
dec_correct (slow_assignment_dec m).
Proof. Admitted.
☐
Exercise: 4 stars, advanced (factorial_dec)
Remember the factorial function we worked with before:
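A sketch of that function, under the assumption that it is the standard recursive definition (named real_fact here, as in earlier chapters):
Fixpoint real_fact (n:nat) : nat :=
  match n with
  | O ⇒ 1
  | S n' ⇒ n * (real_fact n')
  end.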
Following the pattern of subtract_slowly_dec, write a decorated
program that implements the factorial function and prove it
correct.
☐