Lecture notes 20210414 Denotations vs Triples 3

Require Import PL.Imp.
Import Assertion_D.
Import Abstract_Pretty_Printing.

Definition FOL_valid (P: Assertion): Prop :=
  forall J: Interp, J ⊨ P.

Definition FOL_sound (T: FirstOrderLogic): Prop :=
  forall P: Assertion, FOL_provable P -> FOL_valid P.

Definition FOL_complete (T: FirstOrderLogic): Prop :=
  forall P: Assertion, FOL_valid P -> FOL_provable P.

Choices of Proof Rules

We have proved the soundness and completeness of Hoare logic, provided that its underlying assertion derivation logic is sound and complete.
Now it is time to ask: is this set of Hoare logic proof rules our unique choice? For example, can we choose the forward assignment rule instead of the backward one? Are other useful proof rules better candidates? In order to answer these questions, the key point is whether other proof rules also preserve Hoare triples' validity. We will show some preservation proofs and demonstrate the relation between these candidates and our Hoare logic.
We assume that the underlying FOL is sound and complete.
Section HoareLogic.

Variable T: FirstOrderLogic.

Hypothesis T_sound: FOL_sound T.

Hypothesis T_complete: FOL_complete T.
A sound and complete FOL has some basic properties.
Theorem TrivialFOL_complete_der: forall P Q,
  FOL_valid (P IMPLY Q) ->
  P ⊢ Q.
Proof.
  intros.
  apply T_complete, H.
Qed.

Theorem TrivialFOL_complete_spec: forall P Q,
  (forall st La, (st, La) ⊨ P -> (st, La) ⊨ Q) ->
  P ⊢ Q.
Proof.
  intros.
  apply TrivialFOL_complete_der.
  intros [st La].
  specialize (H st La).
  simpl.
  tauto.
Qed.

Theorem TrivialFOL_sound_der: forall P Q,
  P ⊢ Q ->
  FOL_valid (P IMPLY Q).
Proof.
  intros.
  apply T_sound, H.
Qed.

Theorem derives_refl: forall P, P ⊢ P.
Proof.
  intros.
  apply TrivialFOL_complete_der.
  unfold FOL_valid.
  intros.
  simpl.
  tauto.
Qed.

Theorem AND_derives: forall P1 Q1 P2 Q2,
  P1 ⊢ P2 ->
  Q1 ⊢ Q2 ->
  P1 AND Q1 ⊢ P2 AND Q2.
Proof.
  intros.
  apply TrivialFOL_complete_der.
  apply TrivialFOL_sound_der in H.
  apply TrivialFOL_sound_der in H0.
  unfold FOL_valid.
  unfold FOL_valid in H.
  unfold FOL_valid in H0.
  intros.
  specialize (H J).
  specialize (H0 J).
  simpl.
  simpl in H.
  simpl in H0.
  tauto.
Qed.

Soundness of the forward assignment rule

We need some additional properties about syntactic substitution in our proof.
Check aexp_sub_spec.

(* aexp_sub_spec:
  forall st1 st2 La (a: aexp') (X: var) (E: aexp'),
  st2 X = aexp'_denote (st1, La) E ->
  (forall Y : var, X <> Y -> st1 Y = st2 Y) ->
  aexp'_denote (st1, La) (a [X ⟼ E]) = aexp'_denote (st2, La) a. *)


Check no_occ_satisfies.

(* no_occ_satisfies: forall st La P x v,
  assn_free_occur x P = O ->
  ((st, La) ⊨ P <-> (st, Lassn_update La x v) ⊨ P). *)


Check Assertion_sub_spec.

(* Assertion_sub_spec: forall st1 st2 La (P: Assertion) (X: var) (E: aexp'),
  st2 X = aexp'_denote (st1, La) E ->
  (forall Y : var, X <> Y -> st1 Y = st2 Y) ->
  ((st1, La) ⊨ P [X ⟼ E]) <-> ((st2, La) ⊨ P). *)

The soundness proof is straightforward.
Lemma hoare_asgn_fwd_sound : forall P (X: var) (x: logical_var) (E: aexp),
  assn_free_occur x P = O ->
  ⊨ {{ P }}
       X ::= E
     {{ EXISTS x, P [X ⟼ x] AND
                  [[X]] == [[ E [X ⟼ x] ]] }}.
Proof.
  unfold valid.
  intros.
  simpl in H1.
  destruct H1.
  pose proof aeval_aexp'_denote st1 La E.
  simpl.
  exists (st1 X).
  assert (forall Y : var, X <> Y -> st2 Y = st1 Y).
  {
    intros.
    rewrite H2 by tauto.
    reflexivity.
  }
  clear H2; rename H4 into H2.
  split.
  + unfold Interp_Lupdate.
    simpl.
    apply Assertion_sub_spec with st1.
    - simpl.
      unfold Lassn_update.
      destruct (Nat.eq_dec x x).
      * reflexivity.
      * exfalso; apply n; reflexivity.
    - exact H2.
    - apply no_occ_satisfies.
      * exact H.
      * exact H0.
  + unfold Interp_Lupdate; simpl.
    erewrite aexp_sub_spec; [| | exact H2].
    - rewrite <- aeval_aexp'_denote in H3.
      rewrite <- aeval_aexp'_denote.
      exact H1.
    - simpl.
      unfold Lassn_update.
      destruct (Nat.eq_dec x x).
      * reflexivity.
      * exfalso; apply n; reflexivity.
Qed.
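
To see what this rule produces on a concrete program, here is a small informal instance. It is not part of the original development, and the concrete assertions are only illustrative.

(* A hedged example: with P := [[X]] == 0 and E := X + 1, the forward
   rule (informally) yields a triple of the shape

       {{ [[X]] == 0 }}
         X ::= X + 1
       {{ EXISTS x, x == 0 AND [[X]] == [[ x + 1 ]] }}

   where the logical variable x records X's old value; the postcondition
   entails [[X]] == 1. *)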

Soundness of sequential composition's associativity

We first prove that (c1;;c2);;c3 is equivalent to c1;;(c2;;c3) via the denotational semantics.
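For reference, here is a paraphrase (not the library's literal definition) of the shape that "unfold BinRel.concat" exposes in the proof below: the composition relates a beginning state to an ending state exactly when some intermediate state connects the two relations.

(* BinRel.concat R S st1 st3  <->  exists st2, R st1 st2 /\ S st2 st3 *)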
Lemma Rel_concat_assoc: forall R1 R2 R3: state -> state -> Prop,
  BinRel.equiv
    (BinRel.concat (BinRel.concat R1 R2) R3)
    (BinRel.concat R1 (BinRel.concat R2 R3)).
Proof.
  unfold BinRel.equiv, BinRel.concat.
  intros; split; intros.
  + destruct H as [b' [[a' [? ?]] ?]].
    exists a'.
    split; [tauto |].
    exists b'.
    tauto.
  + destruct H as [a' [? [b' [? ?]]]].
    exists b'.
    split; [| tauto].
    exists a'.
    tauto.
Qed.

Lemma seq_assoc : forall c1 c2 c3,
  com_equiv ((c1;;c2);;c3) (c1;;(c2;;c3)).
Proof.
  intros.
  unfold com_equiv.
  rewrite ! ceval_CSeq.
  apply Rel_concat_assoc.
Qed.
Based on this, we can prove its Hoare logic counterpart sound.
Lemma seq_assoc_sound : forall P c1 c2 c3 Q,
  ⊨ {{ P }} c1 ;; c2 ;; c3 {{ Q }} <->
  ⊨ {{ P }} (c1 ;; c2) ;; c3 {{ Q }}.
Proof.
  unfold valid.
  intros.
  pose proof seq_assoc c1 c2 c3.
  unfold com_equiv, BinRel.equiv in H.
  split; intros.
  + specialize (H0 La st1 st2).
    apply H0.
    - exact H1.
    - apply H.
      exact H2.
  + specialize (H0 La st1 st2).
    apply H0.
    - exact H1.
    - apply H.
      exact H2.
Qed.

Deriving single-sided consequence rules

Recall that if a proof rule can be derived from the primary rules, it is called a derived rule. For example, we can derive the single-sided consequence rules from the two-sided version. Remark: TrivialFOL is implicitly used as the underlying logic for assertion derivation here. Coq fills it in automatically because FirstOrderLogic is a type class and TrivialFOL is one of its instances. Coq does such automatic filling for all type classes' instances.
Lemma hoare_consequence_pre: forall P P' c Q,
  P ⊢ P' ->
  ⊢ {{ P' }} c {{ Q }} ->
  ⊢ {{ P }} c {{ Q }}.
Proof.
  intros.
  eapply hoare_consequence.
  + exact H.
  + exact H0.
  + apply derives_refl.
Here, we use the fact that the underlying FOL is TrivialFOL.
Qed.

Lemma hoare_consequence_post: forall P c Q Q',
  ⊢ {{ P }} c {{ Q' }} ->
  Q' ⊢ Q ->
  ⊢ {{ P }} c {{ Q }}.
Proof.
  intros.
  eapply hoare_consequence.
  + apply derives_refl.
  + exact H.
  + exact H0.
Qed.

Deriving the forward assignment rule

Now we try to derive the forward assignment rule from our Hoare logic. Our proof needs the following program state construction.
Print state_update.
The following statement reminds us that we are proving things about a logic, not proving things using a Hoare logic. When we use a Hoare logic to prove program correctness, we simply use Coq variables to represent logical variables and use Coq's integer terms to represent integer constants. Now, we have established a formal definition of syntax trees with all of these features. Thus, we need to add extra assumptions like "x does not freely occur in P" to ensure that the existentially quantified variable x only appears in the scope of that existential quantifier.
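To see why this side condition is needed, here is an informal remark that is not part of the original notes: if x did occur freely in P, then in the postcondition
    EXISTS x, P [X ⟼ x] AND [[X]] == [[ E [X ⟼ x] ]]
the occurrences of x contributed by P would be captured by the EXISTS quantifier, so they would no longer refer to the logical variable fixed by the surrounding context, and the rule would no longer be sound in general. The hypothesis assn_free_occur x P = O rules this capture out.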
Lemma hoare_asgn_fwd_der : forall P (X: var) (x: logical_var) (E: aexp),
  assn_free_occur x P = O ->
  ⊢ {{ P }}
       X ::= E
     {{ EXISTS x, P [X ⟼ x] AND
                  [[X]] == [[ E [X ⟼ x] ]] }}.
Proof.
  intros.
  eapply hoare_consequence_pre; [| apply hoare_asgn_bwd].
In short, the forward assignment rule can be derived by a combination of the backward assignment rule and the consequence rule. To complete the proof, we need to prove this assertion derivation.
  apply TrivialFOL_complete_spec.
  intros.
After these lines of transformation, we only need to prove that: as long as P is satisfied,
    (EXISTS x, P [X ⟼ x] AND [[X]] == [[E [X ⟼ x]]]) [X ⟼ E]
is satisfied.
  pose proof state_update_spec st X (aeval E st).
  destruct H1.
  apply Assertion_sub_spec with (state_update st X (aeval E st)).
  { rewrite <- aeval_aexp'_denote. exact H1. }
  { exact H2. }
  (** Here, we turn syntactic substitution into program state
  update using Assertion_sub_spec. *)

  simpl.
This simpl unfolds the semantic definition of EXISTS, AND and equality in the assertion language.
  exists (st X).
  pose proof Lassn_update_spec La x (st X).
  destruct H3.
  split.
  + unfold Interp_Lupdate; simpl.
Again, in order to prove that P [X ⟼ x] is satisfied, we only need to prove that P is satisfied on a modified program state.
    apply Assertion_sub_spec with st.
    { simpl. rewrite H3. reflexivity. }
    { intros. specialize (H2 _ H5). rewrite H2; reflexivity. }
    clear H1 H2 H3 H4.
Now, we want to prove the conclusion from H0. This is easy since we know that x does not freely occur in P, and the two interpretations in H0 and in the conclusion only differ in x's value.
    apply no_occ_satisfies; tauto.
  + rewrite H1.
    unfold Interp_Lupdate; simpl.
Now the equation's right-hand side is the denotation of E [X ⟼ x], which is equivalent to E's denotation on a modified program state. This property is described by aexp_sub_spec.
    assert (aexp'_denote
             (state_update st X (aeval E st), Lassn_update La x (st X))
             (E [X ⟼ x]) =
            aexp'_denote (st, Lassn_update La x (st X)) E).
    {
      apply aexp_sub_spec.
      { simpl. rewrite H3. reflexivity. }
      { intros. specialize (H2 _ H5). rewrite H2. reflexivity. }
    }
    rewrite H5.
Then, the remaining proof goal is trivial.
    rewrite <- aeval_aexp'_denote.
    reflexivity.
Qed.

Inversion of Sequence Rule

When we derived hoare_consequence_pre, we actually proved the following meta-theorem:
    If {{ P' }} c {{ Q }} is provable and P ⊢ P',
    then {{ P }} c {{ Q }} is also provable.
In other words, we prove that if there is a proof tree for the former Hoare triple, we can always construct another proof tree for the latter one. Here is a brief illustration:
    Assumption:

      *- - - - - - - - -*
      |                 |
      | Some Proof Tree |
      |                 |
      *- - - - - - - - -*
       {{ P' }}  c  {{ Q }}           P ⊢ P'


    Conclusion:

    *- - - - - - - - - - - - - - - - - - - - - - - - - - - - -*
    |                                                         |
    |  *- - - - - - - - -*           New Proof Tree           |
    |  |                 |                                    |
    |  | Some Proof Tree |                                    |
    |  |                 |                                    |
    |  *- - - - - - - - -*                       -----------  |
    |   {{ P' }}  c  {{ Q }}           P ⊢ P'       Q ⊢ Q     |
    | ------------------------------------------------------  |
    |                  {{ P }}  c  {{ Q }}                    |
    |                                                         |
    *- - - - - - - - - - - - - - - - - - - - - - - - - - - - -*
Besides such proof tree constructions, we can say something more.
Lemma hoare_seq_inv: forall P c1 c2 R,
  ⊢ {{ P }} c1 ;; c2 {{ R }} ->
  exists Q, (⊢ {{ P }} c1 {{ Q }}) /\ (⊢ {{ Q }} c2 {{ R }}).
This lemma says: if {{ P }} c1;; c2 {{ R }} is provable, then we can always find a middle condition Q. It is worth noticing that the proof tree for {{ P }} c1;; c2 {{ R }} does not necessarily have the following form:
    *- - - - - - - - - - - - - - - - - - - - - - - - - - - - -*
    |                                                         |
    |  *- - - - - - - - -*         *- - - - - - - - -*        |
    |  |                 |         |                 |        |
    |  | Some Proof Tree |         | Some Proof Tree |        |
    |  |                 |         |                 |        |
    |  *- - - - - - - - -*         *- - - - - - - - -*        |
    |   {{ P }}  c1  {{ Q }}        {{ Q }}  c2  {{ R }}      |
    | ------------------------------------------------------  |
    |                {{ P }}  c1;; c2  {{ R }}                |
    |                                                         |
    *- - - - - - - - - - - - - - - - - - - - - - - - - - - - -*
because the last step in the proof might not be hoare_seq. It can also be hoare_consequence. This lemma says: even if the last step in the proof is not hoare_seq, we can always find such an assertion Q and reconstruct proof trees for {{ P }} c1 {{ Q }} and {{ Q }} c2 {{ R }}.
Proof.
  intros.
  remember ({{P}} c1;; c2 {{R}}) as Tr.
  revert P c1 c2 R HeqTr; induction H; intros.
  + injection HeqTr as ?H ?H ?H; subst.
    exists Q.
    tauto.
  + discriminate HeqTr.
  + discriminate HeqTr.
  + discriminate HeqTr.
  + discriminate HeqTr.
  + injection HeqTr as ?H ?H ?H; subst.
    assert (({{P'}} c1;; c2 {{Q'}}) = ({{P'}} c1;; c2 {{Q'}})) by reflexivity.
    specialize (IHprovable P' c1 c2 Q' H2); clear H2.
    destruct IHprovable as [Q [? ?]].
    exists Q.
    split.
    - eapply hoare_consequence_pre.
      * exact H.
      * exact H2.
    - eapply hoare_consequence_post.
      * exact H3.
      * exact H1.
Qed.

Associativity

The following lemma is more interesting. It says: we can always rebuild a proof tree for {{ P }} (c1 ;; c2) ;; c3 {{ Q }} if we discover the internal structure of a proof tree for {{ P }} c1 ;; c2 ;; c3 {{ Q }} (and vice versa).
Lemma seq_assoc_der : forall P c1 c2 c3 Q,
  ⊢ {{ P }} c1 ;; c2 ;; c3 {{ Q }} <->
  ⊢ {{ P }} (c1 ;; c2) ;; c3 {{ Q }}.
Proof.
  intros.
  split; intros.
  + apply hoare_seq_inv in H.
    destruct H as [P1 [? ?]].
    apply hoare_seq_inv in H0.
    destruct H0 as [P2 [? ?]].
    apply hoare_seq with P2.
    - apply hoare_seq with P1.
      * exact H.
      * exact H0.
    - exact H1.
  + apply hoare_seq_inv in H.
    destruct H as [P2 [? ?]].
    apply hoare_seq_inv in H.
    destruct H as [P1 [? ?]].
    apply hoare_seq with P1.
    - exact H.
    - apply hoare_seq with P2.
      * exact H1.
      * exact H0.
Qed.

If And Sequence

Very similarly, we can prove the following facts about Hoare logic proof trees for if-commands.
Lemma hoare_if_inv: forall P b c1 c2 Q,
  ⊢ {{P}} If b Then c1 Else c2 EndIf {{Q}} ->
  (⊢ {{ P AND [[b]] }} c1 {{Q}}) /\
  (⊢ {{ P AND NOT [[b]] }} c2 {{Q}}).
Proof.
  intros.
  remember ({{P}} If b Then c1 Else c2 EndIf {{Q}}) as Tr.
  revert P b c1 c2 Q HeqTr; induction H; intros.
  + discriminate HeqTr.
  + discriminate HeqTr.
  + injection HeqTr as ? ? ? ? ?; subst.
    clear IHprovable1 IHprovable2.
    tauto.
  + discriminate HeqTr.
  + discriminate HeqTr.
  + injection HeqTr as ? ? ?; subst.
    assert ({{P'}} If b Then c1 Else c2 EndIf {{Q'}} =
            {{P'}} If b Then c1 Else c2 EndIf {{Q'}}).
    { reflexivity. }
    pose proof IHprovable _ _ _ _ _ H2; clear IHprovable H2.
    destruct H3.
    split.
    - eapply hoare_consequence.
      * apply AND_derives.
        { exact H. }
        { apply derives_refl. }
      * apply H2.
      * apply H1.
    - eapply hoare_consequence.
      * apply AND_derives.
        { exact H. }
        { apply derives_refl. }
      * apply H3.
      * apply H1.
Qed.

Lemma if_seq_der : forall P b c1 c2 c3 Q,
  ⊢ {{ P }} If b Then c1 Else c2 EndIf;; c3 {{ Q }} ->
  ⊢ {{ P }} If b Then c1;; c3 Else c2;; c3 EndIf {{ Q }}.
Proof.
  intros.
  apply hoare_seq_inv in H.
  destruct H as [Q' [? ?]].
  apply hoare_if_inv in H.
  destruct H.
  apply hoare_if.
  + apply hoare_seq with Q'.
    - exact H.
    - exact H0.
  + apply hoare_seq with Q'.
    - exact H1.
    - exact H0.
Qed.

End HoareLogic.

Hoare Logic's Completeness and Weakest Precondition

A Hoare logic is complete if all valid Hoare triples are provable.
Definition hoare_complete (T: FirstOrderLogic): Prop :=
  forall P c Q,
    ⊨ {{ P }} c {{ Q }} ->
    ⊢ {{ P }} c {{ Q }}.
The general idea of proving Hoare logic completeness is to prove that Hoare triples whose preconditions are weakest preconditions are provable. Specifically, an assertion P is called the weakest precondition of c and Q if the Hoare triple
         {{ P }} c {{ Q }}
is valid and, for any other P', if {{ P' }} c {{ Q }} is valid, then P' ⊢ P. If the logic for assertion derivation is sound and complete, this is equivalent to saying: for any interpretation (st, La), if (st, La) ⊨ P', then (st, La) ⊨ P.
Definition wp' (P: Assertion) (c: com) (Q: Assertion): Prop :=
  (⊨ {{ P }} c {{ Q }}) /\
  (forall P', (⊨ {{ P' }} c {{ Q }}) ->
     forall st La, (st, La) ⊨ P' -> (st, La) ⊨ P).
Although the definition above is very natural, we can actually be more specific about what this weakest precondition should be.
Definition wp (P: Assertion) (c: com) (Q: Assertion): Prop :=
  forall st La,
    (st, La) ⊨ P <->
    (forall st', ceval c st st' -> (st', La) ⊨ Q).
This definition says: a beginning state st satisfies P if and only if all possible ending states of executing c satisfy Q. This definition directly defines which interpretations satisfy P. Of course, it is consistent with our previous definition.
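For example (an illustrative instance, not taken from the original notes): take c := X ::= X + 1 and Q := [[X]] == 1. Executing c from st ends in the state where X stores st X + 1, so the definition forces
    (st, La) ⊨ P   if and only if   st X + 1 = 1,
that is, any assertion equivalent to [[X]] == 0 is a weakest precondition of c and Q.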
Lemma wp_wp': forall P c Q,
  wp P c Q -> wp' P c Q.
Proof.
  intros.
  unfold wp in H; unfold wp'.
  split.
  + unfold valid.
    intros.
    specialize (H st1 La).
    firstorder.
  + intros.
    rewrite H.
    unfold valid in H0.
    specialize (H0 La st).
    pose proof classic ((st, La) ⊨ P').
    firstorder.
Qed.
Hoare logic's completeness proof needs two important lemmas:
  • expressiveness: for any c and Q, there is an assertion to express the weakest precondition of c and Q;
  • if P expresses the weakest precondition of c and Q, then {{ P }} c {{ Q }} is provable.
Here are their formal statements.
Definition expressive: Prop :=
  forall c Q, exists P, wp P c Q.

Definition wp_provable (T: FirstOrderLogic): Prop :=
  forall P c Q, wp P c Q -> ⊢ {{ P }} c {{ Q }}.
We leave the proof of expressiveness to additional reading material; here we only prove that triples whose preconditions are weakest preconditions are provable.

Establishing Triples With Weakest Preconditions

In this part, we prove that if P is c and Q's weakest precondition, then the Hoare triple is provable. Note that the assertion P is not necessarily the weakest precondition constructed by the expressiveness lemma; we argue about all possible triples such that wp P c Q holds.
Again, we prove this lemma by induction over the syntax tree of c.

If c = Skip

If P is a weakest precondition of c and Q, we know that P is actually equivalent to Q. Thus, P IMPLY Q is a valid first-order proposition. By the assertion derivation logic's completeness, we know that P ⊢ Q, from which ⊢ {{ P }} c {{ Q }} follows immediately according to the consequence rule.
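A hedged sketch of this step, assuming (as in the course's rule set) a skip axiom of the form ⊢ {{ P }} Skip {{ P }}:

(* From wp P Skip Q and the fact that Skip relates every state only to
   itself, we get: for every (st, La), (st, La) ⊨ P <-> (st, La) ⊨ Q.
   TrivialFOL_complete_spec then gives P ⊢ Q, and the consequence rule
   applied to ⊢ {{ P }} Skip {{ P }} and P ⊢ Q yields
   ⊢ {{ P }} Skip {{ Q }}. *)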

If c = X ::= E

The proof is similar.

If c = c1 ;; c2

The proof is similar.

If c = If b Then c1 Else c2 EndIf

The proof is similar.

If c = While b Do c1 EndWhile

This is the only interesting case! Suppose wp P c Q. We know:
    for any st and La,
      (st, La) ⊨ P if and only if
      for any st', if (ceval c st st'), then (st', La) ⊨ Q.
Now, consider the weakest precondition of c1 and P. We claim that P AND [[ b ]] is stronger than the weakest precondition of c1 and P.
For fixed st and La, if
    (st, La) ⊨ P AND [[ b ]]
then for any st',
    if (ceval c st st'), then (st', La) ⊨ Q.
Since b is true on st, we know that for any st' and st'',
    if (ceval c1 st st') and (ceval c st' st''), then (st'', La) ⊨ Q.
This is equivalent to saying: for any st',
    if (ceval c1 st st') then for any st'',
       if (ceval c st' st''), then (st'', La) ⊨ Q.
By the fact that wp P c Q, we conclude that for any st',
    if (ceval c1 st st') then (st', La) ⊨ P.
Now, suppose P' is a weakest precondition of c1 and P (by the expressiveness lemma, it must exist). What we just proved means that (P AND [[ b ]]) IMPLY P' is a valid assertion. By the first-order logic's completeness,
    P AND [[ b ]] ⊢ P'.
By the induction hypothesis:
     ⊢ {{ P' }} c1 {{ P }}.
Then by the consequence rule:
     ⊢ {{ P AND [[ b ]] }} c1 {{ P }}.
Thus, by the while rule,
     ⊢ {{ P }} c {{ P AND NOT [[ b ]] }}.
At the same time, due to wp P c Q, it is obvious that if (st, La) ⊨ P AND NOT [[ b ]] on some interpretation (st, La), then (st, La) ⊨ Q. In other words, (P AND NOT [[ b ]]) IMPLY Q is valid. Thus,
    P AND NOT [[ b ]] ⊢ Q.
So,
     ⊢ {{ P }} c {{ Q }}.
This proves that all triples with weakest preconditions are provable.
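To close the narrative, here is how the two lemmas are meant to combine into completeness (a hedged restatement; the formal glue proof is not shown in these notes, and P0 is just a local name used here):

(* Suppose expressive and wp_provable T hold, and let ⊨ {{ P }} c {{ Q }}.
   Expressiveness yields some P0 with wp P0 c Q, and wp_provable gives
   ⊢ {{ P0 }} c {{ Q }}.  Since P is itself a valid precondition of c
   and Q, the weakest-precondition property gives that every
   interpretation satisfying P also satisfies P0, hence P ⊢ P0 by the
   assertion logic's completeness, and hoare_consequence_pre concludes
   ⊢ {{ P }} c {{ Q }}.  This is exactly hoare_complete T. *)
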
(* 2021-04-14 07:19 *)