This section is a tutorial introducing the most important concepts of mvcgen top-down.
Recall that you need to import Std.Tactic.Do and open Std.Do to run these examples.
Array Sum
As a “hello world” to mvcgen, the following example mySum computes the sum of an array using local mutability and a for loop.
Then it proves mySum correct relative to Array.sum, requiring us to specify an invariant for the loop:
def mySum (l : Array Nat) : Nat := Id.run do
  let mut out := 0
  for i in l do
    out := out + i
  return out

theorem mySum_correct (l : Array Nat) : mySum l = l.sum := by

l : Array Nat
⊢ mySum l = l.sum
-- Focus on the part of the program with the `do` block (`Id.run ...`)
l : Array Nat
x : Nat
h : mySum l = x
⊢ x = l.sum

l : Array Nat
x : Nat
h : mySum l = x
⊢ ⊢ₛ wp⟦have out := 0;
      do
        let r ← forIn l out fun i r =>
          have out := r;
          have out := out + i;
          do
            pure PUnit.unit
            pure (ForInStep.yield out)
        have out : Nat := r
        pure out⟧
    (PostCond.noThrow fun a => {down := a = l.sum})
-- Break down into verification conditions
mvcgen

l : Array Nat
x : Nat
h : mySum l = x
⊢ Invariant l.toList Nat PostShape.pure

l : Array Nat
x : Nat
h : mySum l = x
pref✝ : List Nat
cur✝ : Nat
suff✝ : List Nat
h✝¹ : l.toList = pref✝ ++ cur✝ :: suff✝
b✝ : Nat
h✝ : (Prod.fst ?inv1 ({«prefix» := pref✝, suffix := cur✝ :: suff✝, property := ⋯}, b✝)).down
⊢ (Prod.fst ?inv1 ({«prefix» := pref✝ ++ [cur✝], suffix := suff✝, property := ⋯}, b✝ + cur✝)).down

l : Array Nat
x : Nat
h : mySum l = x
⊢ (Prod.fst ?inv1 ({«prefix» := [], suffix := l.toList, property := ⋯}, 0)).down

l : Array Nat
x : Nat
h : mySum l = x
r✝ : Nat
h✝ : (Prod.fst ?inv1 ({«prefix» := l.toList, suffix := [], property := ⋯}, r✝)).down
⊢ r✝ = l.sum
-- Specify the invariant which should hold throughout the loop
-- * `out` refers to the current value of the `let mut` variable
-- * `xs` is a `List.Cursor`, which is a data structure representing
-- a list that is split into `xs.prefix` and `xs.suffix`.
-- It tracks how far into the loop we have gotten.
-- Our invariant is that `out` holds the sum of the prefix.
-- The notation ⌜p⌝ embeds a `p : Prop` into the assertion language.
case inv1

l : Array Nat
x : Nat
h : mySum l = x
⊢ Invariant l.toList Nat PostShape.pure

All goals completed! 🐙
-- After specifying the invariant, we can further simplify our goals
-- by "leaving the proof mode". `mleave` is just
-- `simp only [...] at *` with a stable simp subset.
all_goals mleave

l : Array Nat
x : Nat
h : mySum l = x
r✝ : Nat
h✝ : l.toList.sum = r✝
⊢ r✝ = l.sum
-- Prove that our invariant is preserved at each step of the loop
case vc1 ih

l : Array Nat
x : Nat
h : mySum l = x
pref✝ : List Nat
cur✝ : Nat
suff✝ : List Nat
h✝ : l.toList = pref✝ ++ cur✝ :: suff✝
b✝ : Nat
ih : pref✝.sum = b✝
⊢ (pref✝ ++ [cur✝]).sum = b✝ + cur✝
-- The goal here mentions `pref`, which binds the `prefix` field of
-- the cursor passed to the invariant. Unpacking the
-- (dependently-typed) cursor makes it easier for `grind`.
All goals completed! 🐙
-- Prove that the invariant is true at the start
case vc2

l : Array Nat
x : Nat
h : mySum l = x
⊢ [].sum = 0

All goals completed! 🐙
-- Prove that the invariant at the end of the loop implies the
-- property we wanted
case vc3 h

l : Array Nat
x : Nat
h✝ : mySum l = x
r✝ : Nat
h : l.toList.sum = r✝
⊢ r✝ = l.sum

All goals completed! 🐙
Note that the case labels are actually unique prefixes of the full case labels.
Whenever referring to cases, only this prefix should be used; the suffix is merely a poor man's hint to the user where that particular VC came from.
For example,
vc1.step conveys that this VC proves the inductive step for the loop
vc2.a.pre is meant to prove that the hypotheses of a goal imply the precondition of a specification (of forIn).
vc3.a.post is meant to prove that the postcondition of a specification (of forIn) implies the target of the goal.
After specifying the loop invariant, the proof can be shortened to just all_goals mleave; grind (cf. mleave).
The mvcgen invariants ... with ... form expands to the same tactic sequence as
mvcgen; case inv1 => ...; all_goals mleave; grind
above. It is the form that we will be using from now on.
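Concretely, mySum_correct_shorter can be expected to look roughly as follows. This is only a sketch: the two focusing steps and the exact spelling of the invariant are assumptions reconstructed from the goals above, not a verbatim copy.

theorem mySum_correct_shorter (l : Array Nat) : mySum l = l.sum := by
  -- Focus on the `Id.run` part, as above; the lemma name is an assumption.
  generalize h : mySum l = x
  apply Id.of_wp_run_eq h
  -- One invariant per loop, then a single tactic for all remaining VCs:
  mvcgen invariants
    · ⇓⟨xs, out⟩ => ⌜xs.prefix.sum = out⌝
    with grind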
It is helpful to compare the proof of mySum_correct_shorter to a traditional correctness proof:
theorem mySum_correct_vanilla (l : Array Nat) : mySum l = l.sum := by

l : Array Nat
⊢ mySum l = l.sum
-- Turn the array into a list
cases l with
| mk l =>

l : List Nat
⊢ mySum {toList := l} = {toList := l}.sum
-- Unfold `mySum` and rewrite `forIn` to `foldl`
l : List Nat
⊢ List.foldl (fun b a => b + a) 0 l = l.sum
-- Generalize the inductive hypothesis
suffices h : ∀ out, List.foldl (· + ·) out l = out + l.sum

l : List Nat
h : ∀ (out : Nat), List.foldl (fun x1 x2 => x1 + x2) out l = out + List.sum l
⊢ List.foldl (fun b a => b + a) 0 l = l.sum

All goals completed! 🐙
-- Grind away
induction l with

All goals completed! 🐙
This proof is about as succinct as the mvcgen proof of mySum_correct_shorter.
However, the traditional approach relies on important properties of the program:
The for loop does not break or return early. Otherwise, the forIn could not be rewritten to a foldl.
The loop body (· + ·) is small enough to be repeated in the proof.
The loop body is pure, that is, does not perform any effects.
(Indeed, all computations in the base monad Id factor through pure.)
While forIn could still be rewritten to a foldlM, reasoning about the monadic loop body can be tough for grind.
In the following sections, we will go through several examples to learn about mvcgen and its support library, and also see where traditional proofs become difficult.
This is usually caused by
do blocks using control flow constructs such as for loops, breaks and early return.
Effectful computations in non-Id monads.
Such computations affect the implicit monadic context (state, exceptions) in ways that need to be reflected in loop invariants.
mvcgen scales to these challenges with reasonable effort.
18.1.1. Control flow
Let us consider another example that combines for loops with an early return.
Detecting Duplicates in a List
Recall that List.Nodup is a predicate that asserts that a given list does not contain any duplicates.
The function nodup below decides this predicate:
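Sketched below is what such a definition and its proof can look like. The use of Std.HashSet for the seen elements, the exact statement of nodup_correct, the focusing steps, and the precise invariant are assumptions; the three parts of the invariant are explained next.

def nodup (l : List Nat) : Bool := Id.run do
  let mut seen : Std.HashSet Nat := ∅
  for x in l do
    if seen.contains x then
      return false
    seen := seen.insert x
  return true

theorem nodup_correct (l : List Nat) : nodup l = true ↔ l.Nodup := by
  generalize h : nodup l = r
  apply Id.of_wp_run_eq h   -- assumed focusing step, as in `mySum_correct`
  mvcgen [nodup] invariants
    · Invariant.withEarlyReturn
        (onReturn := fun ret seen => ⌜ret = false ∧ ¬ l.Nodup⌝)
        (onContinue := fun xs seen =>
          ⌜(∀ x, x ∈ seen ↔ x ∈ xs.prefix) ∧ xs.prefix.Nodup⌝)
    with grind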
The proof has the same succinct structure as for the initial mySum example, because we again offload all proofs to grind and its existing framework around List.Nodup.
Therefore, the only difference is in the invariant.
Since our loop has an early return, we construct the invariant using the helper function Invariant.withEarlyReturn.
This function allows us to specify the invariant in three parts:
onReturn ret seen holds after the loop was left through an early return with value ret.
In case of nodup, the only value that is ever returned is false, in which case nodup has decided there is a duplicate in the list.
onContinue xs seen is the regular induction step that proves the invariant is preserved each loop iteration.
The iteration state is captured by the cursor xs.
The given example asserts that the set seen contains all the elements of previous loop iterations and that there were no duplicates so far.
onExcept must hold when the loop throws an exception.
There are no exceptions in Id, so we leave it unspecified to use the default.
(Exceptions will be discussed at a later point.)
Now consider the following direct (and excessively golfed) proof without mvcgen:
The proof is even shorter than the one with mvcgen.
The use of generalize to generalize the accumulator relies on there being exactly one occurrence of ∅ to generalize. If that were not the case, we would have to copy parts of the program into the proof. This is a no-go for larger functions.
grind splits along the control flow of the function and reasons about Id, given the right lemmas.
While this works for Id.run_pure and Id.run_bind, it would not work for Id.run_seq, for example, because that lemma is not E-matchable. If grind failed, we would be forced to do all the control flow splitting and monadic reasoning by hand until grind can pick up again.
The usual way to avoid replicating the control flow of a definition under proof is to use the fun_cases or fun_induction tactics.
Unfortunately, fun_cases does not help with control flow inside a forIn application, for example.
This is in contrast to mvcgen, which ships with support for many forIn implementations and is easily extended with @[spec] annotations for custom forIn implementations.
Furthermore, an mvcgen-powered proof will never need to copy any part of the original program.
18.1.2. Compositional reasoning about effectful programs using Hoare triples
The previous examples reasoned about functions defined using Id.run do <prog> to make use of local mutability and early return in <prog>.
However, real-world programs often use do notation and a monad M to hide state and failure conditions away as implicit “effects”.
In this use case, functions usually omit the M.run.
Instead they have monadic return type M α and compose well with other functions of that return type.
Generating Fresh Numbers
Here is an example involving a stateful function mkFresh that returns auto-incremented counter values:
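The definitions are sketched below; the exact shape of Supply, the use of an Array accumulator, and the ambient variable s are assumptions, chosen to match the proof states and the specification mkFresh_spec shown later.

structure Supply where
  counter : Nat

def mkFresh : StateM Supply Nat := do
  let s ← get
  set { s with counter := s.counter + 1 }
  pure s.counter

def mkFreshN (n : Nat) : StateM Supply (List Nat) := do
  let mut acc : Array Nat := #[]
  for _ in [0:n] do
    acc := acc.push (← mkFresh)
  return acc.toList

variable (s : Supply)  -- prestate assumed to be in scope for the statements below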
mkFreshN n returns n “fresh” numbers, modifying the internal Supply state through mkFresh.
Here, “fresh” refers to all previously generated numbers being distinct from the next generated number.
We can formulate and prove a correctness property mkFreshN_correct in terms of List.Nodup:
theorem mkFreshN_correct (n : Nat) : ((mkFreshN n).run' s).Nodup := by

s : Supply
n : Nat
⊢ List.Nodup (StateT.run' (mkFreshN n) s)
-- Focus on `(mkFreshN n).run' s`.
s : Supply
n : Nat
x : Id (List Nat)
h : StateT.run' (mkFreshN n) s = x
⊢ List.Nodup x

s : Supply
n : Nat
x : Id (List Nat)
h : StateT.run' (mkFreshN n) s = x
⊢ ⊢ₛ wp⟦mkFreshN n⟧ (PostCond.noThrow fun a => ⌜a.Nodup⌝) s
-- Show something about monadic program `mkFresh n`.
-- The `mkFreshN` and `mkFresh` arguments to `mvcgen` add to an
-- internal `simp` set and make `mvcgen` unfold these definitions.
mvcgen [mkFreshN, mkFresh] invariants
-- Invariant: The counter is larger than any accumulated number,
-- and all accumulated numbers are distinct.
-- Note that the invariant may refer to the state through function
-- argument `state : Supply`. Since the next number to accumulate is
-- the counter, it is distinct from all accumulated numbers.
  · ⇓⟨xs, acc⟩ state => ⌜(∀ x ∈ acc, x < state.counter) ∧ acc.toList.Nodup⌝
  with

All goals completed! 🐙
18.1.3. Composing specifications
Nested unfolding of definitions as in mvcgen [mkFreshN, mkFresh] is quite blunt but effective for small programs.
A more compositional way is to develop individual specification lemmas for each monadic function.
These can either be passed as arguments to mvcgen or registered in a global (or scoped, or local) database of specifications using the @[spec] attribute:
Decomposing a Proof into Specifications
@[spec]
theorem mkFresh_spec (c : Nat) :
    ⦃fun state => ⌜state.counter = c⌝⦄ mkFresh ⦃⇓ r state => ⌜r = c ∧ c < state.counter⌝⦄ := by

c : Nat
⊢ ⦃fun state => ⌜state.counter = c⌝⦄ mkFresh ⦃PostCond.noThrow fun r state => ⌜r = c ∧ c < state.counter⌝⦄
-- Unfold `mkFresh` and blast away:
mvcgen [mkFresh] with

All goals completed! 🐙

@[spec]
theorem mkFreshN_spec (n : Nat) :
    ⦃⌜True⌝⦄ mkFreshN n ⦃⇓ r => ⌜r.Nodup⌝⦄ := by

n : Nat
⊢ ⦃⌜True⌝⦄ mkFreshN n ⦃PostCond.noThrow fun r => ⌜r.Nodup⌝⦄
-- `mvcgen [mkFreshN, mkFresh_spec]` if `mkFresh_spec` were not
-- registered with `@[spec]`
mvcgen [mkFreshN] invariants
-- As before:
  · ⇓⟨xs, acc⟩ state => ⌜(∀ x ∈ acc, x < state.counter) ∧ acc.toList.Nodup⌝
  with

All goals completed! 🐙

theorem mkFreshN_correct_compositional (n : Nat) : ((mkFreshN n).run' s).Nodup :=
-- The type `⦃⌜True⌝⦄ mkFreshN n ⦃⇓ r => ⌜r.Nodup⌝⦄` of `mkFreshN_spec` is
-- definitionally equal to
-- `∀ (n : Nat) (s : Supply), True → ((mkFreshN n).run' s).Nodup`
mkFreshN_spec n s True.intro
Here, the notation ⦃fun state => ⌜state.counter = c⌝⦄ mkFresh ⦃⇓ r state => ⌜r = c ∧ c < state.counter⌝⦄ denotes a Hoare triple (cf. Std.Do.Triple).
Specifications for monadic functions always have such a Hoare triple result type.
A Hoare triple for reasoning about monadic programs.
A proof for Triple x P Q is a specification for x:
If assertion P holds before x, then postcondition Q holds after running x.
⦃P⦄ x ⦃Q⦄ is convenient syntax for Triple x P Q.
syntax (Hoare triple)

term ::= ...
    | ⦃term⦄ term ⦃term⦄
Given the meaning of Hoare triples above, the specification theorem for mkFresh says:
If c refers to the Supply.counter field of the Supply prestate, then running mkFresh returns c and modifies the Supply.counter of the poststate to be larger than c.
Note that this specification is lossy: mkFresh could increment its state by an arbitrary non-negative amount and still satisfy the specification.
This is good, because specifications may abstract over uninteresting implementation details, ensuring resilient and small proofs.
Hoare triples are defined in terms of a logic of stateful predicates plus a weakest precondition semantics wp⟦prog⟧ that translates monadic programs into this logic:
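A sketch of the shape of this definition (the binders and the exact right-hand side are assumptions; see Std.Do.Triple for the authoritative version):

universe u v

def Triple {m : Type u → Type v} {ps : PostShape} {α : Type u} [WP m ps]
    (x : m α) (P : Assertion ps) (Q : PostCond α ps) : Prop :=
  -- the precondition entails the weakest precondition of `x` with respect to `Q`
  P ⊢ₛ wp⟦x⟧.apply Q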
The WP type class maps a monad m to its PostShape ps, and this PostShape governs the exact shape of the Std.Do.Triple.
Many of the standard monad transformers such as StateT, ReaderT and ExceptT come with a canonical WP instance.
For example, StateT σ comes with a WP instance that adds a σ argument to every Assertion.
Stateful entailment ⊢ₛ eta-expands through these additional σ arguments.
For StateM programs, the following type is definitionally equivalent to Std.Do.Triple:
The common postcondition notation ⇓ r => ... injects an assertion of type α → Assertion ps into
PostCond α ps (the ⇓ is meant to be parsed like fun); in the case of StateM this is done by adjoining the empty tuple PUnit.unit.
The shape of postconditions becomes more interesting once exceptions enter the picture.
The notation ⌜p⌝ embeds a pure hypothesis p into a stateful assertion.
Vice versa, any stateful hypothesis P is called pure if it is equivalent to ⌜p⌝
for some p.
Pure stateful hypotheses may be freely moved into the regular Lean context and back.
(This can be done manually with the mpure tactic.)
18.1.4.1. An advanced note about pure preconditions and a notion of frame rule
This subsection is a bit of a digression and can be skipped on first reading.
Say the specification for some Aeneas-inspired monadic addition function x +? y : M UInt8 has the
requirement that the addition won't overflow, that is, h : x.toNat + y.toNat ≤ UInt8.size.
Should this requirement be encoded as a regular Lean hypothesis of the specification (add_spec_hyp), or as a pure precondition of the Hoare triple using ⌜·⌝ notation (add_spec_pre)?
theorem add_spec_hyp (x y : UInt8) (h : x.toNat + y.toNat ≤ UInt8.size) :
⦃⌜True⌝⦄ x +? y ⦃⇓ r => ⌜r.toNat = x.toNat + y.toNat⌝⦄ := ...
theorem add_spec_pre (x y : UInt8) :
⦃⌜x.toNat + y.toNat ≤ UInt8.size⌝⦄
x +? y
⦃⇓ r => ⌜r.toNat = x.toNat + y.toNat⌝⦄ := ...
The first approach is advisable, although it should not make a difference in practice.
The VC generator will move pure hypotheses from the stateful context into the regular Lean context, so the second form turns effectively into the first form.
This is referred to as framing hypotheses (cf. the mpure and mframe tactics).
Hypotheses in the Lean context are part of the immutable frame of the stateful logic, because in contrast to stateful hypotheses they survive the rule of consequence.
18.1.5. Monad transformer stacks
Real-world programs often orchestrate the interaction of independent subsystems through a stack of monad transformers.
We can tweak the previous example to demonstrate this.
Transformer Stacks
Suppose that mkFresh is generalized to work in any base monad m under StateT Supply.
Furthermore, mkFreshN is defined in terms of a concrete monad transformer stack AppM and lifts mkFresh into AppM.
Then the mvcgen-based proof goes through unchanged:
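A sketch of this setup; the concrete layers chosen for AppM, the Config type, and the reuse of the names mkFresh and mkFreshN (which would live in their own namespace in a single file) are assumptions:

structure Config where
  verbose : Bool := false  -- hypothetical configuration data

-- `mkFresh`, generalized over the base monad `m` under `StateT Supply`:
def mkFresh {m : Type → Type} [Monad m] : StateT Supply m Nat := do
  let s ← get
  set { s with counter := s.counter + 1 }
  pure s.counter

-- A concrete application stack; the exact layers are an assumption:
abbrev AppM := StateT Supply (ReaderT Config Id)

def mkFreshN (n : Nat) : AppM (List Nat) := do
  let mut acc : Array Nat := #[]
  for _ in [0:n] do
    -- `mkFresh` is used here at the base monad of `AppM`; calls coming from
    -- smaller stacks are lifted through the `MonadLift` instances of the stack.
    acc := acc.push (← mkFresh)
  return acc.toList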
The WPMonad type class asserts that wp⟦prog⟧ distributes over the Monad operations (“monad morphism”).
This proof might not look much more exciting than when only a single monad was involved.
However, behind the scenes the proof builds on a cascade of specifications for MonadLift instances.
18.1.6. Exceptions
If let mut is the first-order counterpart to StateT, then early return is the first-order counterpart to ExceptT.
We have seen how mvcgen copes with StateT; here we will look at the program logic's support for ExceptT.
Exceptions are the reason why the type of postconditions PostCond α ps is not simply a single condition of type α → Assertion ps for the success case.
To see why, suppose the latter was the case, and suppose that program prog throws an exception in a prestate satisfying P.
Should we be able to prove ⦃P⦄ prog ⦃⇓ r => Q' r⦄?
(Recall that ⇓ is grammatically similar to fun.)
There is no result r, so it is unclear what this proof means for Q'!
So there are two reasonable options, inspired by non-termination in traditional program logics:
Total correctness interpretation: ⦃P⦄ prog ⦃⇓ r => Q' r⦄ asserts that, given P holds, prog terminates and Q' holds for the result.
Partial correctness interpretation: ⦃P⦄ prog ⦃⇓ r => Q' r⦄ asserts that, given P holds, if prog terminates then Q' holds for the result.
The notation ⇓ r => Q' r has the total interpretation (cf. PostCond.noThrow), while ⇓? r => Q' r has the partial interpretation (cf. PostCond.mayThrow).
Std.Do.PostCond.{u} (α : Type u) (ps : Std.Do.PostShape) : Type u
A postcondition for the given predicate shape, with one Assertion for the terminating case and
one Assertion for each .except layer in the predicate shape.
A postcondition expressing total correctness.
That is, it expresses that the asserted computation finishes without throwing an exception
and the result satisfies the given predicate p.
A postcondition expressing partial correctness.
That is, it expresses that if the asserted computation finishes without throwing an exception
then the result satisfies the given predicate p.
Nothing is asserted when the computation throws an exception.
syntax (Postconditions)

term ::= ...
    | ⇓ term* => term

term ::= ...
    | ⇓? term* => term
So in the running example, ⦃P⦄ prog ⦃⇓ r => Q' r⦄ is unprovable, but ⦃P⦄ prog ⦃⇓? r => Q' r⦄ is trivially provable.
However, the binary choice suggests that there is actually a spectrum of correctness properties to express.
The notion of postconditions PostCond in Std.Do supports this spectrum.
For example, suppose that our Supply of fresh numbers is bounded and we want to throw an exception if the supply is exhausted.
Then mkFreshN should throw an exception only if the supply is indeed exhausted.
The following correctness property expresses this:
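As an illustration of the setup (the limit field, the error type, and all names are assumptions), the bounded supply could be modeled with an ExceptT layer roughly as follows; the correctness property is then stated against a PostCond that carries both a success assertion and an exception assertion, as described below.

structure BoundedSupply where
  counter : Nat
  limit : Nat

def mkFreshBounded : ExceptT String (StateM BoundedSupply) Nat := do
  let s ← get
  if s.counter ≥ s.limit then
    -- the supply is exhausted; fail instead of handing out a stale number
    throw "supply exhausted"
  set { s with counter := s.counter + 1 }
  pure s.counter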
Just as any StateT σ-like layer in the monad stack gives rise to a PostShape.arg σ layer in the ps
that WP maps into, any ExceptT ε-like layer gives rise to a PostShape.except ε layer.
Every PostShape.arg σ adds another σ → ... layer to the language of Assertions.
Every PostShape.except ε leaves the Assertion language unchanged, but adds another exception
condition to the postcondition.
Assertion is an abbreviation for SPred under the hood, so all theorems about SPred apply.
Hence the WP instance for EStateM ε σ maps to the PostShape PostShape.except ε (.arg σ .pure), just
as for ExceptT ε (StateM σ).
18.1.7. Extending mvcgen with support for custom monads
The mvcgen framework is designed to be extensible.
None of the monads so far have in any way been hard-coded into mvcgen.
Rather, mvcgen relies on instances of the WP and WPMonad type classes and user-provided specifications to generate verification conditions.
The WP instance defines the weakest precondition interpretation of a monad m into a predicate transformer PredTrans ps,
and the matching WPMonad instance asserts that this translation distributes over the Monad operations.
Std.Do.PredTrans.{u} (ps : Std.Do.PostShape) (α : Type u) : Type u
The type of predicate transformers for a given ps : PostShape and return type α : Type.
A predicate transformer x : PredTrans ps α is a function that takes a postcondition
Q : PostCond α ps and returns a precondition x.apply Q : Assertion ps, with the additional
monotonicity property that the precondition is stronger the stronger the postcondition is:
Q₁ ⊢ₚ Q₂ → x.apply Q₁ ⊢ₛ x.apply Q₂.
Std.Do.WP.{u, v} (m : Type u → Type v) (ps : outParam Std.Do.PostShape) : Type (max (u + 1) v)
A weakest precondition interpretation of a monadic program x : m α in terms of a
predicate transformer PredTrans ps α.
The monad m determines ps : PostShape. See the module comment for more details.
Suppose one wants to use mvcgen to generate verification conditions for programs generated by Aeneas.
Aeneas translates Rust programs into Lean programs in the following Result monad:
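Result and its Monad instance are sketched below in simplified form; the actual Aeneas definition additionally carries an error payload and a divergence case, and the accompanying WP instance (not sketched here) is what interprets Result in the program logic.

universe u

inductive Result (α : Type u) where
  | ok (v : α)
  | fail

instance : Monad Result where
  pure := .ok
  bind x f :=
    match x with
    | .ok v => f v
    | .fail => .fail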
The WP instance above translates programs in Result α to predicate transformers in PredTrans ps α.
That is, a function in PostCond α ps → Assertion ps, mapping a postcondition to its weakest precondition.
Note that this definition of the WP instance determines what properties can be derived from proved specifications via Result.of_wp.
This lemma defines what “weakest precondition” means.
To exemplify the second part, here is an example definition of UInt32 addition in Result that models integer overflow:
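A sketch of what such a definition can look like, using the simplified Result from above; the name add32 and the exact error behavior are assumptions:

def add32 (x y : UInt32) : Result UInt32 :=
  if x.toNat + y.toNat < UInt32.size then
    .ok (x + y)   -- no overflow: the wrapped sum equals the mathematical sum
  else
    .fail         -- overflow is modeled as failure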
It is a priority of mvcgen to break down monadic programs into VCs that are straightforward to understand.
For example, when the monad stack is monomorphic and all loop invariants have been instantiated, an invocation of all_goals mleave should simplify away any Std.Do.SPred specific constructs and leave behind a goal that is easily understood by humans and grind.
This all_goals mleave step is carried out automatically by mvcgen after loop invariants have been instantiated.
However, there are times when mleave will be unable to remove all Std.Do.SPred constructs.
In this case, verification conditions of the form H ⊢ₛ T will be left behind.
The assertion language Assertion translates into an Std.Do.SPred as follows:
term ::= ...
    | ⌜term⌝         -- Embedding of pure Lean values into `SVal`; an alias for `SPred.pure`.

term ::= ...
    | term ⊢ₛ term   -- Entailment in `SPred`; sugar for `SPred.entails`.

term ::= ...
    | term ⊣⊢ₛ term  -- Bi-entailment in `SPred`; sugar for `SPred.bientails`.

term ::= ...
    | spred(term ∧ term)

term ::= ...
    | spred(term ∨ term)

term ::= ...
    | spred(¬term)

term ::= ...
    | spred(∀ ident, term)

term ::= ...
    | spred(∃ ident, term)
A common case in which a VC of the form H ⊢ₛ T is left behind is when the base monad m is polymorphic.
In this case, the proof will depend on a WP m ps instance which governs the translation into the Assertion language, but the exact correspondence to σs : List (Type u) is not yet known.
To successfully discharge such a VC, mvcgen comes with an entire proof mode that is inspired by that of the Iris concurrent separation logic.
(In fact, the proof mode was adapted in large part from its Lean clone, iris-lean.)
The Lean 4 test file tests/lean/run/spredProofMode.lean contains many examples of that proof mode to learn from, and the reference manual contains a list of all proof mode tactics.