Re: [tlaplus] Inductive Invariants and Counterexamples in TLAPS
> On 13.09.2021, at 00:55, Willy Schultz <will62794@xxxxxxxxx> wrote:
> Presumably, Apalache is another candidate for this task, since it also produces an SMT encoding that can be given to an SMT solver. Their OOPSLA '19 paper claims that it can prove certain inductive invariants automatically, and that it can detect invariant violations quickly as well.
Apalache assumes that all constants have been fixed (or that they belong to a finite set that is captured with ConstInit). So for fixed parameters, we can prove inductive invariants. For instance, we have checked safety of the Tendermint consensus by proving an inductive invariant. It is true that invariant violations are often easier to find with Apalache. It basically expands all quantifiers into disjunctions and conjunctions, which is probably what model-based quantifier instantiation has to do anyway. For the same reason, proving inductive invariants with Apalache takes longer, as the solver cannot use the structure of the quantified formulas to easily show unsatisfiability.
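To make the expansion concrete, here is a rough sketch of what it looks like at the TLA+ level (the invariant and the names `voted`, `vote`, `Values` are made up for illustration):

```tla
\* A quantified invariant over a constant set fixed to Proc = {"p1", "p2", "p3"}:
Inv == \A p \in Proc : voted[p] => vote[p] \in Values

\* Conceptually, the unquantified encoding expands the universal
\* quantifier into a finite conjunction over the fixed set:
InvExpanded ==
  /\ voted["p1"] => vote["p1"] \in Values
  /\ voted["p2"] => vote["p2"] \in Values
  /\ voted["p3"] => vote["p3"] \in Values
```

An existential quantifier is expanded into a disjunction in the same way. This is why a violation (a satisfying assignment for the negated invariant) is often found quickly, while proving unsatisfiability forces the solver to reason about the whole expanded formula.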
We are planning to implement a quantified encoding in addition to the current unquantified encoding. To do so, however, we will have to restrict the fragment of TLA+. For instance, we don’t know what to do about Cardinality if we use a quantified encoding. I will not be surprised if that restricted fragment of TLA+ happens to be very similar to the language of Ivy.
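A hypothetical example of why Cardinality is problematic (the operator `HasQuorum` is invented for illustration):

```tla
\* With the current unquantified encoding, S is expanded into a fixed
\* collection of cells, so Cardinality(S) reduces to a concrete
\* arithmetic term that the solver can evaluate.
HasQuorum(S) == 2 * Cardinality(S) > Cardinality(Proc)

\* In a quantified encoding, S would range over an unbounded sort of sets,
\* and counting its elements is not directly expressible in first-order
\* logic -- so such operators would likely fall outside the fragment.
```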
> I suppose it would be interesting to compare the performance of the probabilistic method for finding inductiveness violations with Apalache, and/or compare Apalache with TLAPS.
Yes, that is interesting. It would also be interesting to compare inductiveness checking when using predefined bounds on the data structures. We have recently introduced the operator `Apalache!Gen`, which tells the model checker to produce data structures of bounded sizes, similar to generators in property-based testing. Intuitively, this is another instance of the small model hypothesis.
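For readers who have not seen `Gen`, a minimal sketch of the usage pattern (the variable name, type annotation, and bound are illustrative):

```tla
---- MODULE GenSketch ----
EXTENDS Integers, Apalache

VARIABLE
  \* @type: Set(Int);
  S

\* Gen(5) asks the model checker to produce a data structure of the
\* annotated type whose size is bounded by 5; the solver picks the
\* contents, as a generator would in property-based testing.
Init == S = Gen(5)

Next == UNCHANGED S
==========================
```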