
Re: Why TLA+ is based on explicit state model-checking instead of symbolic one?



I believe Stephan has a number of papers describing his (and others') work on inferring types for TLA+ expressions for use in TLAPS. But your description of interpreters vs. compilers (while there is no real formal difference, thanks to the partial evaluation theorems, there is a clear practical one) made me think of an idea that I wonder whether anyone has explored.

Both compilers and modern model checkers are based on sound abstractions, which, at their most primitive level, correspond to the simple types that compilers can employ so well to emit efficient code, and that model checkers and proof assistants can use to pick the right theory for SMT solvers. But today we have compilers that compile untyped languages to machine code as efficient as that produced for typed languages (often directly using those partial evaluation results I mentioned). They work by employing speculative, unsound abstractions with guards: if a guard fails, execution falls back to interpretation, and the code is then recompiled under a wider abstraction.

Unfortunately, I know very little about CEGAR model checking, but from the slogans I get the sense that it works along the same lines, only in the reverse direction: if the abstraction is too wide to prove the requested property, a narrower one is tried. Would it be possible to employ the same technique used by modern partial-evaluation compilers to also widen abstractions in model checkers when the speculative type proves wrong? Just as those state-of-the-art compilers employ a combination of interpretation and compilation, perhaps model checkers could, too.
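To make the idea concrete, here is a minimal sketch in Python (purely hypothetical: the function names, the fixed-width domains, and the toy counter system are all my own invention, and this is not how TLC or any real checker works). The checker speculates that the single state variable fits a narrow fixed-width type; reaching a state outside that domain is a guard failure, which triggers widening and a re-run, the way a tracing JIT deoptimizes and recompiles:

    # Hypothetical sketch, not TLC: explicit-state search under a
    # *speculated* domain for the single state variable.
    def speculative_check(init, succ, invariant, widths=(8, 16, 32)):
        for w in widths:                  # narrowest speculation first
            bound = 1 << w                # speculate: 0 <= s < 2**w
            seen, frontier = set(), list(init)
            widen = False
            while frontier:
                s = frontier.pop()
                if s in seen:
                    continue
                if not (0 <= s < bound):  # guard: speculation was wrong
                    widen = True          # retry with the next, wider width
                    break
                seen.add(s)
                if not invariant(s):
                    # States here are concrete, so a violation found under
                    # a narrow speculation is still a real counterexample.
                    return ("violated", s)
                frontier.extend(succ(s))
            if not widen:
                return ("ok", len(seen))  # speculation held for all states
        return ("unknown", None)          # ran out of widths to try

    # A counter that overflows the initial 8-bit speculation, forcing
    # one widening before the check succeeds.
    print(speculative_check(
        init=[0],
        succ=lambda s: [s + 17] if s + 17 < 1000 else [],
        invariant=lambda s: s < 1000,
    ))                                    # -> ('ok', 59)

Of course, in a real tool the narrow speculation would have to buy something concrete, e.g. a packed state encoding or a fixed-width bitvector theory for an SMT backend; the guard's only job is to detect when that cheaper representation stops being sound.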

Ron