
Re: [tlaplus] Invariance under transition order

Hi Ron,

I did something similar a few years ago for a round-based computational model of fault-tolerant algorithms. We proved that executions in which all processes execute every round of the algorithm in a strictly synchronous (lockstep) way are indistinguishable, from the point of view of each process, from executions with asynchronous interleaving of the process steps. Hence, for properties that are insensitive to the ordering of events at different processes, it is enough to prove them for the "coarse-grained", synchronous model of execution.

A paper describing this is available at http://www.loria.fr/~merz/papers/ijsi2009.html. The proof was never formalized in TLA+, but it was done in Isabelle/HOL and was used to prove several Consensus algorithms: http://afp.sourceforge.net/entries/Heard_Of.shtml (section 3 of the proof document contains the relevant definitions and theorems).

Perhaps this can be useful for your case.


On 01 Feb 2016, at 17:11, Ron Pressler <r...@xxxxxxxxxxxxxxxxxxx> wrote:

How would I go about checking that a system where certain actions take place in some nondeterministic interleaving (let's call it an asynchronous system) is "essentially identical" -- i.e. does not introduce any additional bugs -- to a system where actions can only occur in a certain order (let's call it the synchronous system)? Obviously, the asynchronous system implements the synchronous one, but I'd like to check the other direction. I could specify them both and show that they both satisfy the set of invariants I've defined, but I'm looking for a stronger resemblance, which I can't quite formally define: a resemblance that means that no property over certain "externally observable" behavior, whether explicitly defined or not, could possibly be violated by the asynchronous system. I think what I'm looking for is a manual partial-order reduction, but I'm not quite clear on the formal definition of that either (although I may need to look into it in more depth).
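One way this "other direction" could be phrased in TLA+ (a sketch under assumptions, not something from the thread): if both systems can be specified over the same externally observable variables, with internal state hidden via a refinement mapping, then each direction of implementation is just a temporal implication, and TLC can check one spec as a property of the other. The names AsyncSpec and SyncSpec are hypothetical placeholders.

```tla
(* Hypothetical sketch: AsyncSpec and SyncSpec are assumed to be
   temporal formulas over the same observable variables, with any
   internal variables hidden by a refinement mapping.            *)

THEOREM AsyncImplementsSync == AsyncSpec => SyncSpec
THEOREM SyncImplementsAsync == SyncSpec => AsyncSpec
```

TLC could check, say, the first direction with `SPECIFICATION AsyncSpec` and `PROPERTY SyncSpec` in its configuration file. If both directions hold, the two specs allow the same observable behaviors up to stuttering, which is one way to make "essentially identical" precise.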

A bit more concretely, suppose I'm modeling a distributed system where one node may notice a change in another node's state, and at some later time it may notice another change, and I'd like to check whether that nondeterminism is any "worse" than noticing both changes at once (or in two consecutive transitions). If my hypothesis is true, it would make abstracting the system (i.e., the opposite of refinement) much easier.
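The concrete scenario above could be sketched in TLA+ roughly as follows; all names and the two-change setup are illustrative, not taken from the original post.

```tla
---- MODULE Observe ----
(* Hypothetical sketch: node B has two pending state changes;
   observer A may notice them one at a time (asynchronous) or
   both at once (synchronous).                                 *)
EXTENDS Naturals

VARIABLES pending,  \* changes at B not yet noticed by A
          seen      \* changes A has noticed so far

Init == pending = 2 /\ seen = 0

(* Asynchronous: A notices one pending change at a time. *)
NoticeOne == pending > 0 /\ pending' = pending - 1 /\ seen' = seen + 1

(* Synchronous: A notices all pending changes in a single step. *)
NoticeAll == pending > 0 /\ pending' = 0 /\ seen' = seen + pending

AsyncSpec == Init /\ [][NoticeOne]_<<pending, seen>>
SyncSpec  == Init /\ [][NoticeAll]_<<pending, seen>>
====
```

Note that the intermediate state with pending = 1 is reachable only under AsyncSpec, so the two specs differ over the full state; they coincide only for properties over a coarser observable (say, whether all changes have been seen), which is exactly the kind of insensitivity to event ordering that makes the coarse-grained model sufficient.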

