
Re: [tlaplus] What to do when strong fairness isn't strong enough?



Another approach (which I like to use in eventually-consistent systems) is, in an enclosing MC_Spec.tla, to add another boolean variable called converge. The converge variable starts out false but can be set to true at any time; once it is true, it disallows the actions that would interfere with the system converging. In an eventually-consistent system that means stopping additional writes until the system reaches the goal state; in your system it would stop agents from transitioning back to idle, or stop whatever it is that causes them to do that. Then your liveness property becomes converge ~> Liveness. A sketch of the wrapper follows.
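Concretely, it might look something like this. This is only a sketch: Init and Next are assumed to be the usual top-level definitions in the attached NervousActors, while AliceState, BobState, AliceRevert, and BobRevert are the names reported in the TLC trace below.

---- MODULE MC_NervousActors ----
EXTENDS NervousActors

VARIABLE converge

mcVars == <<AliceState, BobState, converge>>

MCInit == Init /\ converge = FALSE

\* At any time the model may decide to start converging; the flag is
\* never reset.
StartConverging ==
    /\ converge = FALSE
    /\ converge' = TRUE
    /\ UNCHANGED <<AliceState, BobState>>

\* An ordinary step of the underlying spec, except that once converge
\* is set, the steps that undo progress are disallowed.
MCStep ==
    /\ Next
    /\ converge => ~(AliceRevert \/ BobRevert)
    /\ UNCHANGED converge

MCSpec ==
    /\ MCInit
    /\ [][StartConverging \/ MCStep]_mcVars
    /\ SF_mcVars(MCStep)  \* plus whatever fairness the original spec assumes

\* Once convergence is requested, the goal state is eventually reached.
ConvergeLiveness == converge ~> (AliceState = "DONE" /\ BobState = "DONE")
====

Because converge can flip at any point, TLC still explores all of the original behaviors before convergence is requested; the property only constrains what happens afterwards.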

Andrew

On Wednesday, December 13, 2023 at 9:45:09 AM UTC-5 Stephan Merz wrote:
Hi Calvin,

nice problem! The counter-example computed by TLC shows that there exists a behavior in which it is never the case that all agents are PREPARED. No fairness condition on the transitions of single agents will imply that the states of all agents are eventually aligned. 

Now, if you model the (initial part of each) agent as a continuous-time Markov chain with two states IDLE and PREPARED and some positive transition rates (w.r.t. an exponential distribution) between those two states, then you can convince yourself, and a tool such as Prism [1] can confirm, that for any fixed number of such agents the mass of executions in which at all times at least one agent is in state IDLE is zero. In other words, almost surely all agents will be PREPARED at some point (and, continuing from there [2], they'll align again at some future point with probability 1).
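To see why, here is a discrete-time sketch of the same idea (not the exact continuous-time computation): because all transition rates are positive, there is some p > 0 such that, from any configuration, all agents are simultaneously PREPARED at some point during the next unit interval of time with probability at least p. By the Markov property, the probability that alignment fails throughout n consecutive intervals is at most (1 - p)^n, which tends to 0 as n grows. So alignment happens with probability 1, and repeating the argument from the moment of alignment shows that it happens infinitely often.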

If you are willing to rule out such executions, you could add the assumption

[]<>((\A a \in Agent : state[a] = "PREPARED") \/ (\E a \in Agent : state[a] \notin {"IDLE", "PREPARED"}))

to your spec, together with strong fairness for the transitions from PREPARED to DONE. The condition asserts that for as long as all agents perform the loop between IDLE and PREPARED, they'll repeatedly align on state PREPARED.
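In concrete terms, and assuming the generalized spec has the usual shape Spec == Init /\ [][Next]_vars with a per-agent action Done(a) for the PREPARED-to-DONE transition (Done, Spec, and vars are illustrative names here, not taken from the attached spec), this could look like:

Alignment ==
    []<>( (\A a \in Agent : state[a] = "PREPARED")
          \/ (\E a \in Agent : state[a] \notin {"IDLE", "PREPARED"}) )

Liveness == \A a \in Agent : <>(state[a] = "DONE")

\* The strengthened spec: original behaviors, the alignment assumption,
\* and strong fairness on each agent's Done transition.
FairSpec ==
    /\ Spec
    /\ Alignment
    /\ \A a \in Agent : SF_vars(Done(a))

In the model you would then use FairSpec as the specification and check Liveness as a temporal property, instead of checking Liveness against Spec.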

Stephan

[1] https://www.prismmodelchecker.org
[2] For the long-term analysis, you may without loss of generality assume that both states are initial.


On 13 Dec 2023, at 02:21, Calvin Loncaric <c.a.lo...@xxxxxxxxx> wrote:

Occasionally when I set out to specify liveness properties I'll run into a problem like this:

A number of actors each want to transition from IDLE to PREPARED, and then from PREPARED to DONE. Once an actor is DONE they stay DONE, but they can go backwards from PREPARED to IDLE if the situation looks dicey.

IDLE <---> PREPARED ---> DONE

The actors follow these rules:
  1. It is always safe to go from IDLE to PREPARED
  2. It is always safe to revert from PREPARED to IDLE
  3. It is only safe to go from PREPARED to DONE if every actor is PREPARED. Fortunately, the actors can atomically read each other's state and modify their own, so there's no "Two Generals" problem here.
I'm willing to assume strong fairness on (1) and (3).
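The attached spec isn't inlined in this archive, so here is a plausible reconstruction of it. The action names are those reported in the TLC trace below; everything else is guesswork, including reading rule 3's guard as "every actor is PREPARED or already DONE" so that a finished actor doesn't block the others.

---- MODULE NervousActors ----
VARIABLES AliceState, BobState

vars == <<AliceState, BobState>>

Init == AliceState = "IDLE" /\ BobState = "IDLE"

AlicePrepare ==
    /\ AliceState = "IDLE"
    /\ AliceState' = "PREPARED"
    /\ UNCHANGED BobState

AliceRevert ==
    /\ AliceState = "PREPARED"
    /\ AliceState' = "IDLE"
    /\ UNCHANGED BobState

AliceFinish ==
    /\ AliceState = "PREPARED"
    /\ BobState \in {"PREPARED", "DONE"}
    /\ AliceState' = "DONE"
    /\ UNCHANGED BobState

\* Bob's actions are symmetric to Alice's.
BobPrepare ==
    /\ BobState = "IDLE"
    /\ BobState' = "PREPARED"
    /\ UNCHANGED AliceState

BobRevert ==
    /\ BobState = "PREPARED"
    /\ BobState' = "IDLE"
    /\ UNCHANGED AliceState

BobFinish ==
    /\ BobState = "PREPARED"
    /\ AliceState \in {"PREPARED", "DONE"}
    /\ BobState' = "DONE"
    /\ UNCHANGED AliceState

Next ==
    \/ AlicePrepare \/ AliceRevert \/ AliceFinish
    \/ BobPrepare \/ BobRevert \/ BobFinish

\* Strong fairness on (1) and (3):
Spec ==
    /\ Init /\ [][Next]_vars
    /\ SF_vars(AlicePrepare) /\ SF_vars(AliceFinish)
    /\ SF_vars(BobPrepare) /\ SF_vars(BobFinish)

\* The property TLC refutes: not even one actor is guaranteed to finish.
Liveness == <>(AliceState = "DONE" \/ BobState = "DONE")
====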

Amazingly (to me), this turns out not to be enough to ensure that any actor reaches the DONE state. TLC discovers a very clever loop in the attached two-actor spec:

Error: Temporal properties were violated.

Error: The following behavior constitutes a counter-example:

State 1: <Initial predicate>
/\ BobState = "IDLE"
/\ AliceState = "IDLE"

State 2: <AlicePrepare line 14, col 5 to line 16, col 25 of module NervousActors>
/\ BobState = "IDLE"
/\ AliceState = "PREPARED"

State 3: <AliceRevert line 19, col 5 to line 21, col 25 of module NervousActors>
/\ BobState = "IDLE"
/\ AliceState = "IDLE"

State 4: <BobPrepare line 24, col 5 to line 26, col 27 of module NervousActors>
/\ BobState = "PREPARED"
/\ AliceState = "IDLE"

Back to state 1: <BobRevert line 29, col 5 to line 31, col 27 of module NervousActors>

Sure enough, in that trace it is true that Alice always eventually prepares and Bob always eventually prepares, but they get very unlucky and never manage to be prepared at the same time.

I don't think there's a fairness criterion I can add to make the liveness property hold. What is an elegant way to express the (vague, informal) requirement that "Alice and Bob never stay misaligned forever"?

I have a few ideas, but none that generalize cleanly to systems with more than two actors or to algorithms with more than two steps per actor.

--
Calvin



<NervousActors.tla>
