# Re: [tlaplus] What to do when strong fairness isn't strong enough?

Hi,

I've found a fairness requirement that works in this case, shown below.

But first, worth mentioning that TLC is right: according to the specification, it's not clear that the actors will ever converge.
For example, maybe in practice one actor moves to PREPARE state at 3 PM once per day, waits 1 hour, and if no other actor is in PREPARE state, goes back to IDLE.
The other actor moves to PREPARE state at 10 PM and waits for 1 hour. They'll never align.

But, if you can convince yourself that in your system that's not the case, then for two actors we need another strong fairness requirement, informally:
(i) If one actor is in the PREPARE state infinitely often, then the other actor will eventually move to the PREPARE state as well. After that, your strong fairness requirement (3) takes effect.
For three actors, we need two strong fairness conditions:
(ii) If two actors are in the PREPARE state infinitely often, then the third actor will move to that state as well, combined with condition (i).
For four actors we already need three strong fairness conditions.

What's happening here is that one actor is periodically in the PREPARE state, but there's a requirement that if that happens then another actor will move to that state.
Now two actors are periodically in the PREPARE state, and there's a requirement that if that happens then another actor will move to that state.
And so on and so forth.
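Concretely, condition (i) for the two-actor case can be sketched as follows (a sketch only; `pc`, `ProcSet`, and `vars` are assumed to come from the PlusCal translation, and the operator names here are illustrative):

```tla
\* Condition (i), sketched: whenever some actor is in PREPARE while
\* another is not, a lagging actor is moved into PREPARE as well,
\* and we demand strong fairness on this combined condition/action.
SomeActorInPrepare    == \E self \in ProcSet : pc[self] = "PREPARE"
SomeActorNotInPrepare == \E self \in ProcSet : pc[self] # "PREPARE"

MoveLaggardToPrepare ==
    pc' = [pc EXCEPT ![CHOOSE self \in ProcSet : pc[self] # "PREPARE"] = "PREPARE"]

TwoActorConvergingSpec ==
    /\ Spec
    /\ SF_vars(SomeActorInPrepare /\ SomeActorNotInPrepare /\ MoveLaggardToPrepare)
```

This is the n = 1 instance of the generalized approach shown further down.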

The specification and the MC files are available here: https://gist.github.com/samyonr/257b8a04f0111cb8fd98068901302795

In the link, ConvergingLivenessCondition.tla is the spec itself.
In it, you'll find:

- the basic spec, for any number of actors;
- an example of a strong fairness requirement for two actors;
- another example for three actors;
- a generalization that fails with a Java StackOverflowError, because it uses a recursive definition containing SF_vars (written to resemble the specific two- and three-actor cases);
- and a workaround, without recursion, for any number of actors.

You'll also find MC files for two actors, for three actors, for three actors using the generalized approach but written out manually, and for six actors using the generalized approach, which works for any number of actors without having to write the conditions manually.

Note that this check is heavy: running it for six actors took me 12 seconds, and I wasn't able to complete the run for ten actors because TLC ran out of memory (the memory limit can be adjusted in the configuration).

Here's the generalized approach:

```tla
ChooseActorNotInPrepare(actors) == CHOOSE self \in actors : pc[self] # "PREPARE"
ChooseActorInPrepare(actors)    == CHOOSE self \in actors : pc[self] = "PREPARE"

ConvergingGeneralAction ==
    pc' = [pc EXCEPT ![ChooseActorNotInPrepare(ProcSet)] = "PREPARE"]

RECURSIVE InPrepareState(_, _)

InPrepareState(n, actors) ==
    IF n = 0 THEN TRUE
    ELSE /\ n > 0
         /\ actors # {}
         /\ \E selfPrepared \in actors : pc[selfPrepared] = "PREPARE"
         /\ InPrepareState(n - 1, actors \ {ChooseActorInPrepare(actors)})

ConvergingConditionForNActors(n) ==
    /\ InPrepareState(n, ProcSet)
    /\ \E selfNotPrepare \in ProcSet : pc[selfNotPrepare] # "PREPARE"

ConvergingSpec ==
    /\ Spec
    /\ \A i \in 1..(NumOfActors - 1) :
          SF_vars(ConvergingConditionForNActors(i) /\ ConvergingGeneralAction)

THEOREM ConvergingSpec => EventuallyDone
```

Hope it helps,
Samyon.

On Wednesday, December 13, 2023 at 5:34:27 PM UTC+2 Andrew Helwer wrote:
Another approach (which I like to use for eventually-consistent systems) is to add, in an enclosing MC_Spec.tla, a boolean variable called converge. The converge variable starts out false but can be set to true at any time. Once true, it disallows anything that would interfere with the system converging: in eventually-consistent systems it stops additional writes until the system reaches the goal state; in your system it would stop agents from transitioning back to IDLE, or stop whatever it is that causes them to do that. Your liveness property then becomes converge ~> Liveness.
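A minimal sketch of this pattern, assuming the attached NervousActors.tla exposes Init, Next, and vars, and using the AliceRevert/BobRevert action names from the TLC trace (everything else here is illustrative):

```tla
---------------------------- MODULE MC_Spec ----------------------------
EXTENDS NervousActors

VARIABLE converge
mcVars == <<vars, converge>>

MCInit == Init /\ converge = FALSE

\* converge may flip from FALSE to TRUE at any time, and then stays TRUE.
SetConverge == /\ converge = FALSE
               /\ converge' = TRUE
               /\ UNCHANGED vars

\* Once converge is TRUE, disallow the steps that undo progress
\* (here, the reverts from PREPARED back to IDLE).
MCNext == \/ SetConverge
          \/ /\ Next
             /\ converge => ~(AliceRevert \/ BobRevert)
             /\ UNCHANGED converge

\* The fairness conditions of the original spec would be conjoined here too.
MCSpec == MCInit /\ [][MCNext]_mcVars

\* The liveness property then becomes converge ~> Liveness:
ConvergeLiveness == converge ~> (AliceState = "DONE" /\ BobState = "DONE")
=============================================================================
```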

Andrew

On Wednesday, December 13, 2023 at 9:45:09 AM UTC-5 Stephan Merz wrote:
Hi Calvin,

nice problem! The counter-example computed by TLC shows that there exists a behavior in which it is never the case that all agents are PREPARED. No fairness condition on the transitions of single agents will imply that the states of all agents are eventually aligned.

Now, if you model the (initial part of each) agent as a continuous-time Markov chain with two states IDLE and PREPARED and some positive transition rates (w.r.t. an exponential distribution) between those two states, then you can convince yourself, and a tool such as Prism [1] can confirm, that the set of executions of any fixed number of such agents in which, at all times, at least one agent is in state IDLE has measure zero. In other words, almost surely all agents will be PREPARED at some point (and, continuing from there [2], they'll align again at some future point with probability 1).

If you are willing to rule out such executions, you could add the assumption

```tla
[]<>((\A a \in Agent : state[a] = "PREPARED") \/ (\E a \in Agent : state[a] \notin {"IDLE", "PREPARED"}))
```

to your spec, together with strong fairness for the transitions from PREPARED to DONE. The condition asserts that for as long as all agents perform the loop between IDLE and PREPARED, they'll repeatedly align on state PREPARED.
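Putting the pieces together, the strengthened specification might be sketched as follows (assuming `Done(a)` names agent a's PREPARED-to-DONE transition and `vars` the tuple of all variables; both names are illustrative, not from the original spec):

```tla
\* Rule out the unlucky executions: repeatedly, either all agents are
\* PREPARED at once, or some agent has left the IDLE/PREPARED loop.
Alignment ==
    []<>( \/ (\A a \in Agent : state[a] = "PREPARED")
          \/ (\E a \in Agent : state[a] \notin {"IDLE", "PREPARED"}) )

\* Strong fairness on the PREPARED -> DONE transitions, plus the assumption.
LiveSpec ==
    /\ Spec
    /\ Alignment
    /\ \A a \in Agent : SF_vars(Done(a))

THEOREM LiveSpec => <>(\A a \in Agent : state[a] = "DONE")
```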

Stephan

[2] For the long-term analysis, you may without loss of generality assume that both states are initial.

On 13 Dec 2023, at 02:21, Calvin Loncaric <c.a.lo...@xxxxxxxxx> wrote:

Occasionally when I set out to specify liveness properties I'll run into a problem like this:

A number of actors each want to transition from IDLE to PREPARED, and then from PREPARED to DONE. Once an actor is DONE they stay DONE, but they can go backwards from PREPARED to IDLE if the situation looks dicey.

IDLE <---> PREPARED ---> DONE

1. It is always safe to go from IDLE to PREPARED
2. It is always safe to revert from PREPARED to IDLE
3. It is only safe to go from PREPARED to DONE if every actor is PREPARED. Fortunately, the actors can atomically read each other's state and modify their own, so there's no "Two Generals" problem here.
I'm willing to assume strong fairness on (1) and (3).
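The three transitions above can be sketched as a minimal two-actor spec (a sketch only; the attached NervousActors.tla is the authoritative version, and apart from the variable and action names that appear in the TLC trace, the names here are illustrative):

```tla
---------------------------- MODULE TwoActors ----------------------------
VARIABLES AliceState, BobState
vars == <<AliceState, BobState>>

Init == AliceState = "IDLE" /\ BobState = "IDLE"

\* (1) It is always safe to go from IDLE to PREPARED.
AlicePrepare == /\ AliceState = "IDLE"
                /\ AliceState' = "PREPARED"
                /\ UNCHANGED BobState

\* (2) It is always safe to revert from PREPARED to IDLE.
AliceRevert == /\ AliceState = "PREPARED"
               /\ AliceState' = "IDLE"
               /\ UNCHANGED BobState

\* (3) PREPARED -> DONE only when every other actor is PREPARED
\*     (or already DONE, since DONE actors stay DONE).
AliceFinish == /\ AliceState = "PREPARED"
               /\ BobState \in {"PREPARED", "DONE"}
               /\ AliceState' = "DONE"
               /\ UNCHANGED BobState

\* Bob's actions are symmetric.
BobPrepare == /\ BobState = "IDLE" /\ BobState' = "PREPARED" /\ UNCHANGED AliceState
BobRevert  == /\ BobState = "PREPARED" /\ BobState' = "IDLE" /\ UNCHANGED AliceState
BobFinish  == /\ BobState = "PREPARED" /\ AliceState \in {"PREPARED", "DONE"}
              /\ BobState' = "DONE" /\ UNCHANGED AliceState

Next == AlicePrepare \/ AliceRevert \/ AliceFinish
     \/ BobPrepare \/ BobRevert \/ BobFinish

\* Strong fairness on (1) and (3), per the assumptions above.
Spec == /\ Init /\ [][Next]_vars
        /\ SF_vars(AlicePrepare) /\ SF_vars(AliceFinish)
        /\ SF_vars(BobPrepare) /\ SF_vars(BobFinish)

EventuallyDone == <>(AliceState = "DONE" /\ BobState = "DONE")
=============================================================================
```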

Amazingly (to me), this turns out not to be enough to ensure that any actor reaches the DONE state. TLC discovers a very clever loop in the attached two-actor spec:

```
Error: Temporal properties were violated.

Error: The following behavior constitutes a counter-example:

State 1: <Initial predicate>
/\ BobState = "IDLE"
/\ AliceState = "IDLE"

State 2: <AlicePrepare line 14, col 5 to line 16, col 25 of module NervousActors>
/\ BobState = "IDLE"
/\ AliceState = "PREPARED"

State 3: <AliceRevert line 19, col 5 to line 21, col 25 of module NervousActors>
/\ BobState = "IDLE"
/\ AliceState = "IDLE"

State 4: <BobPrepare line 24, col 5 to line 26, col 27 of module NervousActors>
/\ BobState = "PREPARED"
/\ AliceState = "IDLE"

Back to state 1: <BobRevert line 29, col 5 to line 31, col 27 of module NervousActors>
```

Sure enough, in that trace it is true that Alice always eventually prepares and Bob always eventually prepares, but they get very unlucky and never manage to be prepared at the same time.

I don't think there's a fairness criterion I can add to make the liveness property hold. What is an elegant way to express the (vague, informal) requirement that "Alice and Bob never stay misaligned forever"?

I have a few ideas, but none that generalize cleanly to systems with more than two actors or to algorithms with more than two steps per actor.

--
Calvin

--
You received this message because you are subscribed to the Google Groups "tlaplus" group.
To unsubscribe from this group and stop receiving emails from it, send an email to tlaplus+u...@xxxxxxxxxxxxxxxx.
To view this discussion on the web visit https://groups.google.com/d/msgid/tlaplus/CABf5HMjvUJAMb5GW7wLsJPe5Y0deXqC1pr4pRzMZRAchufKOzQ%40mail.gmail.com.
<NervousActors.tla>
