
Re: [tlaplus] Modelling thread preemption



Hi Stephan,

Thanks for your reply. Some comments below, inlined:

On Friday, 12 January 2018 17:34:24 UTC, Stephan Merz wrote:
> - I turned your procedures into macros: that's not essential, but since there is no recursion, the procedure bodies contain simple straight-line code, and you have a label in front of every call anyway, it looks more natural to me, the translation to TLA+ gets simpler, and the number of states is reduced a lot due to the rather useless intermediate states.

My spec was simplified to keep it short, but something like do_work() is going to be large enough not to be a macro. Also, a macro does not allow labels, and I really want to check a race condition between threads. For example, a thread updates two CPU registers, say ttbr0 and ttbr1 (the page table pointers). Being interrupted between these two non-atomic updates can have implications if the interrupt handler or another scheduled thread touches the same registers.

With additional labels, the only way to stop a thread once it has been scheduled out is to add an "await(current = self)" for every label, as in the sketch below.
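
Something like the fragment below is what I have in mind; a rough sketch only, with the variable declarations, the scheduler and the interrupt handler omitted, and the names (current, ttbr0, ttbr1, THREADS) taken from my simplified spec rather than from your reworked module:

    \* Every label is a potential preemption point, so each labelled step
    \* has to re-check that this thread is still the one on the CPU.
    process Thread \in THREADS
    begin
      t1: await current = self;
          ttbr0 := self;              \* first half of the page table switch
      t2: await current = self;       \* may be scheduled out between t1 and t2,
          ttbr1 := self;              \* leaving ttbr0 and ttbr1 inconsistent
    end process;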
 
> - I removed the startup code from the threads: I didn't understand why the initial processor assignment would be based on a race rather than on a decision by the scheduler (through the "schedule" operation). This allowed me to get rid of your hack with making ProcSet dynamic: a thread can execute only if it is the chosen one, although the set of potentially executable threads is static.

This was indeed a hack. I could have added an await(current = self) at the start instead.
 
> - Consequently, I initialized "running" to the empty set. I also rewrote the invariant to a formulation that looks easier to me.

Since Thread was defined "\in running", I needed "running" (and hence ProcSet) to include all processes initially; otherwise Init doesn't do what's expected (see the translation sketch below). Defining Thread \in THREADS instead needs additional await statements at each label of the process.
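
To spell out the first point: if I remember the generated code correctly, with "process Thread \in running" the translation comes out roughly as below (simplified to a single process template, labels renamed), so pc only gets entries for whatever running contains at Init:

    ProcSet == running

    Init == /\ running = THREADS                 \* all threads present initially
            \* ... other variable initialisations ...
            /\ pc = [self \in ProcSet |-> "t1"]  \* with running = {} this function
                                                 \* is empty and no thread step
                                                 \* would ever be enabled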
 
> For your module, verification of liveness fails because the generated fairness condition is
>
>         /\ \A self \in running : WF_vars(Thread(self)) /\ WF_vars(do_work(self))
>
> and TLC can only handle quantifiers over constant sets in liveness checking. With the above modifications, the bound of the quantifier is THREADS and liveness checking becomes possible.

Indeed.
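
If I read the reworked module right, with the processes bound to the constant THREADS set and do_work inlined as a macro, the generated fairness condition should reduce to something TLC can evaluate, roughly:

        /\ \A self \in THREADS : WF_vars(Thread(self))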
 
> However, TLC produces a counter-example in which a terminated thread gets scheduled repeatedly although there is some non-terminated thread.
>
> I therefore rewrote the schedule operation so that it will only schedule threads that have not finished (and "idle" if all threads have finished). Now, assuming weak fairness we still get a counter-example in which an executable thread gets scheduled but pre-empted before it does any work. This happens because the thread is not persistently executable (it can't execute while the interrupt handler is active). Using strong fairness for the threads the liveness condition is checked.
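
That makes sense. If I understand it right, in PlusCal terms the change amounts to declaring the thread processes with "fair+" rather than "fair" (my guess at the syntax, not necessarily how the reworked module is written):

    fair+ process Thread \in THREADS

so that the translation asserts SF_vars(Thread(self)) instead of WF_vars(Thread(self)), which tolerates the thread being disabled while the interrupt handler is active.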

Thanks for the hints and reworked module.

I'll keep thinking about a different level of abstraction (too much programming may have affected my mental modelling ;)).
 
Catalin