
Re: [tlaplus] Modelling thread preemption



Hi,

I made a few changes to your module:

- I turned your procedures into macros. That's not essential, but since there is no recursion, the procedure bodies contain simple straight-line code, and you have a label in front of every call anyway, macros look more natural to me. The translation to TLA+ also gets simpler, and the number of states drops considerably because the rather useless intermediate states disappear (a small before/after sketch follows this list).

- I removed the startup code from the threads: I didn't understand why the initial processor assignment should be based on a race rather than on a decision by the scheduler (through the "schedule" operation). This also allowed me to get rid of your hack of making ProcSet dynamic: a thread can execute only when it is the chosen one, while the set of potentially executable threads stays static.

- Consequently, I initialized "running" to the empty set. I also rewrote the invariant in a formulation that I find easier to read.
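To illustrate the macro change (just a sketch; the full modules below are authoritative), your procedure and its call

        procedure do_work(thread) {
        do_work:
         worked[thread] := TRUE;
         return;
        }

        ...
        call do_work(self);

become

        macro do_work(thread) {
         worked[thread] := TRUE;
        }

        ...
        do_work(self);

The macro body is expanded in place by the translator, so the call/return steps and the implicit stack variable disappear.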

For your module, verification of liveness fails because the generated fairness condition is

        /\ \A self \in running : WF_vars(Thread(self)) /\ WF_vars(do_work(self))

and TLC can only handle quantifiers over constant sets in liveness checking. With the above modifications, the bound of the quantifier is THREADS and liveness checking becomes possible. However, TLC produces a counter-example in which a terminated thread gets scheduled repeatedly although there is some non-terminated thread.
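Concretely, after the rewrite the quantifier bound is the constant set THREADS, so the generated condition becomes roughly

        /\ \A self \in THREADS : WF_vars(Thread(self))

which TLC can evaluate during liveness checking.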

I therefore rewrote the schedule operation so that it only schedules threads that have not finished (and schedules "idle" if all threads have finished). Now, assuming weak fairness, we still get a counter-example in which an executable thread gets scheduled but is pre-empted before it does any work. This happens because the thread is not persistently executable (it cannot execute while the interrupt handler is active). With strong fairness for the threads, the liveness condition is verified.
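In PlusCal, strong fairness for a process is requested with "fair+" (as in the module below); the translator then generates roughly

        /\ \A self \in THREADS : SF_vars(Thread(self))

in place of the weak-fairness conjunct.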

The overall module is below. I am not saying that it is better than yours: some changes are purely stylistic in nature.

Best regards,
Stephan

------------------------------- MODULE sched -------------------------------
EXTENDS Naturals, Sequences, FiniteSets, TLC

CONSTANTS CPU, THREADS

ASSUME CPU \notin THREADS

(* --algorithm sched {
variables
 running = {};
 current = "idle";
 worked = [t \in THREADS |-> FALSE];

define {
 executable ==  \* threads eligible for scheduling
   {th \in THREADS : ~ worked[th]}

 SchedInv == \/ current = "idle" /\ running = {}
             \/ running = {current}

 AllWorked == <>(\A t \in THREADS : worked[t])
}

macro do_work(thread)
{
 worked[thread] := TRUE;
}

macro do_handler()
{
 skip; \* some work
}

macro schedule()
{
 \* pick any executable thread, even the current one
 if (executable = {}) {
   running := {}; current := "idle"
 } else {
   with (t \in executable) {
     running := {t};
     current := t
   }
 };
}

\* Threads
fair+ process (Thread \in THREADS)
{
main:
 while (TRUE) { 
   await(current = self);
   do_work(self)
 }
}

\* Interrupts
fair process (Interrupt = CPU)
{
interrupt:
 while (TRUE) {
   \* stop current thread
   running := {};
   current := "idle";
handler:
   do_handler();
resched:
   schedule();
 }
}
} *)
=============================================================================



On 12 Jan 2018, at 15:09, Catalin Marinas <catalin...@xxxxxxxxx> wrote:

Hi,

The usual disclaimer for a first post to this group: I’m new to TLA+ (and I have already found it useful, discovering a couple of software bugs [1]).

What I’m trying to model now is OS thread preemption on a single CPU [2] (uniprocessor; my final model has multiple CPUs). Basically, only one thread can run at a time (and modify the CPU state), while it can be interrupted and scheduled out at any point (well, at label granularity). Modelling each thread as a TLA+ process closely matches the way Linux kernel programmers think about concurrency and preemption. The interrupt handler is modelled as yet another TLA+ process. All these processes (threads and interrupt handler) race to modify some CPU state (not defined here).

Since only one process is allowed to run at a time, I found it easier to make ProcSet non-constant: the union of the dynamically changing "running" set and the CPU. Note that the interrupt handler doesn’t run in parallel with any other process ("running" is emptied temporarily).
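Concretely, with the processes declared as "Thread \in running" and "Interrupt = CPU" (as in the module below), the translator defines a state-dependent process set, roughly

        ProcSet == (running) \cup {CPU}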

While this seems to work, the downside is that "running" being non-constant prevents TLC from checking some temporal properties ("AllWorked" in the spec below): it complains that it cannot handle the temporal formula Spec (I used 'pcal -wf' for fairness). If I ignore this property, the model checks successfully, but I wanted to ensure that each thread is eventually scheduled.

Defining the process as Thread \in THREADS wouldn't go well with enforcing a single "active" thread at a time (between two schedule() calls) unless I added some "await" statement after every label (not really feasible).
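That is, every labelled step would have to start with a guard, along these lines (a sketch; "some_label" stands for each label in the real model):

        some_label:
         await current = self;
         \* ... the actual statement of this step ...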

The alternative I was thinking of is to define a single process (per CPU) which would either call the interrupt handler or the current thread. However, modelling thread preemption would then require encoding the thread execution as an explicit state machine with something like a "thread_pc" variable, pretty much mimicking what PlusCal generates but harder to read.
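A rough sketch of what that manual encoding could look like ("thread_pc", "run_thread" and the step names are made up for illustration):

        variables thread_pc = [t \in THREADS |-> "work"];
        ...
        macro run_thread(t) {
          if (thread_pc[t] = "work") {
            worked[t] := TRUE;
            thread_pc[t] := "done";
          }
        }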

My question: is there a better solution? Or should I go for a deeper abstraction than just modelling OS threads directly?

Thanks,

Catalin

[1] In the ARM architecture (and not only), the TLB entries (a cache for virtual-to-physical address translation) are tagged with an ASID (address space identifier), unique per process (threads of an application share the same ASID). Since the number of ASIDs can be smaller than the number of processes running, we have an algorithm in Linux to re-use ASIDs without forcing a temporary stop of all the CPUs when running out. TLC confirmed one bug+fix that we were investigating (the CnP case in the spec) and discovered a new one after nearly 200 steps (http://lists.infradead.org/pipermail/linux-arm-kernel/2018-January/551765.html).

[2] Simplified preemptible-threads model. SMP support, various thread states, and type invariants are removed here to keep things short:

sched.tla:
------------------------------ MODULE sched ---------------------------------
EXTENDS Naturals, Sequences, FiniteSets, TLC

CONSTANTS CPU,
          THREADS

(* --algorithm sched {
variables
 running = THREADS;
 current = "idle";
 worked = [t \in THREADS |-> FALSE];

define {
 SchedInv == /\ \A t \in running : current \in {t, "idle"}
             /\ current # "idle" => Cardinality(running) <= 1

 AllWorked == <>(\A t \in THREADS : worked[t])
}

procedure do_work(thread)
{
do_work:
 worked[thread] := TRUE;
 return;
}

procedure do_handler()
{
do_handler:
 skip; \* some work
 return;
}

procedure schedule()
{
schedule:
 \* pick any thread, even the current one
 with (t \in THREADS) {
   running := {t};
   current := t;
 };
 return;
}

\* Threads
process (Thread \in running)
{
init_thread:
 \* only a single thread allowed at the same time
 running := {self};
 current := self;
start_thread:
 assert self \in running;
main:
 while (TRUE)
   call do_work(self);
}

\* Interrupts
process (Interrupt = CPU)
{
interrupt:
 while (TRUE) {
   \* stop current thread
   running := {};
handler:
   call do_handler();
resched:
   call schedule();
 }
}
} *)
==============================================================================


sched.cfg:
SPECIFICATION Spec
CONSTANT defaultInitValue = defaultInitValue
\* Add statements after this line.

CONSTANTS CPU = p1
          THREADS = {t1, t2}

INVARIANTS SchedInv

\*PROPERTY AllWorked

