
[tlaplus] Re: Using PlusCal/TLA+ to model an interrupt driven system



You have to add a constraint to your main (non-interrupt) code so that the system only takes a non-interrupt step when there are no IRQs in the queue.

There are a variety of ways to model this; it might be easiest to use two queues, one of non-interrupt instructions and one of IRQs. Your CPU executes any IRQs in the interrupt queue unless that queue is empty, in which case it executes instructions from the non-interrupt queue.
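A minimal PlusCal sketch of that idea (illustrative only, not code from this thread), assuming an extra variable instr_queue declared alongside irq_queue; the if-test is the constraint that keeps the CPU away from main code while an interrupt is pending:

process cpu = "cpu"
begin
CpuStep:
  while TRUE do
    if irq_queue /= <<>> then
      \* an IRQ is pending: handle it before doing anything else
      irq_queue := Tail(irq_queue);
    else
      \* only reachable when no IRQ is pending
      await instr_queue /= <<>>;
      instr_queue := Tail(instr_queue);
    end if;
  end while;
end process;

The else branch is only enabled when irq_queue is empty, which is exactly the constraint described above.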

Andrew

On Tuesday, November 30, 2021 at 10:36:46 AM UTC-5 Erling Jellum wrote:
Hi people.

I am a beginner in TLA+ and PlusCal, and I am running into some problems that make me think I have some fundamental misconceptions about PlusCal/TLA+.

My goal is to model the possible behaviours of a piece of software running on a microcontroller. This software is interrupt-driven. There are a number of peripherals that interrupt program flow, and the CPU handles those interrupts. Eventually I want to verify real-time properties of this system.

My first attempt is to model this as the peripherals adding IRQs to a FIFO and the CPU dequeuing IRQs from the FIFO and running the ISR. At this level I don't model the ISR at all; it's really just dequeuing IRQs.

It seems to me that PlusCal processes are treated as threads and that any interleaving of the threads is a behaviour. In my case, once there is an IRQ the CPU handles it immediately; there is no possibility of "stuttering" while the peripherals just keep adding IRQs. When checking my model with TLC, it immediately fails, because one possible behaviour is that the peripherals just add IRQs while the CPU (even though enabled) never does anything. I have read about fairness, but it does not seem to solve my problem here. I have also read in an earlier email here that

I would be very grateful for pointers on how to model this more correctly. Keep in mind that I want to extend this model with minimum latencies between interrupt requests and a WCET for the interrupt handling. In the end, I want to formally verify that under no circumstances will I lose data because a new IRQ arrived before the previous one was processed.
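(Illustrative aside on the fairness remark above, not part of the original message: in PlusCal, fairness is declared on the process header, as in the sketch below, which reuses the names from the model that follows. Weak fairness only guarantees that the cpu step, while enabled, is eventually taken; it puts no bound on how many IRQs the peripherals may enqueue first, so it does not by itself prevent the queue from overflowing.)

fair process cpu = "cpu"
begin
HandleIrq:
  while TRUE do
    await Len(irq_queue) > 0;
    handle_isr();
  end while;
end process;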

---------------------------- MODULE interrupts ----------------------------
EXTENDS Integers, Sequences, TLC, FiniteSets
CONSTANT Peripherals

(* The FIFO should never hold more pending IRQs than there are peripherals. *)
MaxPendingIrqs == Cardinality(Peripherals)


(* --algorithm sb
variables irq_queue = <<>>;

(*CPU handles incoming IRQ by just dequeuing it*)
macro handle_isr()
begin
  assert Len(irq_queue) > 0;
  irq_queue := Tail(irq_queue);
end macro;

(*Peripherals signal the CPU by appending an IRQ to the FIFO*)
macro signal_irq(t)
begin
  assert Len(irq_queue) < MaxPendingIrqs;
  irq_queue := Append(irq_queue, t);
end macro;


process cpu = "cpu"
begin
A:
  while TRUE do
    await Len(irq_queue) > 0;
    handle_isr();
  end while;
end process;


process peripherals \in Peripherals
begin
B:
  while TRUE do
    either
      signal_irq(self);
    or
      skip;
    end either;
  end while;
end process;
end algorithm *)
=======================================================================
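(Sketch, not from the original post: one way to state the "never lose data" requirement as an invariant TLC can check, assuming each peripheral can have at most one IRQ outstanding at a time. NoLostIrq is a made-up name; the definition would go inside the module, after the PlusCal translation, and be listed as an invariant in the TLC model.)

NoLostIrq == \A p \in Peripherals :
                 Cardinality({i \in 1..Len(irq_queue) : irq_queue[i] = p}) <= 1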
