
Re: [tlaplus] Novel queue-lock-like concurrent spin-lock formalization in TLA+



One alternative is to accept the factorial nature of things and run your model for however long in simulation mode, gaining additional confidence in its linearizability over time.
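For reference, simulation mode can be run from the command line; a sketch using the standard tla2tools.jar distribution (the spec name here is invented):

    java -cp tla2tools.jar tlc2.TLC -simulate -depth 1000 QueueLock

Here -simulate generates random behaviors instead of exhaustively exploring the state graph, and -depth bounds the length of each generated behavior.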

Andrew
On Monday, March 22, 2021 at 1:06:45 PM UTC-4 Dan Tisdall wrote:
Hi Andrew,

Thanks for your comment and the reference to Mr Chand's work; I will take a deeper look at that. The point that you raise regarding blowup is a good one. My current idea for handling this, at least somewhat, is as follows:
I will use an operation counter to fix the total number of data-structure operations across all threads, and then make exactly one process print the history variable after all the operations have completed. By using VIEW I could solve the problem you describe, but that would create a new one at the same time: different histories (all of which I want to check) would now be ignored whenever the algorithm/data structure's state is the same. I am thinking that this would be OK in the case of my data structure. I suppose this is falling back on the small-model hypothesis somewhat.
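Concretely, the VIEW idea would look something like this (variable names are made up for illustration, not the ones in my spec): the spec defines a view tuple that omits the history variable, and the TLC configuration file selects it, so that TLC identifies states differing only in their histories:

    \* In the spec: everything TLC should use to distinguish states,
    \* deliberately omitting the history variable `hist`.
    View == <<lockState, opCount>>

    \* In the .cfg file:
    VIEW View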

Best,

Dan

On Monday, March 22, 2021 at 3:27:07 PM UTC andrew...@xxxxxxxxx wrote:
History logs are difficult to model-check because they cause a factorial blowup of the model: action A happening before action B leads to a different state than action B happening before action A, even if the non-history system state is identical. Saksham Chand did some interesting work using history variables to prove correctness with TLAPS instead of TLC, which might be relevant to you: https://www.microsoft.com/en-us/research/video/specification-and-verification-of-multi-paxos/
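A toy illustration of the blowup (not taken from your spec): two commuting increments that also log themselves reach the same x but different hist values, so TLC keeps both states.

    ---- MODULE HistoryBlowup ----
    EXTENDS Naturals, Sequences
    VARIABLES x, hist

    Init == x = 0 /\ hist = <<>>

    \* Each action increments x and records itself in the history.
    A == x' = x + 1 /\ hist' = Append(hist, "A")
    B == x' = x + 1 /\ hist' = Append(hist, "B")

    \* After A then B: x = 2, hist = <<"A", "B">>.
    \* After B then A: x = 2, hist = <<"B", "A">>.
    \* Identical non-history state, yet two distinct states for TLC.
    Next == A \/ B
    ====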

Andrew

On Sunday, March 21, 2021 at 12:03:35 PM UTC-4 Dan Tisdall wrote:
Hi Steven,

I was wondering if you ever managed to check the linearizability of your algorithm. I am also working on a model whose linearizability I would like to check, and I wondered if you had succeeded in doing so and, if so, how you went about tracking and exporting a log?

Thanks and best,

Dan

On Saturday, June 24, 2017 at 5:50:17 PM UTC+1 Stephan Merz wrote:
Hi Steven,

sorry for replying so late: I had a look at your PlusCal model of the algorithm, and I think that it is very competently written. (I have not tried to understand the algorithm, though.)

A (standard) suggestion for mitigating state-space explosion is to reduce labels as much as you can, as long as doing so does not affect the verdict of model checking. For example, I noticed some patterns like the following:

            if Acquired then
                RET: return;
            end if;
        UPDATE_COUNTER:
            Counter := Counter + 1;

and I believe that the only reason for having label UPDATE_COUNTER is syntactic, since PlusCal needs a label at the statement following a conditional statement. However, you can rewrite your code to the semantically equivalent

            if Acquired then
                RET: return;
            else
                Counter := Counter + 1;
            end if;

and save a label. This is relevant because TLC will non-deterministically schedule any executable thread at every label.

Of course, state explosion will set in despite these changes, and then you should think about how many threads you would like to check in order to be reasonably confident about correctness.

Best regards,
Stephan


On 16 Jun 2017, at 02:12, Steven Stewart-Gallus <stevensele...@xxxxxxxxx> wrote:

See https://sstewartgallus.com/git?p=qlock.git;a=blob;f=tla/StackLock.tla;h=6884d808b5ffd605e912f4b12ae38928fd9b8313;hb=HEAD for the code.

On many-core machines, spin locks can be slow because the cores ping each other constantly. To improve on this, people often use test-and-test-and-set locks, since loads from a memory location are usually cheaper than exchange and compare-and-exchange operations. However, there is still a small amount of inter-core communication, which can be expensive on machines with a great many cores. To improve on this further, the MCS and CLH queue locks were invented: in these algorithms, each waiting core spins on its own separate memory location, which is linked into a queue. This may be faster in some scenarios, although on machines with few cores the added complexity often makes it slower.

A deficiency of these algorithms is that the waiters are served in strict FIFO queue order and no other scheme is possible. One might want, for example, a priority queue (for hard real-time scenarios) or a stack (for better raw performance). I came up with an algorithm (which I think is new, but might not be, as I am not really an expert on this topic) that allows the ADT details to be replaced by any structure offering push, pop and isEmpty functions, so it can be used with stacks or priority queues alike, as sketched below.
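As a sketch of what I mean by replaceable ADT details (illustrative TLA+ over sequences, not the actual definitions from StackLock.tla), a stack instance of the interface might be:

    \* Waiter container modeled as a sequence, used as a LIFO stack.
    \* EXTENDS Sequences is assumed for \o, Head and Tail.
    Push(s, e)  == <<e>> \o s      \* new waiter goes on top
    Pop(s)      == Tail(s)         \* drop the most recent waiter
    Top(s)      == Head(s)         \* waiter to hand the lock to next
    IsEmpty(s)  == s = <<>>

A FIFO queue or a priority queue would simply swap in different definitions of Push, Pop and Top.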

I implemented this algorithm in Rust (for the specific case of a stack) and then formalized the abstract details in TLA+, which led to me discovering and fixing a deadlock bug. So far I have run and tested the model with a number of threads, but past 4 or so threads the model checker becomes very slow. Future steps should include checking whether more properties can be asserted and tested in the model, scaling the analysis to a larger number of threads, possibly formalizing an abstract proof of correctness for any number of threads, and experimentally validating an operation log of the implementation for linearizability with a checker such as Knossos.

Does my TLA+ code look correct? Can anyone suggest any further steps I should take? Does anyone have any other thoughts?
