Re: [tlaplus] TLC on the GPU
Hi,
I am joining this conversation a bit late. It seems to me that the main challenge with parallelizing explicit-state enumeration is that transition systems of concurrent and distributed algorithms do not have much locality.
Since Apalache was mentioned in this thread, there is probably a way to exploit GPUs for speeding up symbolic model checking. The main SMT encoding implemented in Apalache is almost propositional, with the exception of integers and string constants [1]. So instead of translating a TLA+ spec to SMT, we could translate it to SAT. (If we fix the bit width, we can translate integers by bit blasting.) That would let us run a GPU-enabled SAT solver such as ParaFROST [2].
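To make the bit-blasting step concrete, here is a toy sketch in Python (my own illustration for this thread, not Apalache's actual rewriting rules): fix a width of four bits, represent each integer as a vector of boolean variables, and emit CNF clauses for a ripple-carry adder. The resulting integer-literal clauses are exactly the kind of input a DIMACS-reading SAT solver such as ParaFROST consumes.

from itertools import count

fresh = count(1)          # DIMACS variables are positive integers
clauses = []              # each clause is a list of integer literals

def new_int(width=4):
    # allocate `width` fresh boolean variables, least significant bit first
    return [next(fresh) for _ in range(width)]

def full_adder(a, b, cin):
    # emit CNF for s = a XOR b XOR cin and cout = MAJORITY(a, b, cin)
    s, cout = next(fresh), next(fresh)
    for sa in (1, -1):
        for sb in (1, -1):
            for sc in (1, -1):
                # the parity of the forbidden input point fixes the sign of s
                odd = (sa < 0) ^ (sb < 0) ^ (sc < 0)
                clauses.append([sa * a, sb * b, sc * cin, s if odd else -s])
    clauses += [[-a, -b, cout], [-a, -cin, cout], [-b, -cin, cout],
                [a, b, -cout], [a, cin, -cout], [b, cin, -cout]]
    return s, cout

def add(xs, ys):
    # ripple-carry adder; the final carry is dropped, so addition wraps
    carry = next(fresh)
    clauses.append([-carry])          # the initial carry-in is false
    out = []
    for a, b in zip(xs, ys):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out

x, y = new_int(), new_int()
z = add(x, y)     # `clauses` now constrains z = (x + y) mod 16
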
Apalache was designed with alternative encodings in mind. For instance, we are currently experimenting with SMT arrays, and this required only well-isolated changes to the rewriting rules. So this all sounds nice in theory. In practice, an alternative encoding would require at least six months of dedicated work. If somebody is looking for an internship on this topic at Informal Systems [3] or wants to pursue it as part of their research, please let me know. It is much easier to extend Apalache than to write a new model checker for TLA+ from scratch.
PS: Alex, thanks for the kind words about Apalache :)
[1] TLA+ model checking made symbolic: https://dl.acm.org/doi/10.1145/3360549
[2] https://github.com/muhos/ParaFROST
[3] https://informal.systems/
Best regards,
Igor
> On 06.02.2022, at 21:57, Jones Martins <jonesmvc@xxxxxxxxx> wrote:
>
> Hi, Alex and Steve
>
> I had found articles on BFS on GPUs before, and the ones Alex referenced are also impressive.
> Of course, if there's already a "very good" (as opposed to optimal) alternative to simple multicore verification, that's fine by me.
>
> Jones
>
>
> On Sun, 6 Feb 2022 at 15:35, Alex Weisberger <alex.m.weisberger@xxxxxxxxx> wrote:
> You can find the details of how TLC model checking works in Section 14.3.1 of Specifying Systems: https://lamport.azurewebsites.net/tla/book-02-08-08.pdf.
>
> I think the challenge is that TLC effectively "brute-forces" the state-space exploration by performing a graph search. GPU instructions are optimized for regular, data-parallel arithmetic, so a traditional worklist-style graph search, with its irregular memory accesses, doesn't translate neatly to them.
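>
> Roughly, the core loop is the following (a simplified sketch in Python; the real TLC adds state fingerprinting, a disk-backed queue, trace reconstruction, and worker threads):
>
> from collections import deque
>
> def check(init_states, next_states, invariant):
>     seen = set(init_states)
>     frontier = deque(init_states)          # BFS queue of unexplored states
>     while frontier:
>         s = frontier.popleft()
>         if not invariant(s):
>             return s                       # a reachable bad state
>         for t in next_states(s):           # successors under the Next relation
>             if t not in seen:
>                 seen.add(t)
>                 frontier.append(t)
>     return None                            # the invariant holds
>
> The irregular parts are the `seen` lookups and the one-state-at-a-time pointer chasing, which is what makes a naive GPU port unattractive.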
>
> There may be a way to rephrase these algorithms, though, to make them amenable to vector processing. I found a couple of papers/presentations on the topic; a sketch of the frontier-based reformulation they use follows the links:
>
> https://research.nvidia.com/sites/default/files/pubs/2011-08_High-Performance-and/BFS%20TR.pdf
> https://on-demand.gputechconf.com/gtc/2012/presentations/S0600-Scalable-GPU-Graph-Traversal.pdf
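>
> Both take a level-synchronous view of BFS: instead of popping one state at a time, expand the whole frontier in bulk. Here is a rough sketch of that reformulation (same toy interface as above; my own simplification of what the papers describe):
>
> def check_level_sync(init_states, next_states, invariant):
>     seen = set(init_states)
>     frontier = set(init_states)
>     while frontier:
>         for s in frontier:                 # on a GPU: a data-parallel map
>             if not invariant(s):
>                 return s
>         # expanding every frontier state at once is also a flat parallel map
>         successors = {t for s in frontier for t in next_states(s)}
>         frontier = successors - seen       # deduplication, e.g. by hashing
>         seen |= frontier
>     return None
>
> Each iteration is a map over an array of states plus a set difference, which is much closer to the regular, bulk work GPUs are good at.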
>
> Also, if you are just interested in performance and aren't tied to the GPU as the way to get it, there is the Apalache model checker as well: https://apalache.informal.systems/. It converts model checking into an SMT constraint problem, which SMT solvers such as Z3 can often solve very quickly. I haven't used it, but I have always thought this project looks very interesting.
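>
> To give a flavor of the constraint-based approach, here is a toy bounded check I sketched with Z3's Python API (it mirrors the general idea of unrolling a transition relation as constraints, not Apalache's actual encoding): a single counter that either increments or resets, with the invariant x <= 5 checked up to 10 steps.
>
> from z3 import Int, Or, Solver, sat
>
> K = 10                                     # unrolling bound
> x = [Int(f"x_{i}") for i in range(K + 1)]  # one copy of x per step
>
> s = Solver()
> s.add(x[0] == 0)                           # Init: x = 0
> for i in range(K):                         # Next: x' = x + 1 \/ x' = 0
>     s.add(Or(x[i + 1] == x[i] + 1, x[i + 1] == 0))
> s.add(Or([xi > 5 for xi in x]))            # negation of the invariant
>
> if s.check() == sat:                       # Z3 searches all 10-step traces at once
>     print("counterexample:", [s.model()[xi] for xi in x])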
>
>
> On Sun, Feb 6, 2022 at 1:07 PM <sglaser@xxxxxxxxxxx> wrote:
> Let me know if I can help with this.
>
> I don’t know enough about the TLA+ internals to know what makes sense here.
>
> I’m an occasional TLA+ user and work for Nvidia.
>
> Steve Glaser
> sglaser@xxxxxxxxxx
> On Feb 6, 2022, 8:46 AM -0800, Jones Martins <jonesmvc@xxxxxxxxx>, wrote:
>> Hi everyone,
>>
>> Would a GPU model checker be useful for large models in TLA⁺? GPUs are much faster than multicore CPU systems in certain scenarios; would model checking be one of them?
>> I'm assuming that, in order to model-check, one would need to bring the parser to the GPU as well, so that reading from RAM doesn't become a bottleneck.
>>
>> Best,
>>
>> Jones