Re: [tlaplus] TLC on the GPU



I'm an occasional TLA+ user and work for Nvidia. I don't know enough about the TLA+ internals to know what makes sense here, but let me know if I can help with this.

Steve Glaser
sglaser@xxxxxxxxxx
On Feb 6, 2022, 8:46 AM -0800, Jones Martins <jonesmvc@xxxxxxxxx> wrote:
Hi everyone,

Would a GPU model-checker be useful for large models in TLA+? Compared to multicore CPU systems, GPUs are much faster for certain workloads; would model-checking be one of them?
I'm assuming that, in order to model-check on the GPU, one would need to bring the parser to the GPU as well, so that reading from RAM doesn't become a bottleneck.
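
To make the idea concrete, here is a rough, hypothetical CUDA sketch (not TLC code; the toy counter "spec", the MOD/BOUND constants, and the expand kernel are all invented for illustration). It evaluates a next-state relation and checks an invariant for a whole batch of states in parallel, one thread per state:

// Hypothetical sketch (not TLC): the core idea behind GPU-assisted model
// checking is evaluating the next-state relation and invariants for many
// states at once. The toy "spec" here is a counter with
//   Next      == x' = (x + 1) % MOD
//   Invariant == x < BOUND
#include <cstdio>
#include <cuda_runtime.h>

#define MOD    1000
#define BOUND  1000

// One thread per pending state: check the invariant and emit the successor.
__global__ void expand(const int *frontier, int n, int *successors, int *violation)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    int s = frontier[i];
    if (s >= BOUND)                 // invariant check
        atomicExch(violation, s);   // record one violating state
    successors[i] = (s + 1) % MOD;  // next-state relation
}

int main()
{
    const int n = 1 << 20;          // a batch of ~1M pending states
    int *frontier, *successors, *violation;
    cudaMallocManaged(&frontier, n * sizeof(int));
    cudaMallocManaged(&successors, n * sizeof(int));
    cudaMallocManaged(&violation, sizeof(int));
    *violation = -1;
    for (int i = 0; i < n; i++) frontier[i] = i % MOD;  // fake initial frontier

    expand<<<(n + 255) / 256, 256>>>(frontier, n, successors, violation);
    cudaDeviceSynchronize();

    printf("violation: %d\n", *violation);  // -1 means the invariant held
    cudaFree(frontier); cudaFree(successors); cudaFree(violation);
    return 0;
}

Even in this toy form, the hard parts of a real checker would presumably be hashing and deduplicating successor states and keeping the frontier on the device, so that host-device transfers (rather than parsing) don't become the bottleneck.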

Best,

Jones
