Re: [tlaplus] Scaling up model checking
On 27.08.19 10:13, dunlapg@xxxxxxxxx wrote:
> Thanks for the quick response. The default command looks something
> like this:
>
> java -cp $(TLATOOLS_JAR) -XX:+UseParallelGC -Xmx24G tlc2.TLC
> -config FOO.cfg -workers 8 -difftrace FOO.tla | tee foo.out
>
> On the bigger machines I started with `workers` at 48 and 64 (on the
> 48-thread and the 64-thread machine respectively), then tried 24, 16,
> and 8 for both. Ultimately, 8 got the highest throughput.
>
Most importantly, you should switch to the concurrency-optimized
fingerprint set implementation (OffHeapDiskFPSet). Check the Toolbox's
.log file, located in ~/.tlaplus/.metadata/, for "TLC COMMAND-LINE",
which shows the optimizations the Toolbox applies (you have to run TLC
from the Toolbox at least once first). Additionally, you can take the
OS-level optimizations from [3].
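For illustration only (double-check the property name and values against
the logged "TLC COMMAND-LINE"; the sizes below are guesses, not
recommendations), the switch boils down to selecting the off-heap
fingerprint set via a JVM system property and giving it direct memory
instead of heap:

  grep "TLC COMMAND-LINE" ~/.tlaplus/.metadata/.log

  java -cp $(TLATOOLS_JAR) -XX:+UseParallelGC \
    -Xmx8G -XX:MaxDirectMemorySize=24G \
    -Dtlc2.tool.fp.FPSet.impl=tlc2.tool.fp.OffHeapDiskFPSet \
    tlc2.TLC -config FOO.cfg -workers 8 -difftrace FOO.tla | tee foo.out

OffHeapDiskFPSet stores fingerprints outside the Java heap, so most of
the machine's memory should go to -XX:MaxDirectMemorySize rather than
-Xmx.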
Generally, you will probably want to watch "Large Scale Model Checking
101" [1]. However, it pre-dates the TLC profiler, which you should use
on a smaller model to find performance bottlenecks such as
inefficient expressions.
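If you want to run the profiler outside the Toolbox, its statistics are
(if I remember correctly) exposed via TLC's -coverage option. A sketch,
using a hypothetical scaled-down spec/config pair SmallFOO.tla/SmallFOO.cfg:

  java -cp $(TLATOOLS_JAR) tlc2.TLC -config SmallFOO.cfg \
    -workers 1 -coverage 1 SmallFOO.tla

This prints evaluation statistics every minute, which usually points
directly at the expensive expressions.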
Lastly, a concurrency-optimized implementation of TLC's state queue is
currently work in progress [2].
Hope this helps,
[1] https://vimeo.com/264959035
[2] https://github.com/lemmy/PageQueue/
[3] https://github.com/tlaplus/tlaplus/blob/525f664f82afa67de9b0b2af0419bae02f35e738/org.lamport.tla.toolbox.jclouds/src/org/lamport/tla/toolbox/jcloud/CloudDistributedTLCJob.java#L559-L562