
[tlaplus] Re: TLC minimize file I/O operations

Hi Antonios,

I'm “reviving” this thread because I ran into the same question about whether TLC's model checking is CPU-bound or I/O-bound.

I'm running TLC on a computer with over 55 GB of free RAM (64 GB in total). After about 12 hours of model checking (and still going), it generated 37 GB of fingerprint files.

Here's the full command for reference (tla2tools rev: 3ea3222):

    java \
        -XX:+UseParallelGC \
        -Dtlc2.tool.fp.FPSet.impl=tlc2.tool.fp.OffHeapDiskFPSet \
        -Dtlc2.tool.ModelChecker.BAQueue=true \
        -cp tla2tools.jar tlc2.TLC \
        -cleanup \
        -metadir .states \
        -workers auto \
        -fpmem 0.9 \
        -fpbits 2 \
        -lncheck final \
        -modelcheck {tla_file} \
        -config {config_file}

I increased `-fpmem` to `0.9` to allow TLC to use 90% of available RAM for the fingerprint set, and set `-fpbits` to 2 so that TLC partitions the fingerprints into 2^2 = 4 sets (one per core on this machine), hoping to decrease read/write contention. Maybe my reasoning is completely off.
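If I've read the option correctly, `-fpbits N` splits the fingerprint space on the top N bits of each 64-bit fingerprint, so the number of fingerprint sets (and their backing files) should be 2^N rather than N. A quick sanity check:

```shell
# Number of fingerprint partitions created by -fpbits N is 2^N,
# so -fpbits 2 gives 4, matching the four files I see on disk.
fpbits=2
echo $((1 << fpbits))   # prints 4
```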

By the way, I did try increasing the JVM's memory limits with `-Xms`, `-Xmx`, and `-XX:MaxDirectMemorySize` while running a smaller model, and it didn't help. I'm not very familiar with Java's memory handling, so maybe that's why.
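For reference, here is how I'd combine those flags; the values are only illustrative for a 64 GB machine. My understanding is that `OffHeapDiskFPSet` keeps its fingerprint table outside the Java heap, so `-XX:MaxDirectMemorySize` is reportedly the cap that matters for it, while `-Xmx` bounds only the heap (state queue and tool objects). Please correct me if that's wrong:

```shell
# Illustrative sizes for a 64 GB box: a modest heap, most RAM off-heap
# for the fingerprint set. {tla_file} and {config_file} as before.
java \
    -Xmx8g \
    -XX:MaxDirectMemorySize=48g \
    -XX:+UseParallelGC \
    -Dtlc2.tool.fp.FPSet.impl=tlc2.tool.fp.OffHeapDiskFPSet \
    -cp tla2tools.jar tlc2.TLC \
    -workers auto \
    -fpmem 0.9 \
    -metadir .states \
    -modelcheck {tla_file} \
    -config {config_file}
```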

On Tuesday 13 December 2022 at 15:21:39 UTC-3 agoug...@xxxxxxxxx wrote:
Dear all,

Assuming the host system has enough memory and can hold all states in RAM, is there a way to configure TLC to minimise file I/O operations, or to not write to disk at all?

Best regards,
