When one of my models took what I thought was an inordinate amount of time to model-check, I decided to profile the TLC run (with a good, honest profiler), and the result was surprising (to me, at least):
Over 99.5% of the time is spent creating permutation states (I obtained similar results when profiling the initial-state computation, doInit). My model has three symmetry sets, two of size 3 and one of size 6, so while I gain a ~50x reduction in the number of states TLC has to visit, the implementation takes (well) over 200x of that back, in particular in the various implementations of permute() in Value's subclasses.
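To put rough numbers on why permute() dominates (assuming TLC applies the full product of per-set permutations when fingerprinting a state, which is my reading of the code), symmetry sets of sizes 3, 3, and 6 mean up to 3! * 3! * 6! permuted states per generated state:

```java
public class PermCount {
    static long factorial(int n) {
        long f = 1;
        for (int i = 2; i <= n; i++) f *= i;
        return f;
    }

    public static void main(String[] args) {
        // Symmetry sets of sizes 3, 3, and 6: every combination of
        // per-set permutations may be applied, i.e. 3! * 3! * 6!.
        long perms = factorial(3) * factorial(3) * factorial(6);
        System.out.println(perms); // 6 * 6 * 720 = 25920
    }
}
```

So each state can cost tens of thousands of permute() calls, which would easily dwarf a ~50x reduction in the state count.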
Getting rid of all symmetries increased performance (very) significantly and yielded what seems to be a healthier profile:
Later, I found a sweet spot: a single symmetry set of size 3, which seemed to yield the best performance.
Is there a way to improve the implementation(s) of permute()? A quick glance at the code shows what may be far too many allocations (in tuples and records) due to what seems like an overly eager allocation policy (e.g., arrays are allocated before even a single permuted element is found). But since all that's required of the permutations is a fingerprint, perhaps a more substantial optimization is possible.
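A minimal sketch of the lazier allocation policy I have in mind (this is not TLC's actual permute() signature; it operates on a plain array rather than Value subclasses, and the names are mine): defer the copy until the first element actually changes under the permutation, and return the original array unchanged when nothing does.

```java
import java.util.function.UnaryOperator;

public class LazyPermute {
    /**
     * Copy-on-write permutation over an array: the result array is
     * allocated only once the first element actually changes under the
     * permutation; otherwise the original array is returned as-is.
     * (Illustrative sketch only, not TLC's real implementation.)
     */
    static <T> T[] permuteLazily(T[] elems, UnaryOperator<T> perm) {
        T[] result = elems;                 // share until a change is seen
        for (int i = 0; i < elems.length; i++) {
            T permuted = perm.apply(elems[i]);
            if (permuted != elems[i]) {
                if (result == elems) {
                    result = elems.clone(); // first change: copy now
                }
                result[i] = permuted;
            }
        }
        return result;
    }

    public static void main(String[] args) {
        Integer[] xs = {1, 2, 3};
        // A permutation that fixes every element: no allocation at all.
        System.out.println(permuteLazily(xs, x -> x) == xs);          // true
        // A permutation mapping 2 -> 5 forces exactly one copy.
        Integer[] changed = permuteLazily(xs, x -> x == 2 ? 5 : x);
        System.out.println(changed == xs);                            // false
        System.out.println(changed[1]);                               // 5
    }
}
```

Since many permutations fix most (or all) elements of a given tuple or record, the identity-preserving fast path should pay off often; and if only a fingerprint is needed, one could conceivably go further and fold the permuted elements into the fingerprint directly without materializing the permuted value at all.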