On Tuesday, 15 January 2019 11:37:25 UTC-6, Markus Alexander Kuppe wrote:
On 15.01.19 08:38, Hillel Wayne wrote:
> I'm currently teaching a class of people how to use TLA+. One thing we
> ran into is that TLC has poor performance on some machines. Running
>
> VARIABLE x
>
> Init == x = 1
>
> Next == UNCHANGED x
>
> Spec == Init /\ [][Next]_x
>
>
> will take either 0 seconds or around 10 seconds across computers. Times
> are consistent: if a computer takes 0 seconds, it takes 0 seconds on
> every run. I had everybody send me their TLC logs, and the only obvious
> difference I noticed is that the fast machines all had 12 cores, while
> the slow machines all had 8 cores. However, reducing the number of
> workers to 1 does not degrade performance on the 12-core machines, so
> I don't think that's it, and I've hit a dead end.
>
Hi Hillel,
can you check if you are affected by issue #227 [1]?
Thanks
Markus

[1] https://github.com/tlaplus/tlaplus/issues/227

I just checked with every computer: 1) the slowdowns were directly proportional to the machine's memory, and 2) reducing the memory allocation eliminated the slowdown. So it's confirmed that in all cases it was issue #227. Thank you!
H
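
For anyone who hits the same thing: "reducing the memory allocation" can also be done when running TLC from the command line, not just from the Toolbox. The following is only a sketch; the heap size, the -fpmem fraction, the module name Spec (with a matching Spec.cfg), and the use of tla2tools.jar are illustrative assumptions, not values taken from this thread:

  # Cap the JVM heap at 1 GB and tell TLC to reserve only 10% of
  # physical memory for its fingerprint set, using a single worker.
  java -Xmx1g -cp tla2tools.jar tlc2.TLC -workers 1 -fpmem 0.1 Spec.tla

On machines with a lot of RAM, lowering these limits is what eliminated the startup delay described above.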