https://github.com/tlaplus/tlaplus/issues/413 describes the problem (and its solution) related to the "eXtreme Modeling in Practice" paper [1]. In a nutshell, the bottleneck wasn't parsing but the semantic processing of TLA+ records spanning some 60k LOC. The Farsite spec [2] should be a more holistic benchmark.

Markus