I decided to model check the extremely contrived example in my previous message and compare the state space size and model check time against no log compression.
In case anyone’s interested, here are the results:
[The expression that represents the state constraint was shown inline and is not reproduced here.]
In this very contrived example, compression removed, on average, 35.3% of the state space, but it took, on average, 3.13 times longer to model check.
Removing the log from the view and model checking took less than a second, which is over 114x faster than including the log (maybe 1140x faster?).
(I didn’t test adding snapshots, though… I guess I’d need a different contrived example for that, hahah)
Jones

On Saturday, 21 October 2023 at 10:41:13 UTC-3 Andrew Helwer wrote:

Actually this comment inspired me to think of another possible solution! Which also matches how most of these systems are implemented. We want two things:
- Avoiding factorial state space blowup due to path dependence
- Finite non-restricted state space, so liveness checking is possible

A naive implementation of a replicated log has infinite state space, or at least requires state restrictions that rule out liveness checking, since replicated logs grow indefinitely. But usually replicated logs are implemented in the real world by a combination of logs and snapshots. So every N additions to the log, a snapshot is taken of the outward system state. This means when a new node comes online it can rehydrate from the latest snapshot, and then replay the last < N log elements to catch up to the frontier; much faster than replaying every transaction from the beginning of time!

This suggests a possible spec design: if the replicated log reaches size N, instead of adding a state restriction, the outward system state gets dumped to a snapshot variable and the log is emptied. This is the same as if the Init formula specifies an empty log with a snapshot variable as an arbitrary valid system state. Thus the state space becomes finite and, with a small enough N, the path dependence becomes less of an issue.

I would say the main drawback of this design is that it limits how much different nodes can lag in terms of what log elements they know about. Often this is where a lot of interesting behavior manifests itself, when some node knows about log element N but another node has only reached N - 5 or something. However, perhaps the rule of three will save us: if this interesting behavior doesn't manifest with a log of size 3 (or at least a fairly small log size), it is unlikely to manifest in a larger log size. Maybe. Depending on the system.

Using VIEW to exclude the log seems possible but I worry it adds nondeterminism to the model check process. In the A/B vs B/A example in my original post, only one of those paths (whichever is encountered first, which is nondeterministic) would be added to the next state queue with log intact; the other would collide with its state hash and be discarded.
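Concretely, excluding the log with a view might look something like this sketch (variable names are made up):

```
VARIABLES log, outwardState    \* hypothetical names

\* Hash states on everything except the log, so two states that differ
\* only in the order of log entries collide and are explored once.
LogFreeView == outwardState

\* In the TLC configuration file, point TLC at it with:
\*   VIEW LogFreeView
```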
So when that next state is evaluated, if the spec logic somehow depends on the history of the log, then there will be some model check executions where perhaps some states are not explored. I can't think of a real example off the top of my head but it seems possible.

Andrew

On Friday, October 20, 2023 at 4:54:58 PM UTC-4 Jones Martins wrote:
I have no experience modeling append-only logs, but this scaling problem comes up when adding history variables to a spec, right?
Even though you’re modeling an append-only log, the solution for scalability, while retaining some log in the state space, might be to remove from the log any event sequences that leave the state unchanged. One could achieve this “solution” by optimizing the state space inside the spec itself.
Since we’re dealing with a sequence of events, symmetry sets wouldn’t really help, but memoization and data compression might (?).
Here’s the “solution” (in many quotes) from a spec’s point of view:
Some spec has a variable that represents the append-only log. Every action that updates it would check whether there’s an equivalent, but smaller, event sequence, thus compressing the log and the state space. This optimization would reduce the state space and overall memory consumption but, even in parallel (thanks to TLC), might add substantial processing cost. It’s not a lossless compression, either.
Let’s say we had four states. We know their logs are all equivalent, and one of those logs is shorter than the other three.
A spec that optimizes for log size would only have one state: the one with the shortest log.
A spec that applies a view would also have only one state, but with complete loss of the log’s information.
One way to achieve this compression is by checking equivalent suffixes.
Let’s say some spec has two variables: a log and an integer. Possible events are:
- Event "+1" adds 1 to the integer;
- Event "-1" subtracts 1 from the integer;
- Event "/2" divides the integer by two.
We know a few suffixes leave the system as if nothing had happened before (except time passing): for example, "+1" immediately followed by "-1", or "-1" immediately followed by "+1".
Suppose we’re in a state where the log is <<"-1">>. Event "+1" happens, so we add it to the log: <<"-1", "+1">>. But this suffix is marked as redundant: the log would be transformed into the empty sequence << >>.
The community modules contain an operator for this sort of sequence manipulation. Every action that changes the log would then apply it after appending a new event.
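As a rough sketch of the idea (hand-rolled rather than using the community operator, and assuming events are the strings "+1" and "-1"; all names are made up):

```
EXTENDS Naturals, Sequences

\* Two adjacent events cancel if one undoes the other (hypothetical rule).
Cancels(a, b) == {a, b} = {"+1", "-1"}

\* Repeatedly delete any adjacent cancelling pair until none remain.
RECURSIVE Compress(_)
Compress(s) ==
  IF \E i \in 1..(Len(s) - 1) : Cancels(s[i], s[i + 1])
  THEN LET j == CHOOSE i \in 1..(Len(s) - 1) : Cancels(s[i], s[i + 1])
       IN Compress(SubSeq(s, 1, j - 1) \o SubSeq(s, j + 2, Len(s)))
  ELSE s

\* An action that appends to the log would then compress in the same step:
Increment == /\ x' = x + 1
             /\ log' = Compress(Append(log, "+1"))
```

With something like this, a behavior that does "+1" then "-1" ends up with the same log as one that did nothing, so those states collide without needing a view.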
By compressing, the state space would contain a subset of possible logs, which might be a problem…
I haven’t tested any of this, by the way, but I’m curious to know if there are any other solutions.
Jones

On Thursday, 19 October 2023 at 10:42:42 UTC-3 Andrew Helwer wrote:

Since TLA+ sees use in distributed systems, it often has to be used to model append-only logs. However, logs of this sort place a path dependence on states; if event A happens before event B, and it leads to the same outward system state as if event B happened before event A, TLC will nonetheless treat these as two separate states because their order was recorded in the log. This means the state tree scales factorially in size and greatly limits the viability of modeling systems beyond two, perhaps three, nodes and a small depth.

I've run into this problem a few times and have never come up with a decent way of handling it. Perhaps it's possible to write your spec so the append-only log is analogous to the clock in real-time specs (ignored & excluded from the model checker view). What strategies have others developed?

Andrew