
[tlaplus] Re: Strategies for scalable modeling of append-only logs



(My previous reply was completely broken by Markdown to HTML conversion. Lesson learned. I'm resending this as a fix.)

I decided to model check the extremely contrived example from my previous message and compare the state-space size and model-checking time with and without log compression.

In case anyone’s interested, here are the results:

No log compression:

Len(log) < ? represents the state constraint.

| Len(log) < ? | Distinct States | Time (4 workers & 10 runs) |
|--------------|-----------------|----------------------------|
|           11 |          19,233 |          1.305 s ± 0.222 s |
|           12 |          48,526 |          1.793 s ± 0.319 s |
|           13 |         121,217 |          2.516 s ± 0.382 s |
|           14 |         305,152 |          2.830 s ± 0.658 s |
|           15 |         763,353 |          4.365 s ± 0.772 s |
|           16 |       1,918,966 |          7.745 s ± 0.664 s |
|           17 |       4,804,797 |         17.935 s ± 1.278 s |
|           18 |      12,067,896 |         44.357 s ± 3.356 s |
|           19 |      30,233,757 |        114.005 s ± 7.446 s |

Compressing the log by removing the longest “redundant” log suffix:

| Len(log) < ? | Distinct States | Time (4 workers & 10 runs)   |
|--------------|-----------------|------------------------------|
|           11 |          16,319 |            2.877 s ± 0.425 s |
|           12 |          38,356 |            3.571 s ± 0.571 s |
|           13 |          89,795 |            4.971 s ± 0.837 s |
|           14 |         209,188 |            9.074 s ± 1.174 s |
|           15 |         487,065 |           17.080 s ± 0.918 s |
|           16 |       1,132,250 |           35.743 s ± 1.882 s |
|           17 |       2,633,623 |          63.383 s ± 32.838 s |
|           18 |       6,122,816 |     144 s (4 workers & 1 run) |
|           19 |      14,234,901 | 6 m 46 s (4 workers & 1 run) |

Differences by compressing:

| Len(log) < ? | State space difference* | Time difference** |
|--------------|-------------------------|-------------------|
|           11 |           15.2% smaller |     2.205x slower |
|           12 |           21.0% smaller |     1.992x slower |
|           13 |           25.9% smaller |     1.976x slower |
|           14 |           31.4% smaller |     3.206x slower |
|           15 |           36.2% smaller |     3.913x slower |
|           16 |           41.0% smaller |     4.615x slower |
|           17 |           45.2% smaller |     3.535x slower |
|           18 |           49.3% smaller |     3.246x slower |
|           19 |           52.9% smaller |     3.561x slower |

*: 1 - (states_compressed/states_not_compressed)
**: time_compressed_avg/time_not_compressed_avg

In this very contrived example, compression removed, on average, 35.3% of the state space, but model checking took, on average, 3.13 times longer.

Removing log from the view and model checking with Constraint == Len(log) < 19 took less than a second, which is over 114x faster than including the log (maybe over 1,140x faster?).
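
For reference, that setup looks roughly like this (a minimal sketch; it assumes x is the only other variable in this contrived spec, and View/Constraint are just the operator names I used):

    \* Minimal sketch; assumes the contrived spec declares only the variables x and log.
    Constraint == Len(log) < 19
    View == <<x>>   \* log is omitted, so TLC fingerprints states by x alone

    \* In the TLC configuration file:
    \*   CONSTRAINT Constraint
    \*   VIEW View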

(I didn’t test adding snapshots, though… I guess I’d need a different contrived example for that, hahah)

Best,
Jones




On Saturday, 21 October 2023 at 10:41:13 UTC-3 Andrew Helwer wrote:
Actually, this comment inspired me to think of another possible solution, which also matches how most of these systems are implemented! We want two things:
  1. Avoid factorial state space blowup due to path dependence
  2. Keep the state space finite and unrestricted, so liveness checking is possible
A naive implementation of a replicated log has infinite state space, or at least requires state restrictions that rule out liveness checking since replicated logs grow indefinitely. But usually replicated logs are implemented in the real world by a combination of logs and snapshots. So every N additions to the log, a snapshot is taken of the outward system state. This means when a new node comes online it can rehydrate from the latest snapshot, and then replay the last < N log elements to catch up to the frontier; much faster than replaying every transaction from the beginning of time!

This suggests a possible spec design: if the replicated log reaches size N, instead of adding a state restriction, the outward system state gets dumped to a snapshot variable and the log is emptied. This is the same as if the Init formula specifies an empty log with a snapshot variable as an arbitrary valid system state. Thus the state space becomes finite and, with a small enough N, the path dependence becomes less of an issue.
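
A rough sketch of what that action might look like (MaxLogLen, sysState, snapshot, and log are hypothetical names here, not from any particular spec):

    \* Rough sketch with hypothetical names; assumes EXTENDS Sequences.
    CONSTANT MaxLogLen

    TakeSnapshot ==
        /\ Len(log) = MaxLogLen
        /\ snapshot' = sysState    \* dump the outward system state
        /\ log' = <<>>             \* empty the log
        /\ UNCHANGED sysState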

I would say the main drawback of this design is that it limits how much different nodes can lag in terms of what log elements they know about. Often this is where a lot of interesting behavior manifests itself, when some node knows about log element N but another node has only reached N - 5 or something. However, perhaps the rule of three will save us: if this interesting behavior doesn't manifest with a log of size 3 (or at least a fairly small log size), it is unlikely to manifest in a larger log size. Maybe. Depending on the system.

Using VIEW to exclude the log seems possible, but I worry it adds nondeterminism to the model checking process. In the A/B vs B/A example in my original post, only one of those paths (whichever is encountered first, which is nondeterministic) would be added to the next-state queue with its log intact; the other would collide with that state's fingerprint and be discarded. So when that next state is evaluated, if the spec logic somehow depends on the history of the log, then there will be some model checking runs where perhaps some states are not explored. I can't think of a real example off the top of my head, but it seems possible.

Andrew

On Friday, October 20, 2023 at 4:54:58 PM UTC-4 Jones Martins wrote:

Hi, Andrew

I have no experience modeling append-only logs, but this scaling problem comes up when adding history variables to a spec, right?

Even though you’re modeling an append-only log, the solution for scalability, while retaining some log in the state space, might be to remove from the log event sequences that leave the state unchanged. One could achieve this “solution” by optimizing the state space inside the spec itself.

Since we’re dealing with a sequence of events, symmetry sets wouldn’t really help, but memoization and data compression might (?).

The “solution” (in heavy quotes), from within a spec:

Some spec has a log variable that represents the append-only log. Every action that updates log would check whether there’s an equivalent but smaller event sequence, thus compressing the log and the state space. This optimization would reduce the state space and overall memory consumption, but, even running in parallel (thanks to TLC), it might add substantial processing cost. It’s not lossless compression, either.

Let’s say we had states <<s, log1>>, <<s, log2>>, <<s, log3>>. We know log1, log2 and log3 are equivalent to some log log4, and log4 is shorter than the other 3.
A spec that optimizes for log size would only have one state: <<s, log4>>.
A spec that applies a view <<s>> would also have only one state <<s>>, but with complete information loss.

One way to achieve this compression is by checking equivalent suffixes.
Let’s say some spec has variables log and x (an integer). Possible events are {+1, -1, div2}:

  • Event +1 adds 1 to x;
  • Event -1 subtracts 1 from x;
  • Event div2 divides x by two;

We know a few suffixes leave the system as if nothing had happened before (except time passing): { <<+1, -1>>, <<-1, +1>>, <<+1, +1, div2, -1>> }.

Suppose we’re in a state where log = <<+1>>. Event -1 happens, so we add it to the log: <<+1, -1>>. But this suffix is marked as redundant: the log would be transformed into <<>>.
Suppose we’re in a state where log = <<-1, -1, -1>>. Event +1 happens, so we add it to the log: <<-1, -1, -1, +1>>. But this suffix is marked as redundant: the log would be transformed into <<-1, -1>>.

The community module SequencesExt contains an IsSuffix(_, _) operator. Every action that changes the log would be written like AddOne below:

\* Assumes EXTENDS Naturals, Sequences and the community module SequencesExt (for IsSuffix).
EquivalentSuffixes == { <<"+1", "-1">>, <<"-1", "+1">>, <<"+1", "+1", "div2", "-1">> }

AppendToLog(lg, event) ==
    LET newLog == lg \o <<event>>
    IN  IF \E s \in EquivalentSuffixes : IsSuffix(s, newLog)
        THEN LET suffix == CHOOSE s \in EquivalentSuffixes : IsSuffix(s, newLog)
             IN  SubSeq(newLog, 1, Len(newLog) - Len(suffix))  \* drop the redundant suffix
        ELSE newLog

AddOne ==
    /\ x' = x + 1
    /\ log' = AppendToLog(log, "+1")

By compressing log, the state space would contain a subset of possible logs, which might be a problem…

I haven’t tested any of this, by the way, but I’m curious to know if there are any other solutions.

Best,
Jones

On Thursday, 19 October 2023 at 10:42:42 UTC-3 Andrew Helwer wrote:
Since TLA+ sees use in distributed systems, it often has to be used to model append-only logs. However, logs of this sort place a path dependence on states; if event A happens before event B, and it leads to the same outward system state as if event B happened before event A, TLC will nonetheless have these as two separate states because their order was recorded in the log. This means the state tree scales in size factorially and greatly limits the viability of modeling systems beyond two, perhaps three nodes and a small depth.
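
As a toy illustration (all names made up): suppose the outward state is just a set of delivered messages, so order doesn't matter, but the log records it anyway:

    \* Toy illustration with made-up names; assumes EXTENDS Sequences.
    VARIABLES delivered, log

    Init == delivered = {} /\ log = <<>>

    Deliver(m) ==
        /\ m \notin delivered
        /\ delivered' = delivered \cup {m}
        /\ log' = Append(log, m)

    Next == \E m \in {"A", "B"} : Deliver(m)

    \* Delivering A then B or B then A ends with delivered = {"A", "B"} either way,
    \* but the log is <<"A", "B">> in one case and <<"B", "A">> in the other, so TLC
    \* keeps them as two distinct states.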

I've run into this problem a few times and have never come up with a decent way of handling it. Perhaps it's possible to write your spec so the append-only log is analogous to the clock in real-time specs (ignored & excluded from the model checker view). What strategies have others developed?

Andrew
