>> ... serializability is an inherently weird spec--as evidenced by the fact
>> that an implementation that can predict the future can return a value for
>> a read that hasn't yet been written.
"I now regard [specifying database serializability] as a bad problem. The usual notion of serializability, which requires only that the values returned by a transaction be consistent with some serial execution of the transactions, is too weak. For example, it permits an implementation to throw away all values written by any completed transaction that only writes and does not read. The resulting execution is serializable, producing the same results as one in which all such write-only transactions are done last. A more realistic notion of serializability requires that if one transaction commits before another begins, then the first transaction appears before the second in the serialization order. ...
"The more realistic form of serializability can be specified assertionally without using a variable to record the entire execution history. "
For the specification of database serializability that I wrote (and to which Ernie is referring in his comments on my paper "Why Amazon Chose TLA+"), I did define serializability as a property of a history variable that recorded the entire execution history. I don't know how I'd have done it without the history variable, as:
1. The specification was actually for serializability of transactions over multi-version objects. (Multi-version concurrency control is now very common in transaction systems.) Serializability of multi-version objects is more complicated than serializability of non-versioned objects.
2. The property is fairly subtle to express. To get it right, I took the formulation from a textbook (actually 2 different sources; I verified their equivalence). The formulations in the textbooks are stated in terms of the execution history.
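The mechanics of such a history variable are simple even though the property stated over it is subtle. A minimal sketch (not the actual spec discussed here; all names are illustrative):

```tla
---- MODULE SerializabilityHistory ----
\* Sketch of recording an execution history in TLA+: every committed
\* operation is appended to hist, and serializability is then stated
\* as a predicate on hist.  Names (hist, Read, Write) are made up.
EXTENDS Sequences

VARIABLE hist   \* the recorded execution history, a sequence of records

Init == hist = << >>

Read(t, obj, v)  ==
    hist' = Append(hist, [op |-> "R", txn |-> t, obj |-> obj, val |-> v])

Write(t, obj, v) ==
    hist' = Append(hist, [op |-> "W", txn |-> t, obj |-> obj, val |-> v])

\* A definition Serializable(hist) would then assert the existence of
\* a total order on committed transactions satisfying the textbook
\* conditions -- the subtle part referred to above.
====
```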
>> you have to be able to specify one system by a more abstract system
I would very much like to be able to do this. However, no industry spec that I'm aware of has been written as an abstraction/refinement. In all the industry specs that I know, correctness has been expressed as a bunch of properties. Even Pamela Zave's work on finding flaws in Chord  follows the same pattern. I'd like to do better, but I think much more education is needed in this area.
I did once try a small experiment with abstraction/refinement of an industry spec, just to try out the basic mechanics of expressing the relationship in TLA+ (module instantiation with substitution) and using TLC to check refinement. I found the approach to be elegant and enticing, but my attempt was (knowingly) a worthless cheat, as my refinement mapping used an auxiliary variable. I couldn't find the real refinement mapping in the short amount of time I tried, because I didn't understand the algorithm well enough. The algorithm was a rather subtle distributed algorithm. It was also designed by another engineer, not me. However, even for the algorithms that I've designed, I'm not sure that I'd be able to find refinement mappings in a reasonable amount of work. (I can't find inductive invariants for my algorithms in a reasonable amount of work, and the two seem to be related.) Also, from my current position of ignorance it seems like some creativity may be required in designing the 'right' abstract system, in order to make it easier to show refinement.
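The basic mechanics mentioned here (module instantiation with substitution) look roughly like the following sketch, where AbstractQueue is assumed to be a module with a variable q, and lo \o hi stands in for a candidate refinement mapping over the implementation's variables -- all names here are hypothetical:

```tla
---- MODULE ImplRefinesAbstract ----
\* Sketch of expressing refinement in TLA+ via INSTANCE ... WITH.
EXTENDS Sequences

VARIABLES lo, hi   \* implementation state (illustrative)

\* ImplSpec == Init /\ [][Next]_<<lo, hi>>   (implementation elided)

\* The refinement mapping: the abstract module's variable q is defined
\* as an expression on the implementation's state.
Abs == INSTANCE AbstractQueue WITH q <- lo \o hi

\* Refinement is then the assertion ImplSpec => Abs!Spec, which TLC
\* can check by giving it ImplSpec as the behavior spec and Abs!Spec
\* as the property to verify.
====
```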
If you plan to write more instruction in this area, it would be very welcome.
Note that while you can specify strict serializability in TLA, it
requires use of temporal existential quantification, which you won't
be able to check with TLC unless you can provide a suitable witness.
I'm puzzled by this. If you don't want to limit what you can express
to simple properties, you have to be able to specify one system by a
more abstract system. And writing a philosophically correct spec of
that form requires variable hiding, which is represented in TLA by
temporal existential quantification. So it's not just
serializability; you can't even specify a FIFO queue in TLA without
temporal quantification. Clearly this isn't a practical issue, and
surely Ernie realizes this.

My guess is that Ernie thinks serializability is a special case, and
it requires a spec that's quite unlike a FIFO queue spec. He is both
wrong and right about this. There is a simple way to specify
serializability in TLA+ whose state involves only a collection of FIFO
queues (one for each process) and a central memory. And it has simple
fairness conditions on its actions. But he's right that, although in practice it's not fundamentally different from a FIFO queue spec, in principle it is fundamentally different, because it's not machine closed. That difference arises because serializability is an
inherently weird spec--as evidenced by the fact that an implementation
that can predict the future can return a value for a read that hasn't
yet been written. It could allow weird behaviors in circumstances in
which it would be possible to infer some future behavior by a process.
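The variable-hiding idiom referred to above can be sketched as follows. This follows the standard TLA+ pattern of hiding an internal variable with temporal existential quantification; InnerFIFO is assumed to be a module with variables in, out, and q, and a formula Spec describing the queue's behavior:

```tla
---- MODULE FIFO ----
\* Sketch of a FIFO queue spec with the internal queue hidden.
CONSTANT Message
VARIABLES in, out          \* the externally visible channels

Inner(q) == INSTANCE InnerFIFO

\* The "philosophically correct" spec: there exists SOME internal
\* queue q making the inner spec true.  TLC cannot check a \EE
\* formula directly, so in practice one checks Inner(q)!Spec with
\* q left visible.
Spec == \EE q : Inner(q)!Spec
====
```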
Anyway, the spec is in the Lazy Caching paper.

Leslie
You received this message because you are subscribed to the Google Groups "tlaplus" group.