If these three formulae describe the same collection of behaviors
(this seems to me to be the case, so I will assume that it is for the comments below),
then, semantically, there is no difference between them.
The meaning of a temporal formula is defined on p.315.
For each behavior `sigma` and each formula `F`, the semantics defines whether
the behavior satisfies the formula or not. (In some cases we may be unable to
decide this, for example if it depends on values left unspecified by the
axioms of TLA+; see Sec.6.2, p.67 and pp.71--72.)
Syntactically, the formulae are different. They are different sequences of symbols,
but this is irrelevant to what behaviors satisfy them.
If we split a formula into pieces, then it is no longer a single formula.
We create multiple new formulae, for example:
Init == x = 1 ∧ t = 1
Changes == ☐(t’ = t + 1 ∧ x’ = x + 1)
Each of these formulae, interpreted separately, describes a separate collection of behaviors.
If we conjoin them:
F1 == x = 1 ∧ t = 1 ∧ ☐(t’ = t + 1 ∧ x’ = x + 1)
then we create a new formula (F1), and interpret that formula
separately. Due to how the semantics is defined for temporal formulae
and for initial conditions (ignoring that `Changes` is not a well-formed TLA+ formula),
it so happens that a behavior satisfies the formula `F1` if, and only if,
it satisfies the formula `Init` and it satisfies the formula `Changes`.
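To make this concrete, here is a small Python sketch of that "if, and only if". It models a *finite prefix* of a behavior as a list of state dictionaries (infinite behaviors and stuttering are out of scope, and the helper names are mine, not part of the TLA+ semantics):

```python
# A minimal sketch: a finite prefix of a behavior is a list of states,
# each state a dict mapping variable names to values.

def sat_init(behavior):
    """Prefix analogue of Init == x = 1 /\ t = 1."""
    first = behavior[0]
    return first["x"] == 1 and first["t"] == 1

def sat_changes(behavior):
    """Prefix analogue of Changes == [](t' = t + 1 /\ x' = x + 1)."""
    return all(
        nxt["t"] == cur["t"] + 1 and nxt["x"] == cur["x"] + 1
        for cur, nxt in zip(behavior, behavior[1:])
    )

def sat_f1(behavior):
    """Prefix analogue of F1, interpreted as one formula."""
    first = behavior[0]
    return (first["x"] == 1 and first["t"] == 1) and all(
        nxt["t"] == cur["t"] + 1 and nxt["x"] == cur["x"] + 1
        for cur, nxt in zip(behavior, behavior[1:])
    )

# A prefix satisfies F1 iff it satisfies both Init and Changes:
good = [{"x": n, "t": n} for n in range(1, 6)]
bad = [{"x": n + 4, "t": n + 5} for n in range(1, 6)]
assert sat_f1(good) == (sat_init(good) and sat_changes(good))
assert sat_f1(bad) == (sat_init(bad) and sat_changes(bad))
```

Note that `sat_f1` does not call the other two helpers; it interprets the conjunction on its own, and the equivalence with `sat_init and sat_changes` falls out of how conjunction is interpreted, which is the point being made above.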
A behavior is a sequence of states, informally: sigma[0], sigma[1], ..., where:
\A n \in Nat: IsAState(sigma[n]) (see p.313 and p.316).
If the first state of a behavior assigns the value 1 to variable "x" and the value 1 to variable "t":
(sigma[0]["x"] = 1) /\ (sigma[0]["t"] = 1),
then the behavior `sigma` satisfies the formula `Init`.
If each step in a behavior satisfies the action:
(t’ = t + 1 ∧ x’ = x + 1),
then the behavior satisfies the formula `Changes`.
This requirement can be written as (again, ignoring that `Changes` is *not* a TLA+ formula):
\A n \in Nat: << sigma[n], sigma[n + 1] >> [[ "(t’ = t + 1 ∧ x’ = x + 1)" ]]
\equiv (* From the semantics of actions and state predicates defined in Sec.16.2,
          assuming that x, t have been declared as variables. *)
\A n \in Nat: /\ sigma[n + 1]["t"] [[=]] [[1]] [[+]] sigma[n]["t"]
              /\ sigma[n + 1]["x"] [[=]] [[1]] [[+]] sigma[n]["x"]
where [[=]] and [[+]] denote the interpretation of equality and addition within ZF (see p.292),
and [[1]] denotes the interpretation of the constant expression "1".
So, a behavior that starts with x |-> 5, t |-> 6 could satisfy `Changes`,
but does not satisfy `Init`. When we interpret the formula `F1`,
it turns out that such a behavior does not satisfy `F1`.
A behavior that starts with x |-> 1, t |-> 1, and whose second state is
x |-> 10, t |-> 1, satisfies `Init`, but neither `Changes` nor `F1`.
Note that this behavior doesn't "start" at a state with x |-> 1, t |-> 1,
and later "find out" that it violated `Changes`. A behavior simply satisfies,
or does not satisfy, each entire temporal formula.
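The two example behaviors can be checked directly. As before, this is only a sketch over finite prefixes (the helper names are mine):

```python
# Prefix analogues of Init, Changes, and F1 (illustrative only).
def sat_init(b):
    return b[0]["x"] == 1 and b[0]["t"] == 1

def sat_changes(b):
    return all(n["t"] == c["t"] + 1 and n["x"] == c["x"] + 1
               for c, n in zip(b, b[1:]))

def sat_f1(b):
    return sat_init(b) and sat_changes(b)

# Starts with x |-> 5, t |-> 6 and increments both:
# satisfies Changes, but not Init (hence not F1).
b1 = [{"x": 5 + k, "t": 6 + k} for k in range(4)]
assert sat_changes(b1) and not sat_init(b1) and not sat_f1(b1)

# Starts with x |-> 1, t |-> 1, second state x |-> 10, t |-> 1:
# satisfies Init, but neither Changes nor F1.
b2 = [{"x": 1, "t": 1}, {"x": 10, "t": 1}]
assert sat_init(b2) and not sat_changes(b2) and not sat_f1(b2)
```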
So, the "algorithms" that `F1`, `F2`, and `F3` describe (understood as the collections of behaviors, see [2, 3])
are the same. Also, "nondeterminism" has been given many meanings in
the literature. What happens here is that a formula can describe more than one behavior.
For example, `Init` describes several behaviors. `F1`, too, describes several behaviors,
not just one, for reasons described below.
The "algorithms" that `Init` and `Changes` describe are different.
Without splitting the formula `F1`, it is "equivalent" to:
F2 == x ∈ Nat ∧ t = 1 ∧ ☐(t’ = t + 1 ∧ x’ = x + 1 ∧ t = 100 ⇒ x = 100)
provided that we define "equivalent" to mean that the two formulae describe the same collection of behaviors.
Had we agreed that "equivalent" would mean something else,
for example, something about the sequence of symbols of `F1`, compared to
the sequence of symbols of `F2`, then, yes, they wouldn't be "equivalent",
because `F1` and `F2` are different sequences of symbols.
But that is a syntactic viewpoint, not a semantic one.
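One can see concretely why the extra conjunct in `F2` doesn't admit any behavior beyond those of `F1`: with t = 1 initially and t incremented at each step, the state with t = 100 is reached after 99 steps, where x equals its initial value plus 99, so t = 100 ⇒ x = 100 forces the initial x to be 1. A small simulation (Python; finite run, my own helper name, and reading the conjunct under the box as t’ = t + 1 ∧ x’ = x + 1 ∧ (t = 100 ⇒ x = 100), which I assume is what is intended):

```python
def satisfies_f2_prefix(x0, steps=120):
    """Simulate (x, t) from (x0, 1) under t' = t + 1, x' = x + 1,
    checking that t = 100 => x = 100 holds at every visited state."""
    x, t = x0, 1
    for _ in range(steps):
        if t == 100 and x != 100:
            return False
        x, t = x + 1, t + 1
    return True

# Only the initial value x0 = 1 survives the implication:
admitted = [x0 for x0 in range(0, 10) if satisfies_f2_prefix(x0)]
assert admitted == [1]
```

So the implication inside the box plays the role that the initial condition x = 1 plays in `F1`, which is why the two formulae describe the same collection of behaviors.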
A notion that involves similar observations is machine closure
(Sec.8.9.2, p.111, Sec.3.4.2). The main question involved in determining
machine closure of two properties (described by two separate formulae),
a safety and a liveness property, is whether the liveness property
imposes any "safety constraint" on behaviors that satisfy the safety property.
See also Sec.8.9.3, especially paragraph 1 on p.114.
Also, each of the three formulae describes a collection that contains infinitely many
behaviors (in TLA+, keeping in mind that these formulae are LTL-like).
The first formula:
F1 == x = 1 ∧ t = 1 ∧ ☐(t’ = t + 1 ∧ x’ = x + 1)
is satisfied by a behavior that starts with:
x |-> 1, t |-> 1 and increments them to x |-> 2, t |-> 2, etc.
(Assuming that, for arguments that are natural numbers, the operator "+" is defined as usual.)
F1 is satisfied by any behavior that assigns values to the variables x, t as above,
and assigns arbitrary values to all other variables (Sec.2.3, p.18). There are infinitely many values (sets) that
a behavior can assign to a variable, so there are infinitely many behaviors that
satisfy the formula F1. Moreover, this collection of behaviors is not a set (at the semantic level, in ZF),
because it contains "too many" behaviors (Sec.6.1, pp.65-66, see also p.311).
(The collection of behaviors described by `F1` is a (proper) class.)
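The point about other variables can be illustrated with a sketch (Python; the variable name "y" and the value sequences are hypothetical, and finite prefixes stand in for infinite behaviors):

```python
# F1 constrains only x and t; any other variable may take arbitrary
# values, so distinct assignments to a hypothetical variable "y" give
# distinct behaviors that all satisfy F1.

def sat_f1(b):
    return (b[0]["x"] == 1 and b[0]["t"] == 1 and
            all(n["t"] == c["t"] + 1 and n["x"] == c["x"] + 1
                for c, n in zip(b, b[1:])))

behaviors = [
    [{"x": n, "t": n, "y": f(n)} for n in range(1, 6)]
    for f in (lambda n: 0, lambda n: n * n, lambda n: -n)
]
# All satisfy F1, yet they are pairwise distinct behaviors:
assert all(sat_f1(b) for b in behaviors)
assert len({tuple(s["y"] for s in b) for b in behaviors}) == 3
```

Of course, only finitely many such variants fit in a program; the actual collection is a proper class, as noted above.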
On Friday, November 18, 2016 at 10:21:19 PM UTC-8, Ron Pressler wrote:
My question concerns the more theoretical aspects of TLA.
Consider the following three specifications, assuming we have `VARIABLES t, x` (for simplicity of notation, I’ll drop the stuttering invariant actions):
x = 1 ∧ t = 1 ∧ ☐(t’ = t + 1 ∧ x’ = x + 1)
x ∈ Nat ∧ t = 1 ∧ ☐(t’ = t + 1 ∧ x’ = x + 1 ∧ t = 100 ⇒ x = 100)
x = 1 ∧ t = 1 ∧ ☐(t’ = t + 1 ∧ ((t < 100 ∧ x’ = x - 1) ∨ x’ = x + 1) ∧ t = 100 ⇒ x = 100)
Each of these specifications admits exactly the same set of behaviors (which happens to contain just a single one). By taking the formulas apart we can see that the second is different from the first by looking at the initial condition, and we can see that the third is different from the first by examining their next-state relations. But can we tell them apart without breaking the formulas open (perhaps by conjuncting them with another formula)? Are they equivalent?
If they are, are the algorithms they describe “really” equivalent? Obviously, for the sake of abstraction-refinement relations, it makes sense to ignore internal nondeterminism, but on the other hand, we know that an algorithm that employs nondeterminism can have significantly different complexity than one that does not, so shouldn’t there be a way to tell them apart?