Let me clarify that my question had nothing to do with the practice of specification in TLA+, but only with the philosophical value of TLA's notion of algorithm equivalence; it was purely theoretical. My original thought was that in some cases trace-equivalence may seem unsatisfactory, because it admits as algorithms specifications that are unimplementable in practice (let alone efficiently). I understand that the semantics of a logic are usually unable to talk about its syntax, but there are formalisms that do have a more intensional notion of algorithm equivalence, and they achieve it by syntactically (and semantically) separating the abstract levels, where nondeterminism lies, from the algorithm level. Those formalisms are much more complicated -- and, at least so far, less practical -- than TLA, but I had thought that behavior-equivalence missed something that may be of importance.
Anyway, I found Stephan's answer, that the "problematic" specifications I gave as examples are known as "non-machine-closed", to be completely satisfactory, especially combined with this note
that clearly explains the tradeoffs and why preventing non-machine-closed specifications at the logic level is not what we want (it says we want machine closure only for a low-level algorithm spec, while for higher-level abstractions we want to allow non-machine-closed ones). I had come across the concept of machine closure before, but never understood its significance until now.
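For readers who haven't seen machine closure before, here is a minimal sketch of a non-machine-closed spec (my own toy example, not one from the note or from Stephan's answer). The safety part lets a counter increment forever, but the liveness conjunct demands that it eventually equal 10 -- which smuggles in an unstated safety constraint ("never increment past 10"), since a finite behavior that reaches 11 satisfies the safety part yet cannot be extended to satisfy the whole spec:

```tla
---------------- MODULE NonMachineClosed ----------------
EXTENDS Naturals

VARIABLE x

Init == x = 0
Next == x' = x + 1

\* Safety alone allows x to grow without bound (or stutter forever).
\* The liveness conjunct <>(x = 10) is not implementable as a fairness
\* condition on Next: a scheduler following Next can reach x = 11, a
\* finite behavior satisfying Init /\ [][Next]_x that has no extension
\* satisfying the full Spec. Hence the pair (safety, liveness) is not
\* machine-closed: the liveness property constrains safety.
Spec == Init /\ [][Next]_x /\ <>(x = 10)
==========================================================
```

By contrast, replacing `<>(x = 10)` with a weak-fairness condition like `WF_x(Next)` would give a machine-closed spec, which is why fairness conditions are the recommended way to state liveness for low-level algorithm specs.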
BTW, that note ends with a terrific (and apt) quote attributed to Stephen Jay Gould, "One cannot avoid complexity by definition", and I was wondering where it came from; I couldn't find the source.