I have observed that because unit tests are written by the programmer,
they will test only for problems that the programmer has already
foreseen. Therefore, they can only test for simple coding errors, not
for whether the unit does what it really needs to do.
In the real world, the TLA+ spec that engineers write mirrors the
implementation that they have in mind. So, a program unit will
usually correspond to parts of the spec, often directly implementing
an action in the TLA+ spec. Writing a unit test that checks that the
unit does correctly implement the action should be a great way to
catch errors. This fits well with the accepted idea that unit tests
should be written before one writes the code, since one should also
understand how the code implements the spec before writing the code.
This idea can be tried by engineers right now when they write their
unit tests to see how well it works. If it works as well as I think
it should, one can think about writing tools to help.
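To make the suggestion concrete, here is a minimal sketch of what such a unit test might look like. Assume a toy TLA+ action Enqueue(v) == queue' = Append(queue, v); the Queue class and the enqueue_action predicate below are hypothetical names invented for illustration, not part of any existing tool. The idea is simply to encode the action as a predicate on (state, next-state) pairs and assert that the implementation's step satisfies it:

```python
# Sketch: checking that a program unit implements a TLA+ action.
# Assumed spec action (hypothetical):
#   Enqueue(v) == queue' = Append(queue, v)

class Queue:
    """Hypothetical implementation unit under test."""
    def __init__(self):
        self.items = []

    def enqueue(self, v):
        self.items.append(v)

def enqueue_action(before, after, v):
    """The action Enqueue(v), written as a predicate relating the
    state before the step to the state after it."""
    return after == before + [v]

def test_enqueue_implements_action():
    # Check the action predicate over a few sample starting states.
    for initial in ([], [1], [1, 2, 3]):
        q = Queue()
        q.items = list(initial)
        before = list(q.items)
        q.enqueue(99)
        assert enqueue_action(before, q.items, 99)

test_enqueue_implements_action()
```

The test does not check internal coding details; it checks exactly the property the spec demands of the step, which is the point of the proposal above.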
I have not managed to convince any engineers to try it, and I don't
know of its being done anywhere--except in systems for proving the
correctness of the code, which are still too expensive to use in most
cases.
On Monday, April 6, 2020 at 8:32:33 AM UTC-7, Alex Tim wrote:
I think constructing tests for models is not a new idea...
It is one thing to prove the correctness of a spec, and quite another to check that a real program (the implementation) satisfies it.
Can someone give useful references to relevant publications?
Thanks in advance.