
Re: [tlaplus] Stuttering when adding

Hello Marc,

I haven't fully understood your specification yet, but here's an argument that explains why your property EnabledProcessNew may actually hold.

1. Since you set all numerical parameters to 1 and count modulo these parameters, all counters are always 0, as are all values sent on channels. In particular, channel reordering has no visible effect, and duplication can occur only once between two consecutive buffer resets (by the way, I wonder whether the condition Cardinality(ch.dup) = 0 should be negated). Also, the actions Client!Discard and Server!DropPacket are never enabled, and Client!PacketLoss is a stuttering action. The actions Server!ProcessNew and Server!Retry have essentially the same effect, except that the latter is enabled only if the buffer is non-empty (NB: applying Min in ProcessNew is dangerous when the buffer is empty, although it has no actual effect in that case).

2. Whenever the "up" channel contains a value, Server!ProcessNew is enabled. (This follows from 1 above.)

3. We assume that no channel loss occurs on the "down" channel from the server to the client.

4. Suppose we try to build a behavior that contains infinitely many visible actions but has an infinite suffix in which channel "up" never contains a value. The client may execute some Receive actions, but only finitely often, since it will exhaust its input channel, which then stays empty for as long as the server doesn't move. After that, both channels are empty and no visible action involving them is enabled, so the client must eventually do a Send, and Server!ProcessNew becomes enabled.

Note that I believe that this argument breaks down if you set NMAXTRANSACTION to 2. It should be easy enough to check using TLC (perhaps disabling a few actions to reduce the state space).
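Setting this up in TLC should only require overriding the constant; a sketch of the model configuration (the names Spec, NMAXTRANSACTION, and EnabledProcessNew are taken from this thread, and the rest would have to be adapted to the actual specification):

```tla
\* hypothetical TLC .cfg file -- adapt names to the actual spec
SPECIFICATION Spec
CONSTANT NMAXTRANSACTION = 2
PROPERTY EnabledProcessNew
```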

Stephan Merz

On Jun 10, 2013, at 4:47 PM, marc magrans de abril <marcmagra...@xxxxxxxxx> wrote:

After reading the Toolbox's dynamic help, I saw :(
>>Warning: Because of a bug in TLC, checking of liveness properties does not work in simulation mode.

That's a shame, the simulation mode is pretty useful when the state space is too big.

In any case, I still do not get why some formulas succeed even though they should fail. For example:
EnabledProcessNew ==
    <>[](ChNoLoss(down))
        => []<>(ENABLED <<Server!ProcessNew>>_<<up, down, srvTransRcvd, srvBuffer>>)

(That is, losses are enabled on only one of the two channels used for client-server communication.)

Note that I believe the Channel.tla spec is correct, because I have checked it using ChannelSpec.tla (although I could be wrong there, too).

On Monday, June 10, 2013 4:19:45 PM UTC+2, marc magrans de abril wrote:

I have a spec with a finite (although large) set of states. I am trying to check some temporal formulas. To do so, I have to rule out infinite stuttering steps at the end of a behavior by adding to the Spec:
/\ []<><<TRUE>>_myvars 
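
(For context, a minimal self-contained sketch of how such a conjunct is added; the module and variable names here are illustrative, not from my spec:)

```tla
---- MODULE NoStutter ----
VARIABLE x
vars == <<x>>

Init == x = 0
Next == x' = (x + 1) % 3

\* []<><<TRUE>>_vars asserts that steps changing vars occur infinitely
\* often, which rules out behaviors ending in infinite stuttering.
Spec == Init /\ [][Next]_vars /\ []<><<TRUE>>_vars
====
```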

However, some formulas that should fail still do not fail. I have been looking for an error in the spec, but I'm not able to find it.

I guess it could be possible that, when I use TLC in "model-checking mode", there is an (improbable) collision in the hash values of the states. How likely is this?

However, when I run in "simulation mode" (with different seeds), I get an error trace reporting a temporal-formula violation due to stuttering... but how is that possible? I thought I had removed these behaviors by adding "/\ []<><<TRUE>>_myvars" to my Spec.

How could it be possible? 

Thanks for the help,

PS: I attach the tla spec files. The parameters being used are:
