Dear all:
I am specifying a distributed system that consists of a cluster of nodes.
Each node's service state is periodically synchronized to the whole cluster.
At the implementation layer, each node has a cycle timer that periodically generates a synchronization message.
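To make the implementation-level behaviour concrete, here is a minimal Go sketch of what I mean (the names SyncMessage, runCycleTimer, and so on are just placeholders for illustration, not our real code):

```go
package main

import (
	"fmt"
	"time"
)

// SyncMessage carries one node's service state to the rest of the cluster.
type SyncMessage struct {
	NodeID string
	State  string
}

// runCycleTimer is roughly what each node does at the implementation layer:
// a ticker fires at a fixed period, and each tick broadcasts the current state.
func runCycleTimer(nodeID string, period time.Duration, out chan<- SyncMessage, stop <-chan struct{}) {
	ticker := time.NewTicker(period)
	defer ticker.Stop()
	for {
		select {
		case <-ticker.C:
			out <- SyncMessage{NodeID: nodeID, State: "current service state"}
		case <-stop:
			return
		}
	}
}

func main() {
	out := make(chan SyncMessage)
	stop := make(chan struct{})
	go runCycleTimer("node-1", 100*time.Millisecond, out, stop)

	// Consume a few synchronization messages, then stop the timer.
	for i := 0; i < 3; i++ {
		fmt.Println("sync:", <-out)
	}
	close(stop)
}
```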
What really confuses me is how to specify this mechanism abstractly.
It is easy to model the timeout action with the precondition that the node is in the working state, but the result is that the state machine is always driven by the timeout mechanism and never quiesces.
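In rough pseudocode terms, the naive model I have in mind behaves like the following sketch (Go is used only for illustration; the bound of five steps is just so the sketch terminates, whereas the real model has no such bound):

```go
package main

import "fmt"

// A toy version of the naive model: the timeout action's only precondition is
// that the node is in the Working state, so once Working it is enabled forever.
type Node struct {
	state     string // "Working" or "Stopped"
	syncCount int
}

// timeoutEnabled mirrors the precondition: the node is in the working state.
func timeoutEnabled(n *Node) bool {
	return n.state == "Working"
}

// fireTimeout is the timeout action: it generates one more sync message
// (here just counted) without ever disabling itself.
func fireTimeout(n *Node) {
	n.syncCount++
}

func main() {
	n := &Node{state: "Working"}
	// In the model there is no bound on this loop: the timeout is enabled
	// again immediately after it fires, so the behaviour never quiesces.
	for step := 0; step < 5 && timeoutEnabled(n); step++ {
		fireTimeout(n)
		fmt.Println("timeout fired, sync messages so far:", n.syncCount)
	}
}
```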
This scenario is somewhat like the so-called stuttering state: once the system has converged and there is no other valid trigger, only the timer keeps the state machine moving.
So, what is the principle for removing the stuttering cycle timer's influence when building the model?
Best Regards