I have a specification written in a style called "behavioral programming", which is intended for exactly this kind of problem: describing a system as an interaction of rules: https://pron.github.io/files/TicTacToe.pdf
On Tuesday, April 30, 2019 at 3:10:51 PM UTC+1, Jay Parlar wrote:
A group of folks at work came to me with an interesting question: they have a "rule-based expert system" (see http://sce2.umkc.edu/csee/leeyu/class/CS560/Lecture/lect-rule1.pdf as an example). A piece of data comes into the system and is pipelined/filtered through potentially hundreds of matching and prioritized rules, coming out the other side possibly modified by the actions in the rules.
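As a rough illustration only (the rule names, data fields, and priorities below are invented for this sketch, not taken from any real system), such a pipeline might be modeled in TLA+ along these lines: each rule has a guard, an action on the data record, and a priority, and a step fires some highest-priority rule whose guard matches.

---------------------------- MODULE RulePipeline ----------------------------
EXTENDS Integers

\* Hypothetical rule names and a deliberately tiny data record; a real system
\* would have hundreds of rules over a much richer record.
Rules == {"capAmount", "setFlag"}
Data  == [amount : 0..5, flag : BOOLEAN]

\* Guard, action, and priority of each rule, keyed on the rule name.
Matches(r, d) ==
    CASE r = "capAmount" -> d.amount > 3
      [] r = "setFlag"   -> d.amount = 0

Apply(r, d) ==
    CASE r = "capAmount" -> [d EXCEPT !.amount = 3]
      [] r = "setFlag"   -> [d EXCEPT !.flag = TRUE]

Priority(r) ==
    CASE r = "capAmount" -> 2
      [] r = "setFlag"   -> 1

VARIABLE data

\* Start from every possible input, so TLC explores the whole input space.
Init == data \in Data

\* One pipeline step: fire some highest-priority rule whose guard matches.
Fire ==
    \E r \in Rules :
        /\ Matches(r, data)
        /\ \A s \in Rules : Matches(s, data) => Priority(s) <= Priority(r)
        /\ data' = Apply(r, data)

\* If no rule matches, the data passes through unchanged.
Next == Fire \/ (~(\E r \in Rules : Matches(r, data)) /\ UNCHANGED data)
=============================================================================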
Because there are so many rules, it's often a problem when a new rule is proposed for the system. It's hard for any human to know all the rules that are present, so it's hard to know if a new rule will "conflict" with an existing rule. And if so, for which conditions would it conflict?
The question was: Would it be possible to translate the rules into TLA+, and have TLC check to see if the addition of a new rule would conflict with existing rules?
I have zero background with expert systems, and I haven't yet come across a general solution to this. When the literature mentions "conflict resolution", it appears to mean resolution at runtime, once the expert system finds two rules whose conditions match a particular piece of data. I'd like to use TLC to do that check beforehand, exploring the state space of potential inputs to see whether any of them would cause a conflict once a new rule is added. I had one theory about doing this via refinement, but it quickly fell apart when I tried to distinguish a new rule introducing "desired changes" from "undesired/unexpected changes".
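To make the "check beforehand" idea concrete against the hypothetical RulePipeline sketch above: one plausible reading of "conflict" can be written as a state invariant and handed to TLC. Because Init lets data range over every possible input, TLC evaluates the invariant for all inputs; this is only one of several reasonable definitions (another would be to require that the firing order of equal-priority rules not change the final result).

\* Conflict, read as: two distinct rules are enabled on the same data,
\* carry the same priority, and would transform the data differently.
NoConflict ==
    \A r, s \in Rules :
        ( /\ r # s
          /\ Matches(r, data)
          /\ Matches(s, data)
          /\ Priority(r) = Priority(s) )
        => Apply(r, data) = Apply(s, data)

Under this (assumed) definition, adding a proposed rule to Rules with its guard, action, and priority and rerunning TLC with NoConflict as an invariant would yield a concrete counterexample input whenever the new rule conflicts with an existing one.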
Is this a ridiculous thought? Has anyone tried something like this, or does someone with expert-system/knowledge-system expertise think this is even feasible?
Thanks,
Jay P.