
[tlaplus] Re: Specification as part of an agile software delivery process



I realized I made two errors in the post above. I want to correct the first and most important one, concerning Dafny: my initial introduction to Dafny examples and exercises was this repository:

https://github.com/microsoft/Ironclad

I found its content full of promise but extremely hard to follow: it is hard to see where the examples, the libraries, and Dafny itself start or stop ... TLA+ is better documented, through slides, PDFs, the Practical TLA+ book, etc. The official repo for Dafny is

https://github.com/dafny-lang/dafny

And that one does seem well organized. I do not want to throw a wet blanket on people's hard work, so I thought I'd better make the distinction clear. I will get back to Dafny as time permits.

>Frama-C or similar is indicated for the CD part of CDCI.
Should have written CI, not CD.

Now, returning to the main line of questioning about formal methods in CD/CI ... resistance primarily arises from line programmers. But consider: if I call a team meeting and say that, because we're onboarding a new client, we're putting aside our current work and switching to 8 business quarters of new work across 15 JIRA Epics, delivered in two-week sprints ... well, that's real work ... but nobody is going to stress out too much. After all, the work is coming in as JIRA Epics, which are broken down into smaller stories. As professionals we know Agile is about communication and risk reduction. Some of the features may not ultimately be needed, and some requirements may change. Attempting the work without that scaffolding would probably not be street smart. Programmers are fine here with a certain lack of complete certainty.

Now contrast: I tell the same team we're doing 8 quarters of other new work with JIRA Epics, but this time we're also going to do formal modeling. Even though formal modeling is likewise about first-order risk reduction ... the complaints will fly: isn't it a waste of time? Can't the code diverge from the spec at any time? A verified spec says nothing about what the code actually does! Are we going to formally verify all the third-party libraries we use too? Is that even possible? Here, line programmers suddenly become obsessive about perfect predictability and about the ease or feasibility of refining down from the spec (the JIRA "requirements") into the production code. Suddenly the comfort zone around uncertainty evaporates: if we can't check it all, we're not even going to try! This, when it happens, is a cultural issue more than a technical one, and it needs leadership to keep things in balance. That's why I say that, to me, formal methods are about primary risk identification and reduction, with the same ends as JIRA but different subject matter and means.

Finally, a one-off aside: people have used services and techniques from https://aphyr.com/ (I am not associated with them, have not talked to them, and have never used them) to help debug distributed services. You may find some of that useful, again in the area of project risk.

Regards





On Monday, November 23, 2020 at 1:16:43 PM UTC-5 recepient recepient wrote:
I am a TLA+ novice ... however ... I am plowing into TLA+ full force on a distributed KV store. It'll eventually include crash recovery, membership, and whatnot. On the other hand, I've been programming for a long time. Putting the two together in a reasonable way is something I'm keeping my eye on too. As to TLA+'s appropriateness to a larger workflow:

- It makes one get the correctness spec first and show that it works under the widest practical range of legal behaviors, rather than fixing bugs as customers drive them to your front door, which is often the more expensive route ... We target the model checker at the main risk areas only (see the sketch after this list). With professional programmers I wouldn't tend to lean on TLA+ for multi-threaded code, but for distributed code ... definitely yes.

- Incorporating TLA+ into CD/CI presupposes the ability to abstract up from code to the spec or to refine down from the spec to code (a two-way refinement) in order to have something to re-test without excessive labor. That's beyond the scope (and perhaps the main intent) of TLA+. However, building pre/post-condition and static analysis into the code, à la Frama-C or similar, is indicated for the CD part of CD/CI. That's not model checking per se, but the things Frama-C might check for you would fall out of the TLA+ spec, which is a plus, e.g. invariants and asserts.

- I initially viewed Verdi and Dafny as doing more than TLA+, not because TLA+ can't, but because Verdi and Dafny aim at that more granular two-way refinement ... but I can only battle one formal language at a time. Verdi is based on Coq, which is another huge learning curve, and Verdi doesn't do liveness. Dafny seems much better than Verdi in tool power; Dafny does do liveness, but the Git repo with the Dafny toolbox is a complete mess ... a rat's nest of files. I don't have that kind of time. Verdi is proof-oriented; TLA+ is model-checker-oriented, which is a different approach to the same end. Dafny, I believe, is model-checker-oriented too. On the whole, then, I plan on sticking with TLA+ as the most practical choice unless I have, or can be given, a reason to look at Dafny later.

- To the extent you can model the serverless setup as one basic architecture, you can check it once and then you're done, so to speak, unless you revisit the code or the model.

- In summary, I intend to use TLA+ for removing project risk (the system never becomes production-ready) and institutional risk (company X sucks because their product sucks). For short- to medium-term risk in CD/CI, I hope to carry key invariants and asserts from the spec into the code as unit tests, and/or to lift state paths from initial to terminal states out of TLC's .dot state-graph file (or something similar), alongside conventional unit and integration testing. Sketches of both ideas follow below.
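
To make the KV-store point from the first bullet concrete, here is the flavor of invariant I have in mind. It is a throwaway sketch written for this message, not lifted from my actual spec, and every name in it is made up:

---------------------------- MODULE KVSketch ----------------------------
CONSTANTS Keys, Vals, Replicas, NoVal   \* NoVal is a model value, not in Vals
ASSUME NoVal \notin Vals

VARIABLE store   \* store[r][k] = replica r's current value for key k, or NoVal

Init == store = [r \in Replicas |-> [k \in Keys |-> NoVal]]

\* In this toy Next a write reaches every replica atomically, so the
\* invariant below holds trivially. The real work is replacing Write
\* with an asynchronous replication protocol and letting TLC hunt for
\* interleavings that break Consistency -- that is the "main risk area"
\* I want the model checker pointed at.
Write(k, v) == store' = [r \in Replicas |-> [store[r] EXCEPT ![k] = v]]

Next == \E k \in Keys, v \in Vals : Write(k, v)

Spec == Init /\ [][Next]_store

TypeOK == store \in [Replicas -> [Keys -> Vals \cup {NoVal}]]

\* No two replicas ever disagree on a key they both hold.
Consistency ==
    \A r1, r2 \in Replicas, k \in Keys :
        \/ store[r1][k] = NoVal
        \/ store[r2][k] = NoVal
        \/ store[r1][k] = store[r2][k]
==========================================================================

TypeOK and Consistency are also exactly the kind of properties I would want carried into the code as asserts, per the Frama-C remark in the second bullet.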
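
And on lifting state paths for tests: TLC can dump the state graph it explores in GraphViz format, which is the .dot file I was referring to. From memory (so double-check against TLC's -help output) the invocation looks roughly like

  java -cp tla2tools.jar tlc2.TLC -dump dot states.dot MCKVSketch

where MCKVSketch is a hypothetical model-checking module wrapping the sketch above. Each path from an initial state to a terminal state in that graph is a candidate scenario for an integration test.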

Regards  
On Monday, November 23, 2020 at 11:49:09 AM UTC-5 georgio...@xxxxxxxxx wrote:
Hello tlaplus community,

I am Georgios, I am a developer and lead consultant at Thoughtworks.
One of the new trends in the industry is building systems using a "serverless architecture".

This usually means designing systems as small Lambda functions connected with queues and API gateways that your cloud provider takes care of running for you. 

The traditional testing pyramid doesn't work very well for those systems, partly because running many of those components locally is hard or even impossible, and also because those systems tend to be highly asynchronous.

I was wondering if TLA+ could have a role to play in increasing our confidence that those systems are working as expected. Is there perhaps a competitive value proposition for using TLA+ specifications with serverless, since testing is now harder and it is already challenging to test for bugs arising from asynchronicity?

I am curious whether anyone has any experience to share about using TLA+ in an agile team. Has anyone incorporated TLA+ and the act of writing specifications into their team processes and CI/CD pipeline?


Additionally, I recently wrote a blog post to demonstrate that specifying serverless systems in PlusCal is relatively easy and will catch nasty bugs.
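
To give a flavor of what I mean, here is a stripped-down sketch in that style. Everything in it is invented for this message and is much simpler than the example in the post: a producer Lambda puts messages on a queue, a consumer Lambda drains it, and we check that nothing is ever processed that was not sent.

--------------------------- MODULE LambdaQueue ---------------------------
EXTENDS Sequences
CONSTANT Messages   \* the finite set of messages the producer will emit

(*--algorithm lambda_queue
variables queue = <<>>,     \* the SQS-like queue between the two Lambdas
          sent = {},        \* messages the producer has put on the queue
          processed = {};   \* messages the consumer has handled

process Producer = "producer"
begin
  Send:
    while sent /= Messages do
      with m \in Messages \ sent do
        queue := Append(queue, m);
        sent := sent \cup {m};
      end with;
    end while;
end process;

process Consumer = "consumer"
begin
  Recv:
    while processed /= Messages do
      await queue /= <<>>;
      with m = Head(queue) do
        queue := Tail(queue);
        processed := processed \cup {m};
      end with;
    end while;
end process;
end algorithm;*)

\* The TLA+ translation generated by the Toolbox goes here.

\* Nothing is ever processed that was not sent.
Safety == processed \subseteq sent
===========================================================================

The interesting bugs only show up once you let the queue redeliver messages or let the consumer crash between dequeueing and recording its work; then the model checker starts producing exactly the asynchronous interleavings that are so hard to reach with ordinary tests. (If TLC reports a deadlock at the end, that is just normal termination; disable deadlock checking or check the generated Termination property instead.)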

Happy to hear your thoughts about the post too[1].


Best regards,
Georgios Chinis
