Re: [tlaplus] C0 and TLAPLUS
On 09/06/2016 11:42 AM, 'fl' via tlaplus wrote:
>> I had a quick look at C0 and it looks definitely easier to verify than
>> pure C (e.g. there is no typecasting). If I understand correctly, the
>> difference to Pluscal is that C0 has explicit memory management, an I/O
>> library and contracts.
> There is no memory management: no malloc, free, and so on. It is obvious
> that Pfenning has removed all the C features that make things harder to
> prove. His goal is to teach algorithms and how to prove them. I think he
> has done very interesting work.
It seems I scanned the pages too quickly: there is an alloc_array for
arrays and alloc for pointers (see page 9 of the language reference),
but there is indeed no free (garbage collection instead). This puts C0
closer to Java, but the direction C1 takes shows that one goal of the
language is definitely to lead a student toward the hard parts of C.
But C0 addresses are still a data type that can be compared (and in C1
also computed with), which means that when reasoning about data we must
take memory addresses into account. Apart from helping one understand the
algorithm better, this explicit representation of memory presents a
challenge to verification. I'm pretty sure you can already see a
difference when you introduce a reference/dereference function (from
Datatype -> Address \subset Nat) in TLA+ and compare the performance of
TLC when you model sorting via a Sequence versus explicit memory.
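To make the point concrete, here is a minimal C sketch (plain C, not C0, and purely illustrative) of sorting over explicit memory. The function names `swap_cells` and `sort_region` are my own; the point is that correctness now involves address arithmetic (base + i) and which cells the pointers name, on top of the value-level sortedness invariant:

```c
#include <assert.h>
#include <stddef.h>

/* Swap through explicit addresses: correctness depends on which
 * memory cells the pointers name, not only on the values. */
void swap_cells(int *a, int *b) {
    int tmp = *a;
    *a = *b;
    *b = tmp;
}

/* Insertion sort over an explicit memory region. A verifier must
 * track the address arithmetic (base + j) in addition to the
 * usual "prefix is sorted" invariant over the values. */
void sort_region(int *base, size_t n) {
    for (size_t i = 1; i < n; i++)
        for (size_t j = i; j > 0 && base[j - 1] > base[j]; j--)
            swap_cells(&base[j - 1], &base[j]);
}
```

Addresses are ordinary comparable data here (`&base[0] < &base[1]` is a valid expression), which is exactly the extra structure a model over a plain Sequence does not have to carry.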
You can also see the added load on a model checker or theorem prover
with C-style strings. A correct program needs to ensure 0-termination,
which in turn introduces the invariant that the string content may not
contain the 0-character. That's the kind of detail you'd like to
abstract away when you model messages between actors.
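As a sketch of that invariant (the name `is_cstring` is mine, and this is only one way to state it): a buffer of `cap` bytes holds a valid C-style string exactly when a terminating 0 occurs within the buffer, and the content before the terminator, by construction, contains no 0-byte.

```c
#include <stdbool.h>
#include <stddef.h>

/* Invariant for a C-style string in a buffer of `cap` bytes:
 * some byte within the buffer is '\0'. Everything before the
 * first '\0' is the string's content, so the content itself
 * can never contain the 0-character. */
bool is_cstring(const char *buf, size_t cap) {
    for (size_t i = 0; i < cap; i++)
        if (buf[i] == '\0')
            return true;   /* terminator found within bounds */
    return false;          /* no terminator: any strlen-style scan
                              would read past the buffer */
}
```

A checker or prover has to carry this invariant through every string operation, which is precisely the overhead you avoid by modeling messages as abstract sequences.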
>> I'm not sure what role you envision for C0 in combination to TLA
> I agree with Leslie Lamport's philosophy that programming languages
> are not the right level at which to design an algorithm.
>> the verification process - in particular you need to prove that memory
>> management is correct
> There is no memory management.
>> Insofar I'd prefer to teach a purely functional language like Haskell,
>> which is closer to logic (e.g. that it doesn't have assignment or the
>> way list comprehensions resemble mathematical set comprehensions) and
>> better suited to reasoning on parallelism.
> I think you can't avoid teaching one imperative language, at least
> because it gives you a feel for how the machine works. That is not
> the case for functional languages.
I agree. Luckily, with languages like Scala the paradigms are
converging. Reasoning about Scala is still hard, but it gives
programmers some pure building blocks to work with. My hope is that the
impure parts will shrink over time so that our ability to reason about
actual code improves.
> And I wonder if TLAPLUS is well suited to functional programming.
> I know that Leslie says that it is possible. I'm dubious.
What is similar in pure functional programming is that it forces state
to be explicit (often threaded through as an accumulator). A state
update is then a function from the old state to the new state. That's
exactly what you need to do in logic, too.
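To keep to one language in this mail, here is the same idea sketched in C rather than a functional language (the `State` struct and `step` function are invented for illustration): the state is an explicit value, and the update returns a new state instead of mutating the old one, which is the shape a TLA+-style next-state relation also has.

```c
/* Explicit state threaded through a pure step function:
 * next = step(current, input), with no mutation of `current`. */
typedef struct {
    int count;  /* how many inputs have been folded in */
    int sum;    /* running total of the inputs */
} State;

/* Pure update: builds and returns a NEW state value. */
State step(State s, int x) {
    State next = { s.count + 1, s.sum + x };
    return next;
}
```

The caller rebinds the result (`s = step(s, x)`), which mirrors the accumulator pattern of a functional fold and the old-state/new-state pairs of a logical specification.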
>> At least I have the feeling
>> that the gap between C0 and Pluscal is easier to understand than that
>> between Pluscal and TLA.
> Yes, that's for sure.