In ProofOfCorrectness, there's a whole lot of abstract talk regarding how formal proofs might work, and why they are not infallible. To make things more concrete, this page describes a real system for writing formal proofs, called Coq (see http://pauillac.inria.fr/coq/). You can use it for ordinary mathematics, but it also has special support for proofs about programming languages. The idea is that you develop the algorithm and its correctness proof ''together''. You can then extract code from the proof to get something executable (HaskellLanguage and MlLanguage are currently supported). So, unless there are glaring bugs in Coq itself, you have a guarantee that the program is really correct. The best thing is that someone else can run your proof through Coq and see whether it checks, without having to inspect your proof by hand.

''This is just mathematically impossible. You cannot prove that a program is correct, because our current programming languages are unquantifiable.''

Of course it is possible to prove a program correct. What is ''not'' possible is to write a program that accepts all correct programs and rejects all incorrect ones. As said above, Coq relies on the programmer to provide the proof; all Coq does is check the proof mechanically. (OK, it can do the boring parts of the proof automatically.)

----

What do you mean by "unquantifiable"? Here's a correctness proof for an absurdly simple function. Apologies to those whose sensibilities are offended by my choice of language. (The original version of this omitted the ''unsigned'' in the declaration of the return value of ''fact''.)

 unsigned int fact(unsigned int n)
 {
     unsigned int result = 1;
     unsigned int i;
     for (i = 1; i <= n; ++i)
         result *= i;
     return result;
 }

'''Lemma'''. For any ''n'', the body of the loop will be executed exactly ''n'' times, with ''i'' taking the values 1, 2, ..., ''n''.

'''Proof'''. Trivial. (Note that the increment of ''i'' cannot wrap around.)

''Wrong: the increment of ''i'' CAN wrap around. If n == UINT_MAX, then i <= n is always true. In this special case, ''i'' will wrap around again and again, and the loop will never end. How to correct this error is left as an exercise for the reader.''
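One possible correction (a sketch only, and certainly not the only fix): test ''i'' against ''n'' ''before'' incrementing it, so that ''i'' never has to exceed ''n''.

 /* Sketch of one possible fix. The condition is tested before the
    increment, so i never exceeds n and cannot wrap around, even
    when n == UINT_MAX. The loop still runs exactly n times, with
    i taking the values 1, 2, ..., n. */
 unsigned int fact(unsigned int n)
 {
     unsigned int result = 1;
     unsigned int i = 0;
     while (i < n) {
         ++i;
         result *= i;
     }
     return result;
 }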
'''Theorem'''. For any ''n'', the value returned from ''fact(n)'' is congruent modulo M to the factorial of ''n'', where M-1 is the largest number representable by the ''unsigned int'' type.

'''Proof'''. Trivial, given the lemma and the fact that arithmetic operations on ''unsigned int'' are required by the standard to return values congruent modulo M to their "pure mathematical" values.

There are some things that are pretty "unquantifiable"; for instance, anything that depends on the amount of memory you have available. Even in those cases, though, it's possible to give guarantees of the form "the program will either terminate abnormally or give the right answer", if the program is written carefully enough.

Of course, giving a formal correctness proof for any practical program would be a terribly arduous undertaking. But nothing in the design of the better-specified existing languages makes it impossible to write provably correct programs. Am I missing something? -- GarethMcCaughan

----

The proofs are indeed ''trivial'' to an experienced programmer, but I don't see how they are distinct from the operational reasoning that every programmer does (or should do), because no formal semantics are prescribed for the statements in this language. These are also proofs by induction on ''n'', but that is not explicitly stated. (Made explicit: the invariant is that after the iteration with ''i'' = ''k'', ''result'' is congruent to ''k''! modulo M; it holds trivially before the loop, and each iteration carries it from ''k'' to ''k''+1.)

To be picky, there are programs that are both impossible to prove "correct" and impossible to find defects in. This is due to two reasons: the inherent incompleteness of any formal logic, and the inability of a group of different people to agree on a single notion of "correct". I don't know whether the incompleteness of logic is really that much of an issue in practice, but disagreements over what something should do are an everyday occurrence. -- JoeHendrix

----

See also ProofsCanBeSimple.
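----

For the curious, here is an empirical spot check of the theorem above. It is not a proof, just a sanity test: compute ''n''! exactly in a wider unsigned type and compare it, reduced modulo M, with what ''fact'' returns. The bound of 20 is an assumption chosen so that ''n''! still fits in 64 bits; ''unsigned long long'' is at least 64 bits wide as of C99.

 #include <assert.h>
 #include <stdio.h>

 unsigned int fact(unsigned int n)
 {
     unsigned int result = 1;
     unsigned int i;
     for (i = 1; i <= n; ++i)
         result *= i;
     return result;
 }

 int main(void)
 {
     unsigned long long full = 1;  /* exact n!, valid for n <= 20 */
     unsigned int n;
     for (n = 1; n <= 20; ++n) {
         full *= n;
         /* Converting to unsigned int reduces the value modulo M,
            where M-1 == UINT_MAX, per the C standard's rules for
            conversion to unsigned types. */
         assert(fact(n) == (unsigned int)full);
     }
     printf("fact(n) is congruent to n! modulo M for n = 1..20\n");
     return 0;
 }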