A '''proof optimizer''' can generally be considered an optimizer for automated proofs. Such a tool could be used to optimize TypeChecking, StaticAssert, ConstraintLogicProgramming, DeclarativeMetaprogramming, '''code optimizers''' (which must prove that the transformed code has the same observable effects as the original), and even '''parsing''' (which is essentially a proof that a text obeys a set of rules).

To implement AutomatedTheoremProver''''''s, we typically rely on '''heuristics''' to favor some search paths over others. These heuristics can be considered domain-specific proof optimizers of sorts. While they don't adapt well to conditions outside the domain they were designed for, they reduce wasted search time and help one reach a proof quickly. Heuristics can be wrong, and that is okay: fast search is a matter of probabilities, of shrinking the expected search space, because when a heuristic fails the prover can always fall back on a more brute-force search.

-------------

'''Repetition in Proofs'''

One thing to note is that most AutomatedTheoremProver''''''s (type checkers, constraint logic provers, parsers, etc.) end up running the same or similar proofs over and over again. Type checking and parsing are repeated in every edit-compile-test cycle. This suggests we would do well to have proof optimizers that ''adapt'' to the code or other content being proven, learning and remembering previous proofs so that small variations in the content result in a blazing-fast "I already know this!" answer that costs only a slight variation on the stored proof. Essentially, one can cut out most of the failed search paths that occur when repeating proofs over and over. This leads to the:

-------------

'''Adaptive ProofOptimizer'''

A thought of mine, with the following train:

* You can easily cache proof-trees and search-trees from previous edit-compile-test cycles and use them to direct re-proofs after each subsequent cycle, avoiding much of the wasted search (a minimal sketch of such a cache follows this list).
* Given a large collection of AbstractSyntaxTree''''''s and such proof-trees, you can use ExpertSystem training techniques to create heuristics for 'best search paths' (i.e. based on which ones led to successful proofs). One could also train the same AI or ExpertSystem to recognize what appear to be failing or succeeding search paths, allowing confidence levels to be computed at any point during the proof.
* Once built, such an AI or ExpertSystem could also offer suggestions to the prover, aimed at avoiding and escaping fruitless trains of thought (those unlikely to lead to a proof) and at locating and staying on useful ones (those that do seem likely to lead to a proof). Occasional misses are okay: even being right 20% of the time, or being unsure and narrowing the choice to five search paths, would likely yield a dramatic improvement in proof speed over brute-force methods, and could easily be combined with other heuristics to improve confidence in the AI's guesses.
* Such an AI could continue to learn, because it would receive instant feedback about its own failures and successes from the algorithm. Even better, it would 'learn' the proofs for the specific projects you're working on, and thus take over the job of 'caching proof trees' while being a great deal more flexible about small variations in the code.
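As a rough illustration of the proof-tree cache in the first bullet, here is a minimal sketch in Python. The goal representation, the check_proof replay hook, and the prove_by_search fallback are hypothetical stand-ins rather than any existing prover's API; a real implementation would normalize goals far more carefully so that renamed locals and reordered code still hit the cache.

 # Minimal sketch of a proof cache reused across edit-compile-test cycles.
 # check_proof(goal, proof) and prove_by_search(goal) are hypothetical hooks.
 import hashlib, pickle
 
 class ProofCache:
     def __init__(self, path="proof_cache.pkl"):
         self.path = path
         try:
             with open(path, "rb") as f:
                 self.cache = pickle.load(f)   # goal-hash -> proof tree
         except (OSError, EOFError):
             self.cache = {}
 
     def key(self, goal):
         # Hash a (nominally normalized) form of the goal; a real tool would
         # canonicalize the AbstractSyntaxTree so trivial edits still match.
         return hashlib.sha256(repr(goal).encode()).hexdigest()
 
     def prove(self, goal, check_proof, prove_by_search):
         k = self.key(goal)
         proof = self.cache.get(k)
         if proof is not None and check_proof(goal, proof):
             return proof                      # "I already know this!"
         proof = prove_by_search(goal)         # slow path: full search
         if proof is not None:
             self.cache[k] = proof             # remember for the next cycle
         return proof
 
     def save(self):
         with open(self.path, "wb") as f:
             pickle.dump(self.cache, f)

Checking a cached proof is typically far cheaper than searching for one, and that gap is where the "I already know this!" speedup would come from.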
-------------

'''Biological ProofOptimizer'''

A programmer can also help direct a failing proof, especially when they wrote the code. With a good IDE they ought to be able to step in and help out whenever a proof is taking a noticeable amount of time. (Of course, the proof can continue in the background too, unless the programmer stops it.) They can make optimization suggestions, help out with a StaticAssert, etc. Under today's conditions such a feature would be something of a mistake: the effort would be forgotten on the next edit-compile-test cycle. But with an Adaptive ProofOptimizer that remembers and applies your suggestions in future runs, that is no longer a problem.

As an example, if the StaticAssert example above were failing to prove that 'get_range::start >= 0', the programmer might suggest ''try proving get_range::start == main::range_start''. This could be proven more easily, and would then serve toward the proof that get_range::start >= 0, because main::range_start >= 0 is already established. Relatedly, if the proof is failing because the code is wrong, the programmer can edit the code, helping the proof along as they do so. This idea could lead to an excellent IDE component (is there a N''''''ewIdeWishList?) because one would be able to whip code into working condition and observe proof failures even while editing is ongoing.

---------------

'''Adaptation Distribution'''

Suppose your IDE has learned your code, benefited from your suggestions, and can now parse, typecheck, and optimize a huge amount of code in very little time, and it's time to distribute that code. On another machine, the compilation grinds to a halt... not good.

When code written with a heavy amount of internal proofs is to be compiled on other people's machines, the effort that went into 'training' the AI to perform those proofs rapidly should be distillable into a smallish file that is distributed along with the rest of the code, and treated as a critical part of the project. It is essential that the AIs on remote machines be able to complete the proofs without advice, and be as fast (or nearly as fast) as on the original build. Alternatively (or additionally), one could work in a more ''distributed'' programming environment that naturally shares such training with other active compilations and executions - e.g. via some sort of distributed WikiIde. Certainly, over time, the quality of the heuristics and the number of corner-case helpers will be polished and improved through both automated learning and suggestions from smart programmers, and these 'generic' heuristics should be distributed with the compiler.

----

Related: a recent (20080815) article about a ProofEngine competition: http://tech.slashdot.org/tech/08/08/13/2028229.shtml
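Returning to the ideas above, here is a hypothetical sketch of how programmer suggestions could be remembered across edit-compile-test cycles (Biological ProofOptimizer) and exported as the smallish distributable file (Adaptation Distribution). The names ProofHints and proof_hints.json are illustrative only, not an existing tool, and a real prover would key hints on normalized goals rather than raw text.

 # Hypothetical store for programmer hints: "to prove this goal, try this
 # lemma first". The exported file would ship alongside the project.
 import json
 
 class ProofHints:
     def __init__(self, path="proof_hints.json"):
         self.path = path
         try:
             with open(path) as f:
                 self.hints = json.load(f)     # goal text -> list of lemmas
         except OSError:
             self.hints = {}
 
     def suggest(self, goal, lemma):
         # e.g. suggest("get_range::start >= 0",
         #              "get_range::start == main::range_start")
         self.hints.setdefault(goal, []).append(lemma)
 
     def lemmas_for(self, goal):
         # The prover would try to establish these lemmas before (or while)
         # searching for the goal itself.
         return self.hints.get(goal, [])
 
     def export(self):
         # The "smallish file" that travels with the code, so remote builds
         # can complete the same proofs quickly and without advice.
         with open(self.path, "w") as f:
             json.dump(self.hints, f, indent=2)

An IDE could call suggest() whenever the programmer offers advice during a stalled proof, and export() when packaging the project, so that remote builds start with the same advice the original author gave.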