See http://www.cs.miami.edu/~tptp/OverviewOfATP.html for a quick overview of automated theorem proving (ATP).

----

Automated theorem proving is the study of getting computers to prove logical and mathematical statements. It was one of the first tasks taken up by ArtificialIntelligence when the field emerged. See http://www.risc.uni-linz.ac.at/research/theorem/.

It is not just a matter of exhaustively enumerating instances of a theorem: it means applying logical deduction, induction, inference rules and search strategies (depth-first, breadth-first, best-first, iterative deepening) to arrive at a proof. There are branches of mathematics, such as ModelTheory and ProofTheory, which study proofs themselves.

Human mathematical proofs, while much more rigorous than daily discourse, tend not to be completely specified in logic and symbols; there is a lot of PlainEnglish in between. NaturalDeduction proofs in formal logic have a justification on every line, whereas in a human mathematical proof only the key inferences are justified, and intermediate steps are assumed to be "obvious" to the reader based on their mathematical knowledge. Logic and natural language can be mapped onto each other (logic, after all, was derived from language and discourse), so a standard mathematical proof can be translated into symbolic form and vice versa. Computers need every step specified, so the most convenient canonical representation for the task is symbolic logic.

Logics themselves are diverse (propositional, predicate, first order, second order, omega order, temporal, modal, etc.), but one of the most basic principles for proof is transitivity of implication (in logic, "->" is "implies" and "^" is "and"):

 ((a -> b) ^ (b -> c)) -> (a -> c)

where a, b and c can themselves be complex expressions, and the rule can be chained and nested to produce arbitrarily long proofs. So to prove a -> b, you can prove a chain a -> a1, a1 -> a2, ..., an -> b. Regular "math" symbols can also be included; for example, transitivity of "equals", a=b ^ b=c -> a=c, is an axiom of arithmetic. Of course there are many more complex proof strategies (e.g., by contradiction, counterfactual reasoning), but this is a foundation.

The first rule above in English (natural language): "if a implies b and b implies c, then a implies c", or "since a yields b and b gives c, if a is true it follows that c is true", or "it is obvious that c is true because b results in c and b follows from a, if a is true", etc. So logic is the least ambiguous form, even if more cryptic. (A toy sketch of searching for such a chain appears below.)

MathematicaLanguage has a prover; see http://citeseer.nj.nec.com/cache/papers/cs/8581/ftp:zSzzSzreports.adm.cs.cmu.eduzSzusrzSzanonzSz1992zSzCMU-CS-92-117.pdf/clarke93analytica.pdf

For comparison with computer deduction, more detail on the mechanics of human planning (which humans need in order to do proofs) is at http://citeseer.nj.nec.com/cache/papers/cs/26906/http:zSzzSzgral.ip.rm.cnr.itzSzbaldassarrezSz2002Bald-WorkAntiLearSyst-BiolPlauModeHumaPlan.pdf/a-biologically-plausible-model.pdf
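As a toy illustration of this chaining idea (nothing like a production prover), here is a sketch in Python that does a breadth-first search through a set of implications for a chain from a premise to a goal. The names (find_chain, the propositions "a", "b", "c") are made up for the example:

 from collections import deque
 def find_chain(implications, start, goal):
     """Breadth-first search for a chain start -> ... -> goal through
     implications given as (antecedent, consequent) pairs.
     Returns the propositions along the chain, or None if no chain exists."""
     successors = {}
     for antecedent, consequent in implications:
         successors.setdefault(antecedent, []).append(consequent)
     queue = deque([[start]])
     visited = {start}
     while queue:
         path = queue.popleft()
         if path[-1] == goal:
             return path
         for nxt in successors.get(path[-1], []):
             if nxt not in visited:
                 visited.add(nxt)
                 queue.append(path + [nxt])
     return None
 # From a -> b and b -> c we can reach a -> c by chaining:
 print(find_chain([("a", "b"), ("b", "c")], "a", "c"))   # ['a', 'b', 'c']

Real provers work over much richer logics than bare implication chains, but the same flavour of search - and the choice of strategy, breadth-first versus depth-first versus best-first - sits at the heart of them.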
----

The four color problem was proved by computer - see http://www.programming-reviews.com/Four_Colors_Suffice_How_the_Map_Problem_Was_Solved_0691115338.html

* ''The FCT was '''not''' proved by computer. It was proved by Haken and Appel. They proved that if a certain set of graphs existed then the theorem was true. Then they wrote a program that searched for an appropriate set of graphs. Such a set was found, hence the FCT is true. This is not the same as a computer-generated proof.''
** It '''was''' proved by computer; the ''method'' of proof was devised by Haken and Appel, but it was impractical for a human to complete. Printed out, the proof is four feet high.

Many older mathematicians do not accept it, since it was deduced by computer and is difficult to check, being so voluminous. However, no problems with it have been discovered to date.

* ''Mathematicians, both older and younger, know that it was not deduced by computer. What objections they have are more subtle.''

It is speculated that AndrewWiles' heroic proof of FermatsLastTheorem may be one of the last of its kind, as computers will soon be able to do such proofs automatically; they can traverse very large search spaces quickly. ''Actually, traditional proofs will continue to dominate many branches of mathematics. Some problems are conducive to computerized search algorithms, while others aren't.'' '''I'll bet Kasparov thought he would dominate chess for many years to come, till his historic match with DeepBlue :-)'''

There is even a library of problems used as test input for automated theorem provers, called TPTP (Thousands of Problems for Theorem Provers) - see http://www.cs.miami.edu/~tptp/.

----

One important and practical application of automated reasoning/automated theorem proving is the validation of digital designs. However, the collection of "Theorem Proving Haiku" at http://www.cl.cam.ac.uk/Research/HVG/haiku.html suggests that the use of proof assistants can be a somewhat frustrating occupation.

----

The MlLanguage started out as a MetaLanguage for the LCF ("Logic for Computable Functions") theorem prover developed by RobinMilner and others. It was designed to let users write theorem-proving tactics for LCF and its successors, HOL and Isabelle (see http://www.cl.cam.ac.uk/Research/HVG/HOL/). For a list of theorem provers and reasoning assistants, also see PhilipWadler's web site on FunctionalProgrammingInTheRealWorld.

----

PrologLanguage is based on automated theorem proving and can be used to create deductive theorem provers. ("Out of the box" it can't do so; you have to write or download programs for it. But it provides an environment that makes this kind of programming easier than in other languages, since it is relatively declarative and logic-based itself.) Deductive proof is what you find in maths texts; "LogicProgramming" is more about reducing logical formulae to "and"/"or" canonical (clausal) form and then showing that the negation of the goal leads to a contradiction - the resolution principle (a toy resolution sketch appears after the proof example below).

----

Theorems, corollaries and lemmas in proofs are analogous to functions or subroutines in programming. Just as a subroutine encapsulates more detailed logic and can be called from another subroutine as one line, a theorem allows a previously proved result to be used as one step even though its proof may be many lines. Axioms are assumed to be true without proof, like primitive statements in programming. So if p -> q has already been proved, say in n steps, and j -> p and q -> k are axioms, a proof of j -> k would look like this (in a natural deduction format; the "[...]" are justifications):

 Prove: {j->p, p->q, q->k} |- j->k
 1  j -> p                 [axiom A]
 2  p -> q                 [theorem T]
 3  q -> k                 [axiom B]
 ----
 4  (j -> p) ^ (p -> q)    [1,2 ^I]
 5  j -> q                 [4 Syllogism]
 6  (j -> q) ^ (q -> k)    [5,3 ^I]
 7  j -> k                 [6 Syllogism]

The details of T's proof don't need to be listed (you could include them, but it would be redundant; T has its own proof listed separately). Note also that the "justification path" of a proof does not need to be linear: even though the lines are numbered 1..7, the justifications combine arbitrary previous lines as needed. It is really a tree or nested structure - many texts draw proofs that way - but the format above is closer to regular mathematical proofs and easier to print from a computer program.

A ''theory'' (a branch of mathematics or logic) can be thought of as a set of axioms and theorems together with their proofs. Note that "|-" (the single turnstile) is a logical metasymbol; there are many distinctions in what these symbols mean, but it is essentially a higher-level "->". "=>", "<->", "|=" and "<=>" have specific names and meanings, again cryptic and technical, but they provide a shorthand that makes a proof more compact than writing it out in words, and easier to represent to a machine for AutomatedTheoremProving. "^I" or "&I" means "^ introduction": in NaturalDeduction there are introduction and elimination rules for each connective, plus many other metarules and metatheorems that are often used implicitly (not explicitly) in human mathematical proofs.
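For comparison, the same little theorem can be checked by a machine. Here is a minimal sketch in the Lean proof assistant (just one possible system; the proposition and hypothesis names are invented to mirror the example above). The three premises become hypotheses, and steps 4-7 collapse into applying the implications in turn:

 -- Hypothetical propositions standing in for j, p, q, k.
 variable (J P Q K : Prop)
 -- axiomA, theoremT, axiomB play the roles of lines 1-3;
 -- the proof term chains them together, as lines 4-7 do above.
 theorem j_implies_k (axiomA : J -> P) (theoremT : P -> Q) (axiomB : Q -> K) : J -> K :=
   fun hj => axiomB (theoremT (axiomA hj))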
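The resolution style mentioned in the PrologLanguage paragraph above reaches the same result by refutation: put everything in clause form, add the negation of the goal, and grind out resolvents until the empty clause (a contradiction) appears. A toy propositional sketch in Python; the clause encoding and function names are invented for the example:

 def resolvents(ci, cj):
     """All resolvents of two clauses; a clause is a frozenset of literals such as 'p' or '~p'."""
     out = []
     for lit in ci:
         complement = lit[1:] if lit.startswith("~") else "~" + lit
         if complement in cj:
             out.append((ci - {lit}) | (cj - {complement}))
     return out
 def refutable(clauses):
     """Saturate by resolution; True means the empty clause (a contradiction) was derived."""
     clauses = {frozenset(c) for c in clauses}
     while True:
         new = set()
         for ci in clauses:
             for cj in clauses:
                 for r in resolvents(ci, cj):
                     if not r:
                         return True   # empty clause: the clause set is contradictory
                     new.add(r)
         if new <= clauses:
             return False              # saturated without finding a contradiction
         clauses |= new
 # j->p, p->q, q->k as clauses, plus the negated goal ~(j->k), i.e. j and ~k:
 print(refutable([{"~j", "p"}, {"~p", "q"}, {"~q", "k"}, {"j"}, {"~k"}]))   # True, so j -> k follows

Prolog's own inference (SLD resolution over Horn clauses) is a disciplined special case of this idea.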
Note also that programs themselves can be proved correct - see http://www.cs.ucsd.edu/users/gary/130/lectures/l.proof.pdf. There is feedback and feedforward between logic, mathematics and programming: mathematics studies proofs, proofs are used in maths, logic formalizes maths and makes it more rigorous, programs can automate many aspects of maths and proofs, and proofs can verify programs.

Logic is essential even for day-to-day programming; '''if...then...else''' is a basic construct. However, logic uses naming conventions opposite to "good variable naming" in programs: in propositional logic you would represent "it is raining" by "R". In predicate logic (with arguments) "John went to the store" could become W(j,s), where "W" means went (or going), j = John, and s = store. This makes it easier for statements to be manipulated by logical inference, in the same way an engineer can think of velocity as v or force as F and solve a problem as vector geometry by manipulating symbols. So even a purely mathematical statement like:

''Let E be a subset of the real line '''R''' containing at least two points. Then E is connected if and only if E is an interval.''

could be formalized as:

'' '''forall''' E subset '''R''' (|{p : p '''elem''' E}| >= 2 -> (Connected(E) <-> Interval(E))) ''

"Connected()" and "Interval()" could even be shortened to Con(), Ivl() or even C(), I(); usually there are conventions, e.g. "differentiation" is written d(y)/dx. '''subset''', '''elem''', '''forall''' and '''exists''' also have standard symbols (too bad they can't be drawn in wiki, although MathPlayer could allow this in future). This is not necessarily how mathematicians work day to day, but in principle any mathematical theorem or axiom can be translated this way, and such a translation is needed if a computer is to do AutomatedTheoremProving.

For an interesting history of "first uses" of common mathematical and logic symbols see: http://members.aol.com/jeff570/set.html

BertrandRussell, in his PrinciplesOfMathematics and (with AlfredNorthWhitehead) PrincipiaMathematica, tried to set out a program for deriving the foundations of mathematics from a few logical principles - see http://www.kirjasto.sci.fi/brussell.htm. It was many pages before he could prove that 2+2=4.

''Note that this doesn't mean that it necessarily takes many pages to prove that 2+2=4 with the same rigour as PrincipiaMathematica. From:''

* Volume 1 begins with a lengthy Introduction containing sections entitled "Preliminary Explanations of Ideas and Notations," "The Theory of Logical Types," and "Incomplete Symbols."

''Most of this was not needed to prove 2+2=4.''
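Indeed, in a modern proof assistant the arithmetic fact by itself is a one-liner, because the foundational machinery lives in the system's library and kernel rather than on the page. A minimal sketch in Lean (an illustration only, and not a claim about what rigour PrincipiaMathematica's own system requires):

 -- 2 + 2 = 4 holds by computation on natural-number literals,
 -- so a proof by reflexivity is accepted by the kernel.
 example : 2 + 2 = 4 := rfl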
----

Automated theorem proving triggered the invention of the TuringMachine and Church's LambdaCalculus: AlanTuring and AlonzoChurch were trying to answer DavidHilbert's Entscheidungsproblem, the "decision problem" of whether an algorithm can decide the validity of logical statements (http://en2.wikipedia.org/wiki/Entscheidungsproblem).

----

Also note that KurtGoedel's incompleteness theorems showed that PrincipiaMathematica's goal of deriving all of mathematics from a consistent and complete set of logical principles cannot be achieved, although the system itself remains useful.

----

Many mathematicians have been motivated by the idea that mathematics could one day be automated. As far back as the 1600s, GottfriedWilhelmLeibniz wanted to create "an algorithmic procedure which could be applied to decide truth of statements in this calculus" - see http://www.cs.cornell.edu/Info/Projects/NuPrl/Intro/AutoDeduction/autoded.html. JohnMcCarthy, too, originally invented LispLanguage for work on theorem proving. Both Leibniz and BlaisePascal built mechanical adding machines, the most advanced computational devices of their time; see http://www-jime.open.ac.uk/98/7/pascal.html for a description. They saw all sorts of tasks being mechanized, and expected that one day even mathematics and thought could be too. See also ArtificialIntelligence.

----

Proof General is a generic interface for proof assistants, currently based on the customizable text editor Emacs - something like a browser/editor for viewing and modifying proofs. See http://proofgeneral.inf.ed.ac.uk/.

----

If you would like to try out how interactive theorem proving works, read http://www.cl.cam.ac.uk/Research/HVG/Isabelle/dist/Isabelle2004/doc/tutorial.pdf

"DC Proof" (http://dcproof.com/) also appears to be an interactive theorem proving system. Freeware.

----

See also CoqProofAssistant, WatsonTheoremProver.

----

CategoryArtificialIntelligence