
The End of Mathematical Proofs by Humans?

Posted by timothy
from the somehow-I-doubt-it dept.
vivin writes "I recall doing a bunch of mathematical proofs when I was in high school. In fact, proofs were an important part of Math according to the CBSE curriculum in Indian schools. We were taught how to analyze complex problems and then break them down into simple (atomic) steps, similar to the derivation of a physics formula. Proofs form a significant part of what mathematicians do. However, according to this article from the Economist, it seems that the use of computers to generate proofs is causing mathematicians to 're-examine the foundations of their discipline.' However, critics of computer-aided proofs say that the proofs are hard to verify due to the large number of steps and hence may be inherently flawed. Defenders point out that there are non-computer-aided proofs that are also rather large and unverifiable, like the classification of finite simple groups. Computer-aided proofs have been instrumental in solving some vexing problems, like the Four Color Theorem."
This discussion has been archived. No new comments can be posted.

  • Science by AI (Score:2, Insightful)

    by MaDeR (826021) on Wednesday April 06, 2005 @04:30AM (#12151955)
    I think that in the far future all science will be done by AI, because knowledge will be too complicated and complex to understand for even the most intelligent human, whether on a chemical boost, genetically engineered, or stuffed with chips.
  • by Suhas (232056) on Wednesday April 06, 2005 @04:31AM (#12151958)
    ...From TFA if a computer is used to make this reduction, then the number of small, obvious steps can be in the hundreds of thousands--impractical even for the most diligent mathematician to check by hand. Critics of computer-aided proof claim that this impracticability means that such proofs are inherently flawed.

    So basically what they are saying is that if the proof is too long to be checked, then it is flawed? WTF?
  • Creativity (Score:5, Insightful)

    by Daxx_61 (828017) on Wednesday April 06, 2005 @04:33AM (#12151961) Homepage
    Much of mathematics isn't just grunt power, there is also a lot of creative work going on there. Without humans to drive the computers doing the work in the right directions, it could take a long time before a computer would be able to get its proof - it simply doesn't know what it is looking for.

    I for one welcome our new robotic theorem proving overlords.
  • by John Allsup (987) <`moc.liamg' `ta' `euqsilahc.s'> on Wednesday April 06, 2005 @04:34AM (#12151965) Homepage Journal
    If computers could produce short, sweet, beautiful proofs of interesting and useful theorems, I would welcome them with open arms.

    As a tool to produce vast quantities of precise logical porridge quickly, computers have no equal in today's world, yet that is not what real mathematical proofs should be about.

    Mathematical proofs should show short, clever ways of connecting otherwise disparate concepts that are only obvious in hindsight. This is where computers will always be weaker.
  • by Anonymous Coward on Wednesday April 06, 2005 @04:35AM (#12151969)
    Simply prove the program correct; and all its outputs will be correct?

    To do that.... well, just make sure the program was designed by a correct computer.

  • by damieng (230610) on Wednesday April 06, 2005 @04:39AM (#12151981) Homepage Journal
    Yes, that's right.

    If you can't independently examine and verify your "proof" then how can it be considered proof of anything?
  • by wwahammy (765566) on Wednesday April 06, 2005 @04:40AM (#12151982)
    Flawed in the sense that it can't be peer reviewed to be "proven." It could be true, but because it can't be independently verified, it's not a proof - you can't prove it. Now whether this is truly a situation where a proof is unprovable, or just people reacting to the thought of their profession being eliminated by technology, is another debate entirely.
  • Uh, I doubt it. (Score:1, Insightful)

    by Anonymous Coward on Wednesday April 06, 2005 @04:40AM (#12151983)
    Since you're going to need a human to write the proof of correctness for the program that wrote the other proofs, aren't you now?

    Anyway, the entire point of a proof is to be read by humans. This isn't like, say, a computer program, where the point is what it does when run. A mathematical proof is something humans create in order to convince other humans of things beyond all doubt. The four color theorem proof was pretty closely watched; I guarantee you if you just go up to someone and say "statement *blah* because my expert system verifies it to be the case" there is going to be some fucking doubt still there.
  • by Dancin_Santa (265275) <DancinSanta@gmail.com> on Wednesday April 06, 2005 @04:41AM (#12151989) Journal
    Mathematical proofs should show short, clever ways of connecting otherwise disparate concepts that are only obvious in hindsight. This is where computers will always be weaker.

    So you're saying that even a theoretical sufficiently advanced computer would be unable to match the logic and creativity of a human being? I think a simple brute force counter written in Mathematica (unlimited integer lengths) whose output was executed by a CPU would prove you wrong.

    Computers can separate wheat from chaff. That's what AI is all about. No, seamless human interaction is still a ways off, but as for number crunching, comparing large, disparate sets, and logical "thinking", computers are vastly better suited to the task than humans in every respect.
  • by irexe (567524) on Wednesday April 06, 2005 @04:43AM (#12151991)

    Computer proofs, like the graph color proof, are not proofs that are completely generated by a computer. The computer is merely used to brute-force a fairly large number of 'special' cases which together account for all cases. The construction of the proof method is and will remain human work, lest we create AI that matches our own. In short, they are computer-aided proofs only.

    Further, and more importantly, at this point we do not have, and are not likely to have, a machine that can prove any provable theorem (and FYI, not all truths in mathematics are provable!).
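    The "brute force a large number of special cases" division of labour can be sketched in miniature (my own illustrative example, not from the article): the human supplies the claim and the case split, and the machine exhaustively grinds every case.

```python
from itertools import product

def is_tautology(formula, n_vars):
    """Exhaustively verify a boolean formula over all 2**n_vars cases.

    The human supplies the claim (the creative step); the machine just
    grinds through every case -- the shape of a computer-aided proof.
    """
    return all(formula(*bits) for bits in product([False, True], repeat=n_vars))

# "->" encoded as (not a) or b; Peirce's law: ((p -> q) -> p) -> p
imp = lambda a, b: (not a) or b
peirce = lambda p, q: imp(imp(imp(p, q), p), p)
print(is_tautology(peirce, 2))                  # True: holds in all 4 cases
print(is_tautology(lambda p, q: imp(p, q), 2))  # False: fails at p=True, q=False
```

    The four-colour proof has the same shape, just with ~2,000 configurations instead of four truth assignments.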

  • by Anonymous Coward on Wednesday April 06, 2005 @04:44AM (#12151998)
    A proof that can't be checked is definitely flawed.
  • by Mattb90 (666532) <matt@allaboutgames.co.uk> on Wednesday April 06, 2005 @04:45AM (#12152001) Homepage Journal
    During my preparation for my interview to study Mathematics at Cambridge last year, one of the discussion topics that came up was computer-aided proof. Between the interview-experienced teacher, a school colleague who was also applying, and myself, we came to the idea that computer-aided proof might not be proof at all, because proof for mathematicians is the ability to reproduce the workings by yourself - it might take a very long time, but the idea should be that a human could derive them in order for them to be considered true proofs of human concepts. Whether many in the mathematical community in England take this view, I won't know until the end of the year when I hope to start my course - but based on this debate, it wouldn't surprise me if quite a few did.
  • by Freaky Spook (811861) on Wednesday April 06, 2005 @04:45AM (#12152002)
    I remember how much I hated learning algebra, trig, calculus etc. & how much the theory sucked. I never saw any point to it & loved it when I discovered my TI-83 could do pretty much everything.

    Although I discovered easier ways to do the arithmetic, I still knew the underlying theory of the equations & what the numbers were actually doing, not just what a computer was telling me.

    Students should learn this: these are the basic building blocks of a science that dictates pretty much everything on this planet, & although they won't have a use for everything they are taught, they will have enough knowledge to "problem solve", which is what most of high school maths is designed to do - it trains our brains to think logically & be able to work out complex problems.

    How are people going to be able to further physics, medicine, or biology if they get into their respective tertiary courses without understanding the basic principles of all science & have to learn it all over again?

    Or what about when computers just won't work & things have to be done by hand?

    It's fair to integrate computers into maths, but not at the expense of the theory that makes us understand how things work. We should not put all our faith in technology just because it's the easy thing to do.
  • Re:Science by AI (Score:5, Insightful)

    by Zork the Almighty (599344) on Wednesday April 06, 2005 @04:46AM (#12152009) Journal
    Well, at least in regards to math, I strongly doubt that this will ever be the case. Mathematics is developed over decades and centuries. With a few notable exceptions, it doesn't just fall out of the sky in textbook form. Most areas of math started out as a gigantic mess (e.g. calculus, linear algebra, even geometry), and it has taken the work of countless researchers, authors, and teachers to distill and refine it. This process will continue, and it is inevitable that the subjects which baffle us today will be hammered out and taught to grade school students eventually. Well-developed theory makes mathematics easier, and this in turn fuels new discoveries.
  • by Hideyoshi (551241) on Wednesday April 06, 2005 @04:48AM (#12152018)
    No, they didn't "prove" any such thing - you've been reading too much Roger Penrose if you think so. There's absolutely no evidence that human minds have magical access to truths which formal systems don't; there once was a time when it was thought "obvious" that parallel lines could never meet, and it's still hard for many people to believe that there is no such thing as universal time.
  • Stupid post name (Score:1, Insightful)

    by Anonymous Coward on Wednesday April 06, 2005 @04:59AM (#12152058)
    Who the hell came up with this stupid name for the post?

    The article is about computer-aided proof, where the human makes the essential proof decisions while leaving to the computer the tasks of properly formulating and performing very simple, yet laborious, steps (which would otherwise distract the human from the essential stuff).

    You **cannot** actually prove a really complex theorem in a fully automated way, because the reasoning is non-monotonic, which implies huge computational complexity. Machines are not yet good at being creative.
  • by MonMotha (514624) on Wednesday April 06, 2005 @05:08AM (#12152083)
    A mathematical proof is nothing but a manner of convincing someone (other mathematicians) that what you assert is indeed correct, given the following presumably known, accepted as true, or otherwise previously proven "things". The whole point of a proof is that someone else knows wtf it is talking about.

    I could assert that 2+2=4. You may believe me, but have I really proven it to you? Not yet, so you don't need to believe me. If I instead say that the cardinality of the union of two disjoint sets, each of cardinality 2, is 4, then I've (sort of) shown you that 2+2=4, assuming you accept my definitions of disjoint sets, set union, and set cardinality (which presumably I've proven elsewhere, or taken from some other source). Now do you believe me that 2+2=4?

    I could assert anything. You may or may not know if it's true. A proof is just something to back up my assertion and convince you that I'm right. Hence, if a proof is unintelligible, it's pretty darn worthless.
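    The set-cardinality argument above is mechanical enough that a few lines can replay it (an illustrative sketch; the element names are arbitrary):

```python
# Two disjoint sets, each of cardinality 2; element names are arbitrary.
a = {"x1", "x2"}
b = {"y1", "y2"}

assert a.isdisjoint(b)              # the hypothesis: no shared elements
assert len(a) == 2 and len(b) == 2  # each set has cardinality 2
# For disjoint sets, |A ∪ B| = |A| + |B|, so the union has 2 + 2 elements.
print(len(a | b))  # 4
```

    Which, of course, only pushes the trust onto the machine doing the counting - exactly the point being argued in this thread.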
  • Re:Creativity (Score:5, Insightful)

    by Zork the Almighty (599344) on Wednesday April 06, 2005 @05:10AM (#12152092) Journal
    Without humans to drive the computers doing the work in the right directions, it could take a long time before a computer would be able to get its proof - it simply doesn't know what it is looking for.

    To some extent that's true, except in areas where human understanding has reduced mathematical proof to a mechanical process. For example, verifying algebraic identities, or even geometric proofs. A more advanced example is the Risch algorithm for elementary integration. It amounts to a proof that an integral either is or is not expressible in terms of elementary functions. Eventually we come to understand an area to such an extent that we can implement mechanical algorithms and move on. The proper role of the computer is to carry out these algorithms, so that we can use them to discover something else.
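    The "mechanical process" for verifying algebraic identities can be sketched as follows (an illustration of the idea, not the Risch algorithm itself; it uses random evaluation, which for polynomials gives overwhelming though not absolute evidence, and a symbolic expansion would turn it into an actual proof):

```python
import random

def probably_equal(f, g, trials=200):
    """Mechanically test a polynomial identity by evaluating both sides
    at random integer points. Two distinct low-degree polynomials agree
    at only finitely many points, so agreement at many random points is
    overwhelming evidence of identity.
    """
    return all(f(x) == g(x)
               for x in (random.randint(-10**6, 10**6) for _ in range(trials)))

print(probably_equal(lambda x: (x + 1) ** 2, lambda x: x * x + 2 * x + 1))  # True
print(probably_equal(lambda x: x * x, lambda x: x ** 3))                    # False
```

    Once a domain is understood this well, the checking really is routine - which is the poster's point about moving on to discover something else.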
  • by geekpuppySEA (724733) on Wednesday April 06, 2005 @05:11AM (#12152096) Journal
    However, critics of computer-aided proofs say that the proofs are hard to verify due to the large number of steps and hence, may be inherently flawed.

    Because if there's one thing that humans are better at than computers, it's performing large numbers of repeated steps. Flawlessly.

  • by Fry-kun (619632) on Wednesday April 06, 2005 @05:18AM (#12152115)
    To the opponents of computer-aided proof (with their hard-to-check argument), I would say this:
    It's easy to check a proof. It's hard to come up with a proof. Computers are great at checking proofs - all the program needs to do is verify whether the steps are logically correct or there's a discrepancy. Coming up with a proof, on the other hand, is a very hard task (being NP-complete, unless defined in a certain way) and thus usually requires a human (or sometimes, a lot of humans) to work on the problem.
    A computer would not be able to come up with new principles of mathematics in order to tackle a given problem; it would only try to use every trick that had been discovered by the time the program was written (of course that doesn't have to be the case, but my point is that human intervention would be required to "teach" the computer about a new concept so that it would try to use it in the proof)
    That is not to say that computers are useless in proofs. Obviously, they're often used as assistants in proving something-or-other, but there's also a direction in computer science where your computer would take a program that you wrote in a certain manner, and prove certain properties about it, e.g. that it is not possible to get out of array bounds in your C program...
    *yawn*
    time to sleep
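    The "checking is easy" half of that argument can be made concrete with a toy proof checker (an illustrative sketch with a made-up encoding, not a real proof assistant):

```python
def check_proof(axioms, steps):
    """Verify a toy Hilbert-style derivation line by line.

    Each line must be an axiom or follow from two earlier lines by
    modus ponens; implications are encoded as ('->', antecedent,
    consequent). An illustrative sketch, not a real proof assistant.
    """
    proved = []
    for line in steps:
        by_mp = any(('->', ante, line) in proved for ante in proved)
        if line not in axioms and not by_mp:
            return False          # a step neither axiomatic nor derivable
        proved.append(line)
    return True

axioms = ['p', ('->', 'p', 'q'), ('->', 'q', 'r')]
proof = ['p', ('->', 'p', 'q'), 'q', ('->', 'q', 'r'), 'r']
print(check_proof(axioms, proof))   # True: every step checks out
print(check_proof(axioms, ['r']))   # False: 'r' doesn't follow directly
```

    Checking each step is a linear scan; *finding* the sequence of steps is the search problem that blows up.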
  • by Zork the Almighty (599344) on Wednesday April 06, 2005 @05:18AM (#12152116) Journal
    Whether or not something is a proof is entirely our distinction to make. We choose the axioms on which the proof is based. To paraphrase Bill Klem (a famous umpire): when asked whether a pitch was a ball or a strike, "It isn't anything until I call it".
  • by sien (35268) on Wednesday April 06, 2005 @05:22AM (#12152125) Homepage
    Umm. So does this mean right wingers can't do maths?

    Secondly, the claim that a magazine that opposes the death penalty and supports gay marriage is a right-wing rag (which presumably you meant in US terms) is kinda amusing.

    The Economist, correctly stated, is a liberal magazine. It supports liberal economics and liberal social policy. Unfortunately the word 'liberal' in the US has been badly distorted.

  • by Anonymous Coward on Wednesday April 06, 2005 @05:31AM (#12152153)
    If by left-wing you mean "gun-toting, Europe-hating, and neoliberal", then I agree with you.
  • by ratta (760424) on Wednesday April 06, 2005 @05:34AM (#12152163)
    I was surprised to find on /. a title such as "The End of Mathematical Proofs by Humans?", but after reading the article I realized that the computer is only being used to speed up calculations, and the ideas are still born in the analog human brain. I have always used computer programs in math to solve equations, to probe combinatorial game theory problems, etc... When you find a way to make the computer "invent" proofs, just let me know; I'll call it AI.
  • by Sloppy (14984) * on Wednesday April 06, 2005 @05:50AM (#12152212) Homepage Journal
    Mathematical proofs should show short, clever ways of connecting otherwise disparate concepts that are only obvious in hindsight. This is where computers will always be weaker.
    Always be weaker? Some day, I think you'll eat those words.

    Getting a program working is the first step. Making it optimize its output is the next. I think you're going to see better proofs from computers.

  • by mcc (14761) <amcclure@purdue.edu> on Wednesday April 06, 2005 @05:53AM (#12152224) Homepage
    One could argue that if a proof ever contains repetitive elements, then this is a bad thing and you want to be adding some sort of abstraction to the proof anyway.

    Anyway, the problem isn't the computer's ability to perform flawlessly; the problem is our ability to accurately specify to the computer what we want it to do. It's the whole "fast working idiot" thing: mechanical perfection isn't much good if we wind up directing the computer to perfectly, flawlessly do the wrong thing very quickly. We have enough trouble convincing ourselves real-world software is going to do what we wanted it to after it compiles, and in that case we at least have the advantage that we can run it and test it to see if it does what we expect. With software-generated proofs, not so much, since the program IS the test.

    I think computer-aided logic can be useful if we just think of a proof-generating software program as a funny, mechanically verifiable sort of abstraction, but if you find yourself making an argument that rests on assuming that a computer program you wrote does what you think it does, then this is problematic.
  • by poopdeville (841677) on Wednesday April 06, 2005 @05:53AM (#12152225)
    This is dumb. Godel showed that there are non-standard models of arithmetic. This means that there are distinct, non-isomorphic sets of objects which satisfy the axioms of Peano arithmetic. There's no talk of formal systems in his proof.

    Turing applied Godel's work to Turing machines and showed that there are non-recursive functions. No talk of formal systems here.

    I have no idea why you're mentioning Cohen, since his most famous work had to do with showing that there are models in which the Continuum hypothesis fails and models in which the Axiom of Choice fails. Not related to formal systems at all. Just Zermelo-Fraenkel set theory.

    All of this is tangential. Assuming a generous reading of "formal system" as "formal logic", your alleged "limitations" of formal systems come down to a simple corollary of Godel's completeness theorem: given a fixed set of sentences S, there is a proof of A from S iff A is true in every model of S.
  • Re:Science by AI (Score:2, Insightful)

    by Poltras (680608) on Wednesday April 06, 2005 @06:07AM (#12152268) Homepage
    I think that even if we are taught the solutions in high school, some proofs in the future might not be provable by humans.

    We'll have to agree to the result and learn it, without knowing if it is true. Some proofs will become impossibly large and complex, and with them will come theorems that are more complex but still understandable... yet further proofs will be based on those theorems, and so on.

    So yes, I think the theorems proven this way will be taught in high school, but not the proofs themselves. After all, everyone has a limit; we just need to find the global limit of humankind, which comes closer and closer with the use of the computer.

  • by maxwell demon (590494) on Wednesday April 06, 2005 @06:17AM (#12152294) Journal
    Of course, then you still have to prove that your program for proof checking is correct.
  • Re:Science by AI (Score:5, Insightful)

    by rookworm (822550) <horace7945@@@yahoo...ca> on Wednesday April 06, 2005 @06:30AM (#12152328)
    I concur. Math will always be about insight. The best math is simple and shows why the result is true. Most mathematicians are unsatisfied by the four-colour proof because it does not satisfy these two conditions. Even if computers are eventually able to discover such proofs, mathematicians will still have to ask the computers to search for them. We must remember that problems like solving certain differential equations used to be difficult and involved, but now, thanks to computers, we don't have to worry about them as much. The same will apply for very specialized results. The big theorems will still be up to humans to prove. Think of computer-assisted math as a kind of spellchecker or Google Suggest. Computers replacing mathematicians completely is about as far off as computers replacing poets or historians.
  • by PaschalNee (451912) <pnee@toombe[ ].com ['ola' in gap]> on Wednesday April 06, 2005 @06:33AM (#12152334) Homepage
    "For every complex problem there is an answer that is clear, simple, and wrong." - H. L. Mencken
  • by gorrepati (866378) on Wednesday April 06, 2005 @06:43AM (#12152365) Homepage
    So basically what they are saying is that if the proof is too long to be checked, then it is flawed? WTF?

    Nope, that is not what they mean. If the proof cannot be verified, there is no guarantee that the proof is correct.
  • by g1t>>v (121036) on Wednesday April 06, 2005 @07:43AM (#12152563) Homepage
    In general, it is impossible to prove that such a 'formal logic validator' is correct since it is not possible to prove that an axiom system is correct inside that axiom system (one of Goedel's theorems). So if you would find a proof that your validator is correct, you'd have used reasoning techniques outside the logic of your validator, and do you believe those? (If so, why didn't you include them in the validator, since now your validator clearly does not support a reasoning technique you considered valid in the first place!)

    Basically, at a certain point, you just have to "believe" that your axioms, logic, whatever you call it, is consistent. Because to prove it, you'd again need axioms, a logic, etcetera, ad infinitum.
  • by T Rat (79448) on Wednesday April 06, 2005 @08:10AM (#12152660)

    Perhaps the most difficult part in my own work as a mathematician is understanding an area well enough to create meaningful, interesting conjectures. In many cases, the proof falls out immediately. This obviously isn't universally true (e.g. Fermat's Last Theorem), but the mathematicians that I have the most respect for are the ones who are able to see the connections between seemingly unrelated areas and ask the right questions to pull it all together.

    As others have said, computers may be able to verify an existing proof, but I think the real creativity and beauty in mathematics is often found in the initial formation of the conjecture.

  • by grumbel (592662) <grumbel@gmx.de> on Wednesday April 06, 2005 @08:14AM (#12152685) Homepage
    Why? I mean, you don't prove that the human verifying the proof is "correct" either. And, well, I for one think that humans make quite a lot more random errors than computers. So I don't see why you have to prove the computer correct, but not the human.

    One should of course ensure that the program is correct and all, as well as possible, but I don't see much difference between a proof verified by a bunch of independently written computer programs and a proof verified by a bunch of humans.

  • by TuringTest (533084) on Wednesday April 06, 2005 @08:35AM (#12152792) Journal
    Current AI doesn't show creativity at all. In theorem proving, 90% of the required creativity is already present in the stated goal (the theorem), the other 10% is the heuristics programmed to guide the automatic search.
  • by joss (1346) on Wednesday April 06, 2005 @08:40AM (#12152830) Homepage
    First off, Moore's law may not hold out for another 30 years, let alone 60.

    Secondly, how fast does software progress? Suppose we all had computers 60 billion times faster than we do now. What would we do with them? Run SWING-based Java applications with tolerable responsiveness, play solitaire faster, run Doom 5... [although the frame rate might be a bit low] OK... great.

    Intelligence and computing power are orthogonal concepts: suppose you communicated with aliens who were 100 light years away; would they be less intelligent because it took 200 years to get an answer? Anything you can do on today's supercomputers, you can do on a pocket calculator [with enough memory]... it just takes longer.

    Lastly, in the long run, computers won't outgrow our brains; they will be integrated with our brains.
  • by hey! (33014) on Wednesday April 06, 2005 @09:15AM (#12153030) Homepage Journal
    Mathematicians don't produce proofs. They produce insights that make proofs feasible.

    An insight that enabled a five page proof of Fermat's Last Theorem using nothing more than high school algebra would be a major result, even though much more complex proofs exist.
  • Re:Science by AI (Score:3, Insightful)

    by mdwh2 (535323) on Wednesday April 06, 2005 @09:35AM (#12153184) Journal
    It has been PROVEN (and it's a well-known fact) that it's impossible to create a Turing machine which will determine if a given expression is true or false

    Surely you mean it's impossible to create a Turing machine that will determine if all expressions are true or false - i.e., that there will always exist an expression that cannot be proved or disproved? I don't see how this prevents a computer from proving or disproving a statement where such a proof exists.

    Also, I don't see how a Turing machine is different to a human. We can't prove whether the continuum hypothesis is true or not in ZFC; that doesn't mean that we can't prove things at all.
  • Metaphor (Score:5, Insightful)

    by Lifewish (724999) on Wednesday April 06, 2005 @09:40AM (#12153209) Homepage Journal
    As a maths degree student I can confirm that a very large portion of mathematics is devoted to finding new metaphors and angles of attack for a given situation.

    This takes a ridiculous amount of pattern recognition skill (which is one area where computers tend to be outperformed by all comers) and the ability to find new ways to abstract data. A computer could possibly come up with an idea like more-than-3-dimensional space on its own, but I'd be very surprised if even the best one could think of something like topology or tensors on its own.

    Production of unusual metaphors for things we thought we knew is a major driving force for the most important mathematical developments. It's not something I can see computers managing at any time in the near future.
  • Learn this! (Score:3, Insightful)

    by Jagasian (129329) on Wednesday April 06, 2005 @09:55AM (#12153348)
    Anybody involved in computers needs to learn this one important fact: AI fanatics time and again overpromise and oversell their trade. Every few years, you get another AI snake oil salesman trying to claim that the turning point is just around the corner. But people who are actually educated in the problems at hand, those who have actually seen what AI has promised and what it can do, realize that it is all hype!

    Proof generation can only be partially automated. It still requires massive human intervention, from choosing what to prove to how to go about proving it. When you have computers automatically generating a proof of, say, Godel's Incompleteness Theorem, it is actually a computer that was fed an intricately encoded version of the theorem, along with some form of hint as to how to go about proving it.

    Not to mention that there is little use in a computer re-proving something that has already definitively been proven.

    The difficulty arises from the fact that there are absolute limitations to what computers can do. These limitations were proven many decades ago... BEFORE THE CREATION OF ELECTRONIC COMPUTERS! But to the ignorant, these theoretical limitations do not matter... simply because they do not understand how absolute they are.

    So again, learn this: you will time and again hear AI snake oil salesmen spouting off crap about automating mathematical research, automating programming, etc. (both are in a sense equally difficult, as proofs can be seen as programs and vice versa). In reality, at best, automated theorem proving is a tool that can help mathematicians and computer scientists do what they have been doing already. The level of automation varies, but in general is very low.

    A human still has to drive the entire process, similar to how a car automates walking, but doesn't automate where to go and how to get there.
  • Re:Science by AI (Score:5, Insightful)

    by tgibbs (83782) on Wednesday April 06, 2005 @10:45AM (#12153967)
    It has been PROVEN (and it's a well-known fact) that it's impossible to create a Turing machine which will determine if a given expression is true or false (see Incompleteness theorem [wikipedia.org] for details).

    This actually is more about the limitations of logic than the limitations of computers. Indeed, Godel's Incompleteness Theorem has nothing to do with computers - it is a proof that in any system of logic (that meets some very broad criteria) there must exist statements that are true but that cannot be derived from the postulates of the system by any sequence of logical steps. Adding additional axioms does not solve this; there always remain unprovable propositions. This limitation applies to proofs by humans as well as proofs by computers. However, the fact that there are some theorems that cannot be proved does not mean that there are not many others that can be.

    However, the fact that there are some truths that are literally inaccessible from the postulates certainly suggests that there may be others that are accessible only by a very large number of steps, effectively requiring computers. I wonder if anybody has ever attempted to prove this?
  • Re:Science by AI (Score:3, Insightful)

    by FuzzyDaddy (584528) on Wednesday April 06, 2005 @11:23AM (#12154421) Journal
    The point of the undecidability theorem is that mathematics is not merely a set of formal rules. Pre-Godel, there was a move (for example in the Principia Mathematica) to formalize the proof process and reduce mathematics to a mechanical exercise. Godel showed that a human intuition of what the symbols meant was meaningful.

    In the case of the continuum hypothesis, mathematicians are hoping to come up with an axiom that "sounds true" and makes sense, and that will settle the question. This is Godel's legacy - that we can think about whether an axiom is "right", and that mathematics is not purely an exercise in manipulating symbols by fixed rules.

    That's the difference between humans and today's computers. We can't prove the continuum hypothesis with the existing axioms either, but perhaps we can find a "good" axiom that will resolve the question.

    That being said, a great deal of mathematics is manipulating symbols by fixed rules. Originally we talked about "computations" - manipulating numbers. Then we moved on to algebraic equation solvers and other symbol manipulation (a la Mathematica or MathCAD), and have now extended it to the range of mathematical proof. This is an important change in mathematics, but not the end of human participation in it.

  • by PDAllen (709106) on Wednesday April 06, 2005 @11:46AM (#12154721)
    A computer can prove any theorem, if you give it long enough.

    Reason: a theorem is a statement which has a proof. A proof is a finite sequence of logical statements. You can order all sequences of logical statements first by the sum of the number of variables used and the number of statements, and then lexicographically; then simply have a computer run through all sequences of logical statements in that order, checking each one to see whether the statements in the sequence are either axioms or follow by the rules of logic from the axioms, and checking whether the final statement is the statement you want to have proved. The computer will eventually reach a proof of whatever theorem you asked it to prove. Note that if you asked it to prove a statement which is not a theorem, it will never stop. Goedel's theorem tells us that there are some statements where not only will the computer fail to prove the statement true, it will also never prove the statement false.

    The problem with trying to do this in practice is that for any interesting problem the shortest proof might not fit in a telephone directory - and the computer would be trying to find it by checking through all sequences of that size. It would take far too long.

    The theoretical problem you refer to is whether one can write a program which will correctly classify all statements as true or false - this is not possible; some statements are neither. If you allow it to classify as 'neither' as well, it is still not possible - the sketch above will never classify any statement as 'neither' because with such statements it never halts, for example. Turing proved that no program can correctly classify all statements.
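    The brute-force search described above can be sketched on a toy formal system. Below is a breadth-first search over derivations in Hofstadter's MIU system (my choice of example - not the lexicographic enumeration described above, but the same exhaustive principle): theorems are eventually found, while non-theorems such as "MU" would make the search run forever unless you bound it, exactly as the halting caveat says.

```python
from collections import deque

def successors(s):
    """All strings derivable from s in one step, under the four MIU rules."""
    out = set()
    if s.endswith("I"):                 # Rule 1: xI  -> xIU
        out.add(s + "U")
    if s.startswith("M"):               # Rule 2: Mx  -> Mxx
        out.add("M" + s[1:] * 2)
    for i in range(len(s) - 2):         # Rule 3: III -> U
        if s[i:i + 3] == "III":
            out.add(s[:i] + "U" + s[i + 3:])
    for i in range(len(s) - 1):         # Rule 4: UU  -> (deleted)
        if s[i:i + 2] == "UU":
            out.add(s[:i] + s[i + 2:])
    return out

def prove(target, axiom="MI", max_len=12):
    """Breadth-first search for a derivation of target, shortest first.

    max_len bounds string length so the search terminates even for
    non-theorems; an unbounded search would run forever on those.
    """
    queue = deque([(axiom, [axiom])])
    seen = {axiom}
    while queue:
        s, derivation = queue.popleft()
        if s == target:
            return derivation
        for t in successors(s):
            if t not in seen and len(t) <= max_len:
                seen.add(t)
                queue.append((t, derivation + [t]))
    return None  # no derivation found within the length bound

print(prove("MUI"))   # a genuine theorem of the system
print(prove("MU"))    # famously underivable: the bounded search gives up
```

    Even on this tiny system you can see the practical objection from the comment above: the state space grows explosively with string length, which is why "checking through all sequences" is hopeless for real theorems.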
  • Re:Science by AI (Score:2, Insightful)

    by Cyberax (705495) on Wednesday April 06, 2005 @11:50AM (#12154785)
    Yet we can prove that CH is _unprovable_ in ZFC, and this proof requires a whole new theory (theory of "forcing"). And it's impossible to automate the search for a new theory.

    Also, there is no general algorithm for proof searching: we can't just enumerate all true predicates in a given theory, and we can't (generally) determine if two predicates are equal.

    So computers are nowhere close to substituting for a human.
  • by danila (69889) on Wednesday April 06, 2005 @01:15PM (#12156029) Homepage
    Your belief in the POWER OF THE HUMAN BRAIN is really funny. In reality you don't "truly know what you're doing" - no human does. It's just that we have evolved neat modules that "pretend" to know what we are doing. There are millions of people with serious brain damage. There are hundreds of millions of people with minor brain damage. There are billions with poorly working brains. When we think we know what we are doing, we are just executing our complex survival programs.

    Read The Man Who Mistook His Wife For A Hat [amazon.com] or one of the many other amusing (and amazing) books about our imperfect minds. If there can be a guy (apparently self-conscious, intelligent and normal) who can't tell the difference between his life-long female companion and a hat, how can you be sure that a particular mathematician can tell the difference between a correct step and a marginally, deceptively incorrect one? The answer - you can't. To err is human. Human verification is not inherently better than computer verification, and in the future (as computers improve) it will gradually become worse.
  • by BritneySP2 (870776) on Wednesday April 06, 2005 @02:11PM (#12156776)
    long, boring, unclear proofs

    I do not suggest publishing only formalized proofs, or teaching mathematics using formalized proofs. But if you have a "library" of formalized theorems (similar to having a library of subroutines), then it should be possible to translate your proof into a formal text that is not long, boring or unclear. And, in the end, you could add your new proof to the library!

    I believe that in time it will become common practice to publish both an "informal" and a formal proof.

    The realization of the striking similarity between proofs and algorithms will eventually make the complete formalization a natural part of the mathematical culture.

  • by Anonymous Coward on Wednesday April 06, 2005 @02:29PM (#12156978)
    If all you need is a really small probability of being wrong, you may not need a proof. For example, if I wanted to show that (x+y)^2 - (x-y)^2 = 4xy, I could randomly choose say 10^8 different pairs of integers (x,y) and verify equality for every pair.

    A proof, OTOH, will show you (to a high degree of probability, since humans are fallible) not only that they are equal, but also *why* they are equal. *That* is the importance of proof!

    Re fallibility: About 1/3 of the mathematical papers that I've refereed have contained a fatal flaw. Just because someone asserts that they have a proof doesn't make it so.
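    The spot-check described above is easy to run (a sketch, with fewer trials than the 10^8 mentioned, just to keep it quick):

```python
import random

def lhs(x, y):
    return (x + y)**2 - (x - y)**2

def rhs(x, y):
    return 4 * x * y

# Verify the identity on many random integer pairs.  Passing every
# trial is strong evidence, but unlike a proof it never tells us *why*
# the two sides agree.
random.seed(0)  # reproducible runs
for _ in range(10**5):
    x = random.randint(-10**9, 10**9)
    y = random.randint(-10**9, 10**9)
    assert lhs(x, y) == rhs(x, y)
print("all trials passed")
```

    For polynomial identities this is on solid ground - the Schwartz-Zippel lemma bounds the probability that a nonzero polynomial evaluates to zero at random points - but the point about insight stands: the test never exhibits the expansion that makes the identity obvious.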
  • Re:Science by AI (Score:3, Insightful)

    by hawkfish (8978) on Wednesday April 06, 2005 @02:49PM (#12157183) Homepage
    Who's to say that neurons operate in the same way as a computer's multiple-add operations?
    Indeed. [kurzweilai.net]
  • by pfafrich (647460) <<gro.frusgnis> <ta> <hcir>> on Wednesday April 06, 2005 @03:23PM (#12157684) Homepage
    There is a journal called Experimental Mathematics which is devoted to using computers to do experiments in mathematics. This was mainly in the field of geometry, and the Geometry Center (www.geom.umn.edu) was a hub for this kind of work. There's also a growing field of mathematical visualisation.

    As an example of this technique, I took part in an investigation at Liverpool Uni of the local classification of maps from the plane to the plane. Here we were looking for special cases in a four-parameter family of maps, i.e. there were four numbers you could change, and each set of parameters gave a different mapping. Using visualisation programs on a computer I managed to find a special case which had not been found before. When I showed the pictures to the rest of the dept no one believed them, as they seemed to contradict the theory. Was it a bug in the software, a misleading picture, or a bug in the theory?

    In the end the experimental pictures proved to be correct and the theory was improved to account for them, involving lots of pen and paper mathematical work.

    This sort of investigation is a long way from the grand formal proofs in the article, but perhaps more indicative of the way that computers can be used to inform mathematics - more like the experimental physics/theoretical physics divide. In this case it was the generation of a counterexample.

    One of the big problems with using computers is infinity. Most interesting problems in mathematics involve an infinite number of cases. Fermat: a^n+b^n=c^n has four variables ranging over a countably infinite set (the integers). In the map case above it was four real numbers (uncountable infinities). Furthermore, theory had proved that the underlying problem (all possible maps), which involved a countably infinite number of parameters which could each take an uncountably infinite number of values, could be reduced to just four parameters. Even with Moore's law it's going to be hard to check all those cases. In the four colour theorem there is an infinite number of patterns to consider; the real smart bit in the theorem was showing that these could be reduced to just a finite number of special cases. In the classification of simple groups we have several infinite families of groups and a few (26) special cases.

    Infinity will also rear its head in formal methods - more to the point, combinatorial complexity. As the number of symbols in the formal sequence increases, the number of cases you need to consider grows rapidly; there are hints of NP-hard problems here and of primality testing. This makes things hard when you start with a few axioms and try to find all the logical consequences.

    Personally I don't have a problem with either of the proofs mentioned in the article. Both follow the same pattern: 1) reduce an infinite number of cases to a finite number of cases, 2) then check those using a computer. I can check the mathematics of 1) and write my own program to check 2). Hopefully the code's open sourced as well. That doesn't mean they're particularly satisfying or elegant proofs; 10,000 special cases does not really flip my mathematical switch. The classification of simple groups does have a more elegant result, and deeper consequences. Wiles's proof for Fermat, and the more interesting underlying conjecture, though long, is much more elegant and, more to the point, much deeper, opening up a whole new field.

    At the end of the day, it's another example of the typical stuff we get in the press. Nothing much new here. This debate over the four colour theorem's computer proof has been around for a good while now, and mathematics has for the most part gone on as normal.

  • by mesterha (110796) <mesterha.cs@rutgers@edu> on Wednesday April 06, 2005 @03:43PM (#12157950) Homepage

    (and fyi, not all truths in mathematics are provable!)

    This is wrong. It is based on a misunderstanding of Godel's proof that has been popularized by various authors.

    All truths in mathematics are provable. However, assuming our system of mathematical axioms is consistent, there are some statements that are neither true nor false. These statements have not been determined by the axioms. Furthermore, for any computable set of mathematical axioms we choose, there are always some statements which are undecided. Godel proved this by coming up with a technique that always generates one of these undecided statements.

  • Insight (Score:1, Insightful)

    by Anonymous Coward on Wednesday April 06, 2005 @04:53PM (#12158800)
    > Mathematical proofs should show short, clever ways

    Proofs give correctness. Good proofs give insight.

    Insight is what drives math forward; a clever and insightful proof is vastly better than a merely correct one.
