The End of Mathematical Proofs by Humans?

vivin writes "I recall doing a bunch of mathematical proofs when I was in high school. In fact, proofs were an important part of math in the CBSE curriculum in Indian schools. We were taught how to analyze complex problems and then break them down into simple (atomic) steps, much like the derivation of a physics formula. Proofs form a significant part of what mathematicians do. According to this article from the Economist, it seems the use of computers to generate proofs is causing mathematicians to 're-examine the foundations of their discipline.' However, critics of computer-aided proofs say that the proofs are hard to verify due to the large number of steps and hence, may be inherently flawed. Defenders point out that some proofs produced without computer aid are also rather large and hard to verify, like the classification of finite simple groups. Computer-aided proofs have been instrumental in solving some vexing problems, like the Four Color Theorem."
  • Science by AI (Score:2, Insightful)

    by MaDeR ( 826021 )
    I think that in the far future all science will be done by AI, because knowledge will be too complicated and complex to understand even for the most intelligent human, whether chemically boosted, genetically engineered, or stuffed with chips.
    • Re:Science by AI (Score:5, Interesting)

      by FleaPlus ( 6935 ) on Wednesday April 06, 2005 @03:45AM (#12152004) Journal
      This reminds me of a Nature paper from last year:

      Functional genomic hypothesis generation and experimentation by a robot scientist [nature.com]

      The question of whether it is possible to automate the scientific process is of both great theoretical interest and increasing practical importance because, in many scientific areas, data are being generated much faster than they can be effectively analysed. We describe a physically implemented robotic system that applies techniques from artificial intelligence to carry out cycles of scientific experimentation. The system automatically originates hypotheses to explain observations, devises experiments to test these hypotheses, physically runs the experiments using a laboratory robot, interprets the results to falsify hypotheses inconsistent with the data, and then repeats the cycle. Here we apply the system to the determination of gene function using deletion mutants of yeast (Saccharomyces cerevisiae) and auxotrophic growth experiments. We built and tested a detailed logical model (involving genes, proteins and metabolites) of the aromatic amino acid synthesis pathway. In biological experiments that automatically reconstruct parts of this model, we show that an intelligent experiment selection strategy is competitive with human performance and significantly outperforms, with a cost decrease of 3-fold and 100-fold (respectively), both cheapest and random-experiment selection.


      New Scientist also had an article on it: "Robot scientist outperforms humans in lab." [newscientist.com]
    • Re:Science by AI (Score:5, Insightful)

      by Zork the Almighty ( 599344 ) on Wednesday April 06, 2005 @03:46AM (#12152009) Journal
      Well, at least in regard to math, I strongly doubt that this will ever be the case. Mathematics is developed over decades and centuries. With a few notable exceptions, it doesn't just fall out of the sky in textbook form. Most areas of math started out as a gigantic mess (e.g. calculus, linear algebra, even geometry), and it has taken the work of countless researchers, authors, and teachers to distill and refine it. This process will continue, and it is inevitable that the subjects which baffle us today will be hammered out and taught to grade school students eventually. Well-developed theory makes mathematics easier, and this in turn fuels new discoveries.
      • Re:Science by AI (Score:5, Insightful)

        by rookworm ( 822550 ) <horace7945@@@yahoo...ca> on Wednesday April 06, 2005 @05:30AM (#12152328)
        I concur. Math will always be about insight. The best math is simple and shows why the result is true. Most mathematicians are unsatisfied by the four-colour proof because it does not satisfy these two conditions. Even if computers are eventually able to discover such proofs, mathematicians will still have to ask the computers to search for them. We must remember that problems like solving certain differential equations used to be difficult and involved, but now, thanks to computers, we don't have to worry about them as much. The same will apply to very specialized results. The big theorems will still be up to humans to prove. Think of computer-assisted math as a kind of spellchecker or Google Suggest. Computers replacing mathematicians completely is about as far off as computers replacing poets or historians.
      • Re:Science by AI (Score:5, Informative)

        by Cyberax ( 705495 ) on Wednesday April 06, 2005 @06:16AM (#12152457)
        Well, the Economist should learn some REAL math. The first thing they should learn is mathematical logic.

        It has been PROVEN (and it's a well-known fact) that it's impossible to create a Turing machine which will determine if a given expression is true or false (see Incompleteness theorem [wikipedia.org] for details).

        For example, it's impossible to find an answer to CH [wikipedia.org] (the continuum hypothesis) in ZFC (Zermelo-Fraenkel set theory plus the axiom of Choice).

        In short: some problems can't be solved in existing theories; they require creating new theories with new axioms. That's a non-formalizable process (this is also proven), so no computer can do it.
        • Re:Science by AI (Score:3, Insightful)

          by mdwh2 ( 535323 )
          It has been PROVEN (and it's a well-known fact) that it's impossible to create a Turing machine which will determine if a given expression is true or false

          Surely you mean it's impossible to create a Turing machine that will determine whether all expressions are true or false - i.e., that there will always exist an expression that cannot be proved or disproved? I don't see how this prevents a computer proving or disproving a statement where such a proof exists.

          Also, I don't see how a Turing machine is different
          • Re:Science by AI (Score:3, Insightful)

            by FuzzyDaddy ( 584528 )
            The point of the undecidability theorem is that mathematics is not merely a set of formal rules. Pre-Gödel, there was a move (for example in the Principia Mathematica) to formalize the proof process and reduce mathematics to a mechanical exercise. Gödel showed that human intuition about what the symbols mean is meaningful.

            In the case of the continuum hypothesis, mathematicians are hoping to come up with an axiom that "sounds true" and makes sense that will settle the question. This is Gödel's legacy -

        • Re:Science by AI (Score:5, Insightful)

          by tgibbs ( 83782 ) on Wednesday April 06, 2005 @09:45AM (#12153967)
          It has been PROVEN (and it's a well-known fact) that it's impossible to create a Turing machine which will determine if a given expression is true or false (see Incompleteness theorem [wikipedia.org] for details).

          This actually is more about the limitations of logic than the limitations of computers. Indeed, Gödel's Incompleteness Theorem has nothing to do with computers--it is a proof that in any system of logic (that meets some very broad criteria) there must exist statements that are true but that cannot be derived from the postulates of the system by any sequence of logical steps. Adding additional axioms does not solve this; there always remain unprovable propositions. This limitation applies to proofs by humans as well as proofs by computers. However, the fact that there are some theorems that cannot be proved does not mean that there are not many others that can be.

          However, the fact that there are some truths that are literally inaccessible from the postulates certainly suggests that there may be others that are accessible only by a very large number of steps, effectively requiring computers. I wonder if anybody has ever attempted to prove this?
          • Re:Science by AI (Score:3, Informative)

            by astar ( 203020 )
            This thread tends to confuse provability with truth, a distinction that Gödel clarified. Gödel's result applies to any interesting formal system, of which logic is one. The result is that in any interesting formal system, some things are unprovable but we believe them to be true, and some things are provable and their negation is provable also. So it is hard to make the further claim that any formal system can be true.

            The continuum hypothesis, due to Cantor, has been shown, partly by Gödel, to be consi
      • Re:Science by AI (Score:3, Interesting)

        by mangu ( 126918 )
        it has taken the work of countless researchers, authors, and teachers to distill and refine it.

        What you and others fail to grasp is that computers are evolving rapidly, human brains aren't. Our current computers are still far from having the data processing capability of a human brain.

        In rough orders of magnitude, a human brain has 1e11 neurons, with 1e3 synapses each, doing 1e2 operations per second. Considering that a neuron can be emulated by a multiply-add operation, we would need 1e16 such operation

        • Re:Science by AI (Score:3, Informative)

          by dynamol ( 867859 )
          It is quite a large step to go from raw computing power to intelligence. Sure, computers will eventually have way more processing power than we humans do... hell, they do right now if you assign them to a certain task, but that is a far cry from being intelligent. With that said, I do side with the camp that says computers will do most mathematical proofs in the coming decades... why? Because researchers will find a way to get computers focused on this task... and as I already mentioned, computers are way more pow
        • Re:Science by AI (Score:3, Interesting)

          by Sara Chan ( 138144 )

          In rough orders of magnitude, a human brain has 1e11 neurons, with 1e3 synapses each, doing 1e2 operations per second. Considering that a neuron can be emulated by a multiply-add operation, we would need 1e16 such operations per second to emulate a human brain.

          A 3 GHz Pentium can do 1e10 floating point multiply-add operations per second, so a human brain is roughly equivalent to one million desktop computers. Therefore, Moore's law tells us that we still need 30 years of progress before we have a huma
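
          Spelling out that arithmetic as a back-of-the-envelope Haskell sketch (the constants are the rough estimates above, not measurements):

            brainOps, cpuOps :: Double
            brainOps = 1e11 * 1e3 * 1e2 -- neurons * synapses each * ops per second = 1e16
            cpuOps   = 1e10             -- multiply-add operations per second for the 3 GHz Pentium

            -- Years to close the gap, at one doubling every 18 months (Moore's law):
            yearsToCatchUp :: Double
            yearsToCatchUp = logBase 2 (brainOps / cpuOps) * 1.5
            -- logBase 2 1e6 is about 20 doublings, i.e. roughly 30 years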

          • If this is right, then the 1e5 networked computers that are currently used by Google are a tenth of the way there.
            Tomorrow's Headline: Googlebot declares self world ruler, buys Graceland
        • by joss ( 1346 ) on Wednesday April 06, 2005 @07:40AM (#12152830) Homepage
          First off, Moore's law may not hold out for another 30 years, let alone 60.

          Secondly, how fast does software progress? Suppose we all had computers 60 billion times faster than we do now. What would we do with them? Run Swing-based Java applications with tolerable responsiveness, play solitaire faster, run Doom 5... [although the frame rate might be a bit low] OK... great,

          Intelligence and computing power are orthogonal concepts: suppose you communicated with aliens who were 100 light years away; would they be less intelligent because it took 200 years to get an answer? Anything you can do on today's supercomputers, you can do on a pocket calculator [with enough memory]... it just takes longer.

          Lastly, in the long run, computers won't outgrow our brains; they will be integrated with our brains.
        • On the flip side, according to my calculations, I lost e3 neurons, e1 synapses, and e0.5 operations per second to beer last weekend alone. That almost never happens to a computer.

        • Re:Science by AI (Score:4, Interesting)

          by Slime-dogg ( 120473 ) on Wednesday April 06, 2005 @07:51AM (#12152898) Journal

          Who's to say that neurons operate in the same way as a computer's multiply-add operations? Another little problem is that you'll need additional programming to tell the computer how to emulate the communication and interaction between neurons. I imagine that this would take far more processing power than we could ever achieve.

          We may be able to emulate the parts, but you can't just throw the parts together in a heap and expect it to work. The sum of the parts is far more complicated than the parts themselves.

        • ...you would also need a bunch of biologists that can tell you exactly how the brain works, and a group of computer programmers that can translate that into code. Also remember that a brain is a living organism where the neural pathways are changing. In a computer, you must either connect all-to-all, or have a computer that can rewrite its own data paths.

          Kjella
        • Metaphor (Score:5, Insightful)

          by Lifewish ( 724999 ) on Wednesday April 06, 2005 @08:40AM (#12153209) Homepage Journal
          As a maths degree student I can confirm that a very large portion of mathematics is devoted to finding new metaphors and angles of attack for a given situation.

          This takes a ridiculous amount of pattern recognition skill (which is one area where computers tend to be outperformed by all comers) and the ability to find new ways to abstract data. A computer could possibly come up with an idea like more-than-3-dimensional space on its own, but I'd be very surprised if even the best one could think of something like topology or tensors on its own.

          Production of unusual metaphors for things we thought we knew is a major driving force for the most important mathematical developments. It's not something I can see computers managing at any time in the near future.
  • by Suhas ( 232056 ) on Wednesday April 06, 2005 @03:31AM (#12151958)
    ...From TFA: if a computer is used to make this reduction, then the number of small, obvious steps can be in the hundreds of thousands--impractical even for the most diligent mathematician to check by hand. Critics of computer-aided proof claim that this impracticability means that such proofs are inherently flawed.

    So basically what they are saying is that if the proof is too long to be checked, then it is flawed? WTF?
    • by damieng ( 230610 ) on Wednesday April 06, 2005 @03:39AM (#12151981) Homepage Journal
      Yes, that's right.

      If you can't independently examine and verify your "proof" then how can it be considered proof of anything?
      • by Len Budney ( 787422 ) on Wednesday April 06, 2005 @08:28AM (#12153139)

        If you can't independently examine and verify your "proof" then how can it be considered proof of anything?

        That's easy. Speaking as a PhD mathematician, there's nothing disturbing at all about these computer proofs. They're examples in which a computer was programmed to generate a perfectly standard proof, except that it's extremely long.

        Checking the proof is not hard: it suffices to verify that the program emits only correct inferences. That's nothing more than a standard human-generated proof. In addition, a verifier can be coded by someone other than the original author, to check the validity of the inferences generated by the first program. The checker's algorithm can also be verified using a standard human proof, and would be used to confirm that a bug didn't result in an incorrect proof.

        Note that Gödel's incompleteness theorem has nothing to do with these programs: they don't generate all possible proofs. They only generate one specific type of proof per program. Each program is easy to verify.

        You could call the software correctness proofs "meta-proofs", but that's just being coy. They're perfectly legitimate proofs, and they are sufficient to prove the correctness of proofs generated by the program.
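
        To make that concrete, here is a minimal sketch in Haskell of what the trusted core of such a checker might look like (a toy of my own, not the actual provers' code: it assumes single-premise rewrite rules supplied as a function, which is enough to show the idea):

          -- A derivation is valid if every line is an axiom or follows from
          -- some earlier line by one of the (single-premise) inference rules.
          validDerivation :: Eq a => [a] -> (a -> [a]) -> [a] -> Bool
          validDerivation axioms rules = go []
            where
              go _ [] = True
              go earlier (l : ls) =
                (l `elem` axioms || any (\e -> l `elem` rules e) earlier)
                  && go (l : earlier) ls

          -- e.g. with the single axiom "a" and the single rule x -> x ++ "aa":
          -- validDerivation ["a"] (\x -> [x ++ "aa"]) ["a", "aaa", "aaaaa"] == True
          -- validDerivation ["a"] (\x -> [x ++ "aa"]) ["a", "aaaa"]        == False

        The proof-generating program can be arbitrarily hairy; only a few lines like these need to be trusted, and a second, independently written checker can confirm them.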

    • by wwahammy ( 765566 ) on Wednesday April 06, 2005 @03:40AM (#12151982)
      Flawed in the sense that it can't be peer reviewed to be "proven." It could be true, but because it can't be independently verified, it's not a proof, because you can't prove it. Now, whether this is truly a situation where a proof is unprovable, or just people reacting to the thought of their profession being eliminated by technology, is another debate entirely.
      • by lucason ( 795664 ) on Wednesday April 06, 2005 @04:45AM (#12152195) Homepage
        How can a human's review of a proof generated by a computer be considered "peer" review?

        Why not have it verified by other computers?
      • by Sique ( 173459 )
        It's not that easy. Fundamentally, the mathematical proof consists of something other than the actual running of the program.

        The question is whether the output of the program is equivalent to the problem you want to prove. Your proof actually consists of verifying that "Program Output is A" is true only if "Statement B is true" is true.

        (It's not necessary to also prove the reverse implication. "Statement B is true" doesn't need to imply that "Program Output is A". Imagine a program that prints out "A" if it is 8p
    • If it can't be checked, it's not really a "proof", is it? Now granted, that leaves a lot of room for leeway. For example, one could construct an algorithm for the problem and then prove that the algorithm is correct. There are other, more inventive possibilities too.
    • by MonMotha ( 514624 ) on Wednesday April 06, 2005 @04:08AM (#12152083)
      A mathematical proof is nothing but a manner of convincing someone (other mathematicians) that what you assert is indeed correct, given the following presumably known, accepted as true, or otherwise previously proven "things". The whole point of a proof is that someone else knows wtf it is talking about.

      I could assert that 2+2=4. You may believe me, but have I really proven it to you? Not yet, so you don't need to believe me. If I instead say that the cardinality of the union of two disjoint sets, each of cardinality 2, is 4, then I've (sort of) shown you that 2+2=4, assuming you accept my definitions of disjoint sets, set union, and set cardinality (which presumably I've proven elsewhere, or taken from some other source). Now do you believe me that 2+2=4?

      I could assert anything. You may or may not know if it's true. A proof is just something to back up my assertion and convince you that I'm right. Hence, if a proof is unintelligible, it's pretty darn worthless.
      • by A beautiful mind ( 821714 ) on Wednesday April 06, 2005 @05:50AM (#12152389)
        Um. A mathematical proof is just a sequence. It doesn't have to be accepted to be true. According to the axiomatic method of proving, you just need these things:
        • An alphabet (not THE alphabet, just one)
        • formulas
        • derivation rules
        An axiom is just a derivation rule which derives from the empty set.

        Basically, a proof, according to the axiomatic method, is just a finite sequence of formulas, each of which can be created by the allowed derivation rules. The whole point is that a proof for A HUMAN and a formal mathematical proof are different things. The axiomatic system is not perfect, either. The whole Hilbert [wikipedia.org] plan has been proven impossible to carry out, thus it is not possible to prove from within that there are no contradictions in the axiomatic system

        I think the "MUI" axiom system is commonly used to demonstrate how it works, basically. It is too lengthy, and i'm lazy.
      • by nikitad ( 693684 ) on Wednesday April 06, 2005 @05:57AM (#12152406) Homepage
        If mathematical proofs were "nothing but a manner of convincing someone", how could they be verified by a computer?

        Fully formal mathematical proofs depend on nothing but the ability to distinguish characters, to compare text strings, and to perform substring substitution.

        To take your example (2 + 2 = 4): in formal arithmetic, based on the Peano axioms, there is one primary operator, let's call it s:N -> N, where s(n) is interpreted as (n + 1). "2" is _defined_ as s(1), 3 is defined as s(2), and 4 is defined as s(3). So one has to prove that s(1) + s(1) = s(s(s(1))).

        By the definition of addition (remember, addition is not a fundamental notion in formal arithmetic; it's defined in terms of the s-operator), a + s(b) = s(a + b), and a + 1 = s(a), so we have

        s(s(1) + 1) = s(s(s(1))),
        s(s(s(1))) = s(s(s(1)))

        Q.E.D.

        So, where does the proof above depend on anything but mechanically verifiable string manipulations?

        P.S. Of course mathematical formulae are not strings but rather trees, but this doesn't change a thing.
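
        The derivation above transcribes directly into Haskell, which makes the "mechanical" point vivid (a sketch; Nat is just the unary encoding with 1 as the base case, as in the post):

          data Nat = One | S Nat deriving (Eq, Show)

          -- Addition exactly as defined above: a + 1 = s(a), and a + s(b) = s(a + b).
          add :: Nat -> Nat -> Nat
          add a One   = S a
          add a (S b) = S (add a b)

          two, four :: Nat
          two  = S One         -- 2 = s(1)
          four = S (S (S One)) -- 4 = s(s(s(1)))

          main :: IO ()
          main = print (add two two == four) -- True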
    • by spagetti_code ( 773137 ) on Wednesday April 06, 2005 @05:24AM (#12152314)
      Many proofs have been thought to have been found, only to be proven wrong years later. And that was only after years of study by the mathematicians of the time. For example, from the article: this particular puzzle was twice 'solved', only to be 'unsolved' 11 years later. Consider also Wiles' proof of Fermat's conjecture - the first version was found to be flawed and had to be redone.

      So the question is - how are we going to prove/disprove a computer program that proves a theorem? Program complexity has meant that all but the most trivial programs cannot be 'proven'.

      The solution, it seems to me, is per the article: get the s/w to output a series of steps in formal logic. Any series of formal logic steps should be confirmable by a 'formal logic validator', and that is the only program we need to prove correct. That will be non-trivial, but will then open the floodgates: any hacked-up piece of code can be used to generate checkable proofs.

      • by g1t>>v ( 121036 ) on Wednesday April 06, 2005 @06:43AM (#12152563) Homepage
        In general, it is impossible to prove that such a 'formal logic validator' is correct, since it is not possible to prove that an axiom system is consistent from inside that axiom system (one of Gödel's theorems). So if you found a proof that your validator is correct, you'd have used reasoning techniques outside the logic of your validator - and do you believe those? (If so, why didn't you include them in the validator, since your validator clearly does not support a reasoning technique you considered valid in the first place!)

        Basically, at a certain point, you just have to "believe" that your axioms, logic, whatever you call it, are consistent. Because to prove it, you'd again need axioms, a logic, etcetera, ad infinitum.
        • by PDAllen ( 709106 )
          It is very easy to produce a validator which correctly classifies valid and invalid proofs written in formal logic. It is very easy to prove that the validator does what it is meant to do.

          We don't know whether this abstract formal logic thing actually has any resemblance to the real world - that is not Gödel's theorem, which assumes logic to be valid and discusses potentially interesting frameworks for mathematics. Whether formal logic 'works' is simply something you have to believe; you cannot argue it,
  • Creativity (Score:5, Insightful)

    by Daxx_61 ( 828017 ) on Wednesday April 06, 2005 @03:33AM (#12151961) Homepage
    Much of mathematics isn't just grunt power, there is also a lot of creative work going on there. Without humans to drive the computers doing the work in the right directions, it could take a long time before a computer would be able to get its proof - it simply doesn't know what it is looking for.

    I for one welcome our new robotic theorem-proving overlords.
    • by R.D.Olivaw ( 826349 ) on Wednesday April 06, 2005 @03:43AM (#12151993)
      especially when we already know the answer it's looking for. 42
    • It's more like 90% pattern matching, 5% positive feedback effector systems and 5% adaptive selection of all engrams*.

      Source [bartleby.com]
      * A physical alteration thought to occur in living neural tissue in response to stimuli, posited as an explanation for memory.
      Um. Yeah. Or something like that.
    • Re:Creativity (Score:5, Insightful)

      by Zork the Almighty ( 599344 ) on Wednesday April 06, 2005 @04:10AM (#12152092) Journal
      Without humans to drive the computers doing the work in the right directions, it could take a long time before a computer would be able to get its proof - it simply doesn't know what it is looking for.

      To some extent that's true, except in areas where human understanding has reduced mathematical proof to a mechanical process. For example, verifying algebraic identities, or even geometric proofs. A more advanced example is the Risch algorithm for elementary integration. It amounts to a proof that an integral either is or is not expressible in terms of elementary functions. Eventually we come to understand an area to such an extent that we can implement mechanical algorithms and move on. The proper role of the computer is to carry out these algorithms, so that we can use them to discover something else.
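
      As a tiny example of proof-as-mechanical-process (my sketch, not from the post): representing polynomials as coefficient lists reduces checking an algebraic identity to computing and comparing normal forms.

        -- Polynomials as coefficient lists, lowest degree first.
        type Poly = [Integer]

        pAdd :: Poly -> Poly -> Poly
        pAdd xs ys = zipWith (+) (pad xs) (pad ys)
          where n = max (length xs) (length ys)
                pad p = take n (p ++ repeat 0)

        pMul :: Poly -> Poly -> Poly
        pMul xs ys = foldl pAdd [] [replicate i 0 ++ map (* x) ys | (i, x) <- zip [0 ..] xs]

        -- Verify (x + 1)^2 = x^2 + 2x + 1 by normalizing both sides:
        main :: IO ()
        main = print (pMul [1, 1] [1, 1] == [1, 2, 1]) -- True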
  • by John Allsup ( 987 ) <slashdot@nospam.chalisque.net> on Wednesday April 06, 2005 @03:34AM (#12151965) Homepage Journal
    If computers could produce short, sweet, beautiful proofs of interesting and useful theorems, I would welcome them with open arms.

    As a tool to produce vast quantities of precise logical porridge quickly, computers have no equal in today's world, yet that is not what real mathematical proofs should be about.

    Mathematical proofs should show short, clever ways of connecting otherwise disparate concepts that are only obvious in hindsight. This is where computers will always be weaker.
    • Mathematical proofs should show short, clever ways of connecting otherwise disparate concepts that are only obvious in hindsight. This is where computers will always be weaker.

      So you're saying that even a theoretical sufficiently advanced computer would be unable to match the logic and creativity of a human being? I think a simple brute force counter written in Mathematica (unlimited integer lengths) whose output was executed by a CPU would prove you wrong.

      Computers can separate wheat from chaff. That'

    • Mathematical proofs should show short, clever ways of connecting otherwise disparate concepts that are only obvious in hindsight. Modern mathematics is very complex; important and useful theorems proven recently have huge and difficult proofs, built from a lot of steps. Important results hardly ever have short proofs.
      Take a look at the concepts used in one of the most famous proofs of the 20th century: Fermat's last theorem [wikipedia.org]. Its proof followed from the Taniyama-Shimura theorem [wikipedia.org], which establishes non-obvi
    • Sometimes they do. In fact, some of the earliest theorem proving programs found new, short, and even elegant proofs of old theorems. However, after that the field didn't seem to advance very much for quite a while... Perhaps there have been some big breakthroughs lately? At least that's what this discussion seems to suggest.

      Actually, I was officially a math major for about a year (before my transition to the much easier computer science). Some parts of it were pretty simple, but the proofs always threw me

  • by Anonymous Coward on Wednesday April 06, 2005 @03:35AM (#12151969)
    Simply prove the program correct; and all its outputs will be correct?

    To do that.... well, just make sure the program was designed by a correct computer.

    • by rxmd ( 205533 ) on Wednesday April 06, 2005 @04:32AM (#12152156) Homepage
      Three major obstacles with this approach (which has been tried BTW):
      • Logical problems with proofs for correctness. For example, it has been proven that no program can prove itself correct.
      • Correctness proofs are hard to do and incredibly tedious. Have you ever tried it? And no, you can't have a program do them, because you'd have to prove this program correct, which sends you right back to square #1.
      • You'd have to prove all sorts of other factors correct, including the operating system and hardware your program is running on. This leads to another set of interesting problems, including that "correct hardware" is useful only as a theoretical concept. What's a "correct computer" if there's a small probability that bits will spontaneously flip in memory, for example?
      In short: while it might seem elegant to prove the prover, then have everything else proved by this prover, this approach has little value in practice.
      • by darkov ( 261309 ) on Wednesday April 06, 2005 @05:14AM (#12152286)
        I agree with (1); I believe Gödel had a hand in that one.

        With (2), the program can reduce the tedium of checking the original proof in some cases. That's what computers are good at, and better at than humans. Proving the program may well be much easier; I would think that's why the researchers in the article used computers in the first place. If you program in C++ you will have a problem, but a functional or logic language program is straightforward to prove (PROLOG programs are essentially the execution of a proof).

        With (3), you could show, by running it on different hardware and software, that with overwhelming probability the platforms did not affect the result. If you don't like the 'probability' bit, who says there isn't a human trait or gene that causes any human to get a proof wrong? Humans are imperfect too, but if enough of them agree, and they are qualified, then we agree that what they agree on is true. This is the same situation as the potentially flawed platforms problem.
  • by carnivore302 ( 708545 ) on Wednesday April 06, 2005 @03:41AM (#12151987) Journal
    In the past, I've used the HOL Theorem Prover [anu.edu.au]. It's a nice toy to play with if you want to get started in this area.
  • by Pants75 ( 708191 ) on Wednesday April 06, 2005 @03:41AM (#12151988)
    That's what my math teacher always said, back in school. I'm sure he would extend that statement to include proofs, given that they are also just math.

    What about Fermat's last theorem? Fermat wrote in the margin of his notebook that he had a proof, but it was too large to fit there, so he'd write it on the next page. Trouble was, the next page was missing from the book.

    The modern proof of FLT took hundreds of pages of dense math and went through some math concepts that AFAIK hadn't even been invented in Fermat's time.

    What was Fermat's proof (if it existed)? It would surely have been far more elegant than the modern version.

    That doesn't make the modern version wrong, just less pure, I feel.

    The problem with modern computer-aided proofs is that they allow the proof to become unwieldy and overly verbose, compared to what it would have to be if just a human produced it.

    Such is progress I guess.

    • A non-existent or incorrect proof can be very elegant, I suppose.
    • Perhaps his proof was flawed and he realised it, so he ripped the last page out?
    • by PaschalNee ( 451912 ) <pnee@toomb e o la.com> on Wednesday April 06, 2005 @05:33AM (#12152334) Homepage
      "For every complex problem there is an answer that is clear, simple, and wrong." - H. L. Mencken
    • by kisak ( 524062 ) on Wednesday April 06, 2005 @07:11AM (#12152667) Homepage Journal
      What was Fermat's proof (if it existed)? It would surely have been far more elegant than the modern version. That doesn't make the modern version wrong, just less pure, I feel.

      Mathematicians think they know what Fermat thought was the "proof" that he could not fit in the margin, since Fermat used a similar strategy for another problem. Euler was the one who used Fermat's strategy on Fermat's last theorem explicitly, and showed that it did not give a full proof as Fermat had hoped. It might be that Fermat himself tried and then gave up, or that he was happy to have "solved it" and looked for other things to prove.

      I think you (and most people) misunderstand the reason Fermat's last theorem has such a central place in math history. But first let's discuss why the problem became so well known: it is such an easy problem to state and understand, yet no one was able to use "simple" math to prove it. Even Fermat himself thought the problem should be fairly straightforward to solve, and it has made generations of people with a curiosity for math look for proofs, some even thinking they had found one. It is also a problem some of the greatest minds of math did not manage to solve. Fermat, Euler, Lagrange, Gauss, Abel, Riemann, etc. all had a try and did not solve it. Which math wannabe wouldn't want to solve something this group of people did not manage?

      Now, even though this has brought Fermat's problem a lot of publicity, the number one reason it has been important is all the beautiful maths discovered by mathematicians trying to solve it. Fermat's theorem in itself is not interesting. It is not like the Riemann hypothesis, which, if proven false, would make much of modern maths untrue or at least force mathematicians to look for new proofs. This is because you can prove a lot of interesting stuff if you assume the Riemann hypothesis is true; the problem, of course, is that it is not yet proven. If Fermat's theorem had been shown to be untrue, that would have been surprising, but it would not have made large parts of maths collapse.

      So, the modern proof of Fermat's theorem is the end of a long journey which has led to some very deep mathematical knowledge, and in a way, Wiles' proof is much more interesting in its own right than the fact that it also proves Fermat was right in his guess. A "simple" proof (watch out when mathematicians use the words simple or trivial) of Fermat's problem would give undeniable bragging rights, since you could say you solved a problem Gauss couldn't solve, using only the maths Gauss knew. But again, it would probably be more of a huge accomplishment for one person than a huge breakthrough in maths.

      A last comment: the reason Wiles' proof is long is not because math is verbose, far from it. It is because Wiles is able to connect what would seem to be two unconnected branches of mathematics, showing that problems in one of the branches can be restated as problems in the other. This is not something you do in a few pages. And its importance becomes clear if you consider that an unsolvable problem in one branch of maths might be reformulated as a solvable problem in the other. Math is always about trying to find ways to solve something as simply as possible. Not something computers are very good at, so no Abel prize for Big Blue for a while, I think.

  • by irexe ( 567524 ) on Wednesday April 06, 2005 @03:43AM (#12151991)

    Computer proofs, like the graph color proof, are not proofs that are completely generated by a computer. The computer is merely used to brute-force a fairly large number of 'special' cases which together account for all cases. The construction of the proof method is and will remain human work, lest we create AI that matches our own intelligence. In short, they are computer-aided proofs only.

    Further and more importantly, at this point we do not have and are not likely to have a machine that can prove any provable theorem (and fyi, not all truths in mathematics are provable!).

    • Then you'd better check your facts... there must be dozens (if not hundreds --- errr, speaking of checking one's facts :-) ) of systems in which mathematical theorems are routinely proved (HOL, PVS, Coq, Mizar, to name just a few)!
    • For a good introduction to the incompleteness of mathematical systems, people should really check out Gödel, Escher, Bach: An Eternal Golden Braid [forum2.org].
      This book basically describes Gödel's incompleteness theorem [miskatonic.org] in an entertaining way for a general audience.
    • (and fyi, not all truths in mathematics are provable!)

      This is wrong. It is based on a misunderstanding of Godel's proof that has been popularized by various authors.

      All truths in mathematics are provable. However, assuming our system of mathematical axioms is consistent, there are some statements that are neither true nor false. These statements have not been determined by the axioms. Furthermore, for any computable set of mathematical axioms we choose, there are always some statements which are und

  • by Mattb90 ( 666532 ) <matt@allaboutgames.co.uk> on Wednesday April 06, 2005 @03:45AM (#12152001) Homepage Journal
    During my preparation for my interview to study Mathematics at Cambridge last year, one of the discussion topics that came up was computer-aided proof. Between the interview-experienced teacher, a school colleague who was also applying, and myself, we came to the idea that computer-aided proof might not be proof at all, because proof for mathematicians is the ability to reproduce the workings yourself - it might take a very long time, but the idea should be that a human could derive them in order for them to be considered true proofs of human concepts. Whether many in the mathematical community in England take this view, I won't know until the end of the year, when I hope to start my course - but based on this debate, it wouldn't surprise me if quite a few did.
  • by Freaky Spook ( 811861 ) on Wednesday April 06, 2005 @03:45AM (#12152002)
    I remember how much I hated learning algebra, trig, calculus etc. & how much the theory sucked. I never saw any point to it & loved it when I discovered my TI-83 could do pretty much everything.

    Although I discovered easier ways to do the arithmetic, I still knew the underlying theory of the equations & what the numbers were actually doing, not just what a computer was telling me.

    Students should learn this; these are the basic building blocks of a science that dictates pretty much everything on this planet & although they won't have a use for everything they are taught, they will have enough knowledge to "problem solve", which is what most of high school maths is designed to do: it trains our brains to think logically & be able to work out complex problems.

    How are people going to be able to further physics, medicine, or biology if they get into their respective tertiary courses without understanding the basic principles of all science & have to learn it all over again??

    Or what about when computers just won't work & things have to be done by hand??

    It's fair to integrate computers into maths, but not at the expense of the theory that makes us understand how things work; we should not put all our faith in technology just because it's the easy thing to do.
  • Theorem Proving (Score:2, Interesting)

    by NitsujTPU ( 19263 )
    Theorem proving is NP-complete.

    FOL Theorem provers jump through a number of hoops to make the whole bit a little more practical, but realistically speaking, having a computer that just runs through mathematical proofs in the manner that a human does is a long way off.

    An interesting thing about the article is that the first proof was done with an FOL prover... it was a long, non-intuitive proof, but an FOL prover has performed that proof.

    The second was done with code, a human had to write the program. Th
  • Elegant proofs (Score:2, Interesting)

    I'm certainly not a very gifted mathematician, but I'd like to think my grasp on it is at least adequate to make this point: from the proofs I've seen and understood, it seems that while the shorter and more elegant proofs may not strictly be more "truthful" than complicated ugly ones, the important thing is they are so much easier to understand, verify, and explain to others. For instance, I remember being especially charmed by Cantor's diagonal argument and will not forget that particular method of reaso
  • unverifiable (Score:3, Interesting)

    by StrawberryFrog ( 67065 ) on Wednesday April 06, 2005 @03:54AM (#12152038) Homepage Journal
    However, critics of computer-aided proofs say that the proofs are hard to verify due to the large number of steps and hence, may be inherently flawed.

    Nonsense. If people can't verify proofs, then what you need to do is to verify them by computer :)

    I'm not entirely joking here - so long as the verification program is written independently of the theorem-proving program, and it can reliably distinguish between valid and invalid logical inferences, what's the problem? It's simple automated testing, really.
  • Nothing too new? (Score:2, Informative)

    by xiaomonkey ( 872442 )
    Using computer programs to prove theorems in mathematics is as old as the field of artificial intelligence itself. In fact, some of the initial excitement around AI originated from the existence of such programs as Newell and Simon's Logic Theorist and its successor, the General Problem Solver [wikipedia.org].

    IIRC - back in the early days of AI (1960s), some people in the field thought that in relatively short order computers would be a major source of such mathematical proofs. It hasn't happened yet, but that doesn'
  • by geekpuppySEA ( 724733 ) on Wednesday April 06, 2005 @04:11AM (#12152096) Journal
    However, critics of computer-aided proofs say that the proofs are hard to verify due to the large number of steps and hence, may be inherently flawed.

    Because if there's one thing that humans are better at than computers, it's performing large numbers of repeated steps. Flawlessly.

    • by mcc ( 14761 ) <amcclure@purdue.edu> on Wednesday April 06, 2005 @04:53AM (#12152224) Homepage
      One could argue that if a proof ever contains repetitive elements, then this is a bad thing and you want to be adding some sort of abstraction to the proof anyway.

      Anyway, the problem isn't the computer's ability to perform flawlessly; the problem is our ability to accurately specify to the computer what we want it to do. It's the whole "fast-working idiot" thing: mechanical perfection isn't much good if we wind up directing the computer to perfectly, flawlessly do the wrong thing very quickly. We have enough trouble convincing ourselves that real-world software is going to do what we wanted it to after it compiles; and in that case we at least have the advantage that we can run it and test it to see if it does what we expect. With software-generated proofs, not so much, since the program IS the test.

      I think computer-aided logic can be useful if we just think of a proof-generating software program as a funny, mechanically verifiable sort of abstraction, but if you find yourself making an argument that rests on assuming that a computer program you wrote does what you think it does, then this is problematic.
  • by Fry-kun ( 619632 ) on Wednesday April 06, 2005 @04:18AM (#12152115)
    To the opponents of computer-aided proof (with their hard-to-check argument), I would say this:
    It's easy to check a proof. It's hard to come up with a proof. Computers are great at checking proofs - all the program needs to do is verify whether each step is logically correct or there's a discrepancy. Coming up with a proof, on the other hand, is a very hard task (NP-complete, unless defined in a certain way) and thus usually requires a human (or sometimes, a lot of humans) to work on the problem.
    A computer would not be able to come up with new principles of mathematics in order to tackle a given problem; it would only try to use every trick that had been discovered up to the point the program was created. (Of course that doesn't have to be the case, but my point is that human intervention would be required to "teach" the computer about a new concept so that it would try to use it in the proof.)
    That is not to say that computers are useless in proofs. Obviously, they're often used as assistants in proving something-or-other, but there's also a direction in computer science where a computer takes a program that you wrote in a certain manner and proves certain properties about it, e.g. that it is not possible to go out of array bounds in your C program...
    *yawn*
    time to sleep
  • by nigham ( 792777 ) on Wednesday April 06, 2005 @04:24AM (#12152135) Homepage
    If you take a grad school AI course, they'll make you do proofs the way a computer does them... maybe using propositional logic. The idea is to break up the problem into a set of statements that looks quite ridiculous (e.g. NOT engine AND train AND NOT moving), and then take pairs of these statements and mix and match. The result is that you determine your sequence of steps by simple trial and error, or by trying to combine the propositional symbols (AND, NOT etc.) and the variables (train etc.). Once you generate a proof, it's just a list of such statements which evaluates to a FALSE or a TRUE value, but if you want to understand the proof, it's hopeless.

    I doubt the human proof system will go away completely - even if we can check nasty theorem proofs using computers, we still need humans to sit and explain what they mean.
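
    (The "mixing and matching" above is the resolution rule. Here is a minimal sketch of a single resolution step in Haskell - my toy encoding, not from any particular course - with literals as signed atoms:)

      -- A literal is an atom with a sign; a clause is a disjunction of literals.
      type Lit = (Bool, String)
      type Clause = [Lit]

      -- Resolve two clauses on an atom occurring positively in one and
      -- negatively in the other. Deriving the empty clause [] means the
      -- original statements were contradictory (proof by refutation).
      resolve :: String -> Clause -> Clause -> Clause
      resolve atom c1 c2 =
        filter (/= (True, atom)) c1 ++ filter (/= (False, atom)) c2

      main :: IO ()
      main = print (resolve "engine" [(True, "engine"), (True, "moving")]
                                     [(False, "engine"), (True, "train")])
      -- [(True,"moving"),(True,"train")], i.e. "moving OR train"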
  • by ceeam ( 39911 ) on Wednesday April 06, 2005 @04:48AM (#12152207)
    What do they use in this field? Do they still use Prolog, or has something newer been invented? Or do they do what businesses do and use whatever comes handy or whatever is the current fad?
    • by Maljin Jolt ( 746064 ) on Wednesday April 06, 2005 @05:28AM (#12152324) Journal
      The functional language Haskell is the tool of the day. However, there is nothing in Haskell that could not be done in Prolog with a proper problem-domain language defined by a grammar. Haskell is just fancy-looking out of the box. Unlike Lisp.
      reverse :: [a] -> [a]
      reverse = foldl (flip (:)) []
      Applied to a finite list of any type, it returns a list of the same elements in reverse order.
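      (In GHCi, for instance, reverse [1,2,3] evaluates to [3,2,1]: foldl walks the list left to right, consing each element onto the front of an accumulator that starts out empty.)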
    • Actually, the proofs under discussion are not generated by a computer, nor do they use "computer reasoning systems" like Prolog. These are computer-aided proofs, where the outline of the general argument is constructed by a (human) mathematician, but some of the details are reduced to computations, which are done by computer. I don't know in what language these particular computations were done, but I know that the computations for the computer-aided proof of the double-bubble conjecture [ucdavis.edu] were done in C++, a
  • I love proofs (Score:3, Interesting)

    by n-baxley ( 103975 ) <nate@baxle y s . org> on Wednesday April 06, 2005 @07:04AM (#12152634) Homepage Journal
    I loved doing proofs in my high school math classes. I even had my own book of axioms, thereby cementing my destiny to be reading /. someday. I didn't become a mathematician, but the merger of logic and creativity was so appealing. I wish I had been able to take it further, but unfortunately calculus came along and whipped my butt, so instead of an engineer, I became a programmer. Go figure.
  • Isaac Asimov story (Score:5, Interesting)

    by Conspiracy_Of_Doves ( 236787 ) on Wednesday April 06, 2005 @07:20AM (#12152715)
    I recall a story I once read by Isaac Asimov about a future culture where all knowledge of mathematics has been lost to humans, who have to rely on computers and calculators to do even the simplest math problems (older computers make the new computers, and humans are left completely out of the process).

    A janitor at a science lab rediscovers the 'ancient knowledge' on his own. The military quickly gets hold of it and immediately puts it to use in weapons research, whereupon the janitor promptly takes his own life in shame.

    Anyone think there might be a future where humans rely on computers so much that they don't bother learning math at all any more?
  • by hey! ( 33014 ) on Wednesday April 06, 2005 @08:15AM (#12153030) Homepage Journal
    Mathematicians don't produce proofs. They produce insights that make proofs feasible.

    An insight that enabled a five-page proof of Fermat's Last Theorem using nothing more than high school algebra would be a major result, even though much more complex proofs exist.
  • Learn this! (Score:3, Insightful)

    by Jagasian ( 129329 ) on Wednesday April 06, 2005 @08:55AM (#12153348)
    Anybody involved in computers needs to learn this one important fact: AI fanatics time and again overpromise and oversell their trade. Every few years, you get another AI snake oil salesman trying to claim that the turning point is just around the corner. But people who are actually educated in the problems at hand, who have actually seen what AI has promised and what it can do, realize that it is all hype!

    Proof generation can only be partially automated. It still requires massive human intervention, from choosing what to prove to deciding how to go about proving it. When you have computers automatically generating a proof of, say, Gödel's incompleteness theorem, it is actually a computer that was fed an intricately encoded version of the theorem, along with some form of hint as to how to go about proving it.

    Not to mention that there is little use in a computer re-proving something that has already definitively been proven.

    The difficulty arises from the fact that there are absolute limitations to what computers can do. These limitations were proven many decades ago... BEFORE THE CREATION OF ELECTRONIC COMPUTERS! But to the ignorant, these theoretical limitations do not matter... simply because they do not understand how absolute they are.

    So again, learn this: you will time and again hear AI snake oil salesmen spouting off crap about automating mathematical research, automating programming, etc. (Both are in a sense equally difficult, as proofs can be seen as programs and vice versa.) In reality, at best, automated theorem proving is a tool that can be used to help mathematicians and computer scientists do what they have been doing already. The level of automation varies, but in general it is very low.

    A human still has to drive the entire process, similar to how a car automates walking, but doesn't automate where to go and how to get there.
  • by Doug Dante ( 22218 ) on Wednesday April 06, 2005 @09:05AM (#12153466)
    This is in regular human talk. It's not a true proof.

    Imagine that you have a map that has five colors where every color is touching every other color.

    It's on a piece of paper beneath a coffee can.

    One color must be on the edge. There may be only one color, or there may be many, but at least one will exist.

    Move the coffee can until you see that color. Let's say red.

    Now imagine taking a marker and covering the rest of the page in red. (Alternatively, stretching the red area around the rest of the page).

    Now you have four colors that are all touching one another, and that are all touching the outer red page.

    Imagine that you have a tetrahedron. This is a 3D shape which has four sides. Imagine that each side of the tetrahedron has a color. Say, green, white, blue, and yellow.

    Imagine that you poke a hole in the tetrahedron and spread it out flat - placing it on the red piece of paper.

    No matter where you poke the hole, all four sides will touch one another.

    However, you must poke a hole so that all four colors are on the border, i.e. so that they all touch the red paper.

    Look at the green side. All of the sides are equivalent.

    If you poke a hole in one of the corners, the opposite side will be in the center when you lay it down, and it won't touch the red. It won't work.

    If you poke a hole on one of the edges, it's even worse. Two of the opposite sides will be in the center, and neither will touch the red. Again, it won't work.

    Worst of all, if you poke a hole directly in the center of the green side, then all three other sides will be in the center when you lay down the tetrahedron, and none of them will touch the red. That won't work either.

    Therefore, there's no way to poke a hole such that all four sides will touch the red when the tetrahedron is laid down, so it was impossible to have a map with 5 colors, all of which were touching one another, in the first place.

    Came up with this in 1990. Never formalized it mathematically. It may not be a proof, but it makes logical sense to me.

    doug@dougdante.com

"Pok pok pok, P'kok!" -- Superchicken

Working...