
Are Computers Ready to Create Mathematical Proofs?

Posted by michael
from the GIGO dept.
DoraLives writes "Interesting article in the New York Times regarding the quandary mathematicians are now finding themselves in. In a lovely irony reminiscent of the torture, in days of yore, that students were put through when it came to using, or not using, newfangled calculators in class, the Big Guys are now wrestling with a very similar issue regarding computers: 'Can we trust the darned things?' 'Can we know what we know?' Fascinating stuff."
  • by dolo666 (195584) on Tuesday April 06, 2004 @10:01PM (#8787939) Journal
    > 'Can we know what we know?' Fascinating stuff.

    Reminds me of Rumsfeld [about.com]... "Reports that say that something hasn't happened are always interesting to me, because as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns -- the ones we don't know we don't know."
    • by Anonymous Coward
      ???
      BOOM!

      "Cleanup in aisle 10"
    • Are you sure?

      KFG
    • Re:Rumsfeld, anyone? (Score:5, Informative)

      by Tower (37395) on Tuesday April 06, 2004 @10:48PM (#8788316)
      Actually, this is three of the four quadrants of knowledge...

      KK | KD
      ---+---
      DK | DD

      So, you can:
      Know that you Know something
      Know that you Don't Know something (the second most common)
      Don't Know that you Don't Know something (most things fall in this category for most people)
      Don't Know that you Know something (the most interesting of the categories)

      Big, huge, red tape operations can easily fall into the latter category (DK)... since it is rather easy for a group to obtain knowledge, yet be unaware of it [political commentary omitted].
      • by Deraj DeZine (726641) on Tuesday April 06, 2004 @11:11PM (#8788464)
        Well, I had been in the don't care that I don't know category; now I'm in the don't care that I know. Great.
      • Re:Rumsfeld, anyone? (Score:5, Interesting)

        by cs (15509) <cs@zip.com.au> on Tuesday April 06, 2004 @11:35PM (#8788631) Homepage
        Can't remember where I came across this, but for entertainment:

        Men are four:
        He who knows and knows that he knows; he is wise, follow him.
        He who knows and knows not that he knows; he is asleep, wake him.
        He who knows not and knows that he knows not; he is ignorant, teach him.
        He who knows not and knows not that he knows not; he is a fool, spurn him!

        I'm sure it's coloured my attitude to supporting users:-)
      • Re:Rumsfeld, anyone? (Score:5, Interesting)

        by sbaker (47485) on Wednesday April 07, 2004 @12:35AM (#8789028) Homepage
        I'm not sure that there are only those four categories.

        Gödel's theorem pretty much says that there are things that you can NEVER know are true or false...and that in some cases you can PROVE that you can never know them.

        This leads to some other states that your knowledge can be in:

        CERTAINTY OF UNCERTAINTY: In some cases you can know (by solid mathematical proof) that you can never know some particular thing. For example, we know with absolute certainty that you can't solve the 'Halting Problem' (to prove whether an arbitrary computer program will eventually halt or whether it'll run forever). That's something we KNOW we'll never be able to do no matter how smart we get. That's not the same as KK/KD/DK or DD. It's tempting to lump this in with KD - but in the case of simply being aware that we are ignorant of something, we might take steps to resolve that ignorance. In this case, we know that this is something we cannot EVER know.

        CERTAINTY OF POTENTIAL CERTAINTY: If you know the current "largest prime number" - then by definition, you don't know of a larger prime - but you DO know that you could definitely find a larger prime if you really wanted to.

        UNCERTAINTY OF POTENTIAL CERTAINTY: Then there are other things that you might not know whether they are knowable or not - such as the proof of some of the classic mathematical problems. Before Fermat's last theorem was proved, nobody was sure whether it could be proved or not.
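The certainty-of-uncertainty point about the Halting Problem can be sketched concretely. Here is a toy Python rendering of the classic diagonal argument (the names `diagonal`, `would_halt`, and `g` are invented for illustration; this is a sketch of the proof idea, not a formal treatment):

```python
def diagonal(would_halt):
    """Given any claimed halting decider, build a program it must misjudge."""
    def g():
        if would_halt(g):     # decider says g halts...
            while True:       # ...so g loops forever
                pass
        return "halted"       # decider says g loops, so g halts at once
    return g

# Whatever the decider answers about g, g does the opposite.
# Here the decider claims g never halts:
g = diagonal(lambda prog: False)
print(g())  # yet g halts immediately, printing "halted"
```

Since every candidate decider is wrong on its own diagonal program, no correct decider can exist - which is exactly the "know that we cannot EVER know" case.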
      • by benna (614220)
        There was a young man who said, "though
        it seems that I know that I know,
        what I would like to see
        is the I that sees me,
        when I know that I know that I know."

        Great poem, Alan Watts used to quote it a lot when talking about the illusion of the ego.
    • by Anonymous Coward
      "I don't know, a proof is a proof. What kind of a proof is a proof? A proof is a proof and when you have a good proof it's because it's proven."

      (PM Jean Chretien [bayrak.ca], when asked what kind of proof he would need of weapons of mass destruction in Iraq before deciding to send Canadians along on the Bush invasion-September 5th on CTV news)
  • by Anonymous Coward
    ...precisely why it's so hard to see that a pyramid of cannonballs is an optimal stack? This seems almost axiomatic.

    I guess the obvious Monte Carlo simulation doesn't constitute "proof," but still, what exactly is the big question here?
    • by stephentyrone (664894) on Tuesday April 06, 2004 @10:11PM (#8788019)
      It's not hard to *see*. It's hard to *prove*. Very little of mathematics is consumed with proving deep, mystical statements that no one would ever anticipate to be true. Much (maybe most) of mathematics is built around proving (relatively) obvious things. Why bother? Because sometimes, relatively obvious things turn out to be false, and there's no way to know that they won't until you've proved them true. In general, showing that discrete (or semi-discrete) phenomena are optimal is fairly tricky; you can't just appeal to calculus to optimize some function. You often have to somehow break the search space up into a bunch of disjoint cases that span all the possibilities, and be able to prove that they span all possibilities. Then, if you're lucky, you can use some kind of calculus-type argument on the continuous spaces you're left with.
    • Sometimes a precise mathematical analysis can lead to startling insights. Perhaps there are multiple solutions equally optimal. Perhaps it gives a general algorithm that works for other shapes.
    • Most people just get hung up on the term "brass monkey" and snicker.
    • What if the stack is very large, for instance? Is it still obvious that the pyramid is best? Consider a similar problem, that of packing squares in a rectangle. When the rectangle is small, it is obvious that the best way is to pack them together starting from one corner, leaving a little wastage on the opposite two sides. It seems strange, but it is true, that when the rectangle is very large, this strategy is not optimal. More squares can be packed when they are not packed in such a regular pattern.
    • Definition of optimal: At least as good as EVERY other possible configuration of (in this case infinitely many) oranges in 3-space.

      If by "see" you mean "conjecture" then, yes, it is easy to see. If by "see" you mean "prove", and thereby catch a glimpse into the fundamental reasons WHY this is the best possible stacking, then it is exceedingly difficult.

      How would you go about designing a proof that would take into account every single one of the infinitely many configurations of oranges? Could you even l
  • Create vs. Verify (Score:5, Informative)

    by Squeamish Ossifrage (3451) * on Tuesday April 06, 2004 @10:03PM (#8787954) Homepage Journal
    The headline does a slight disservice in describing the article that way: Whether or not computers can create proofs isn't an issue. The problem comes when the resulting proof is too involved to be verified by a human, and so the computer's work has to be trusted.
    • by Vancorps (746090) on Tuesday April 06, 2004 @10:10PM (#8788014)
      If humans created the computer to do the task, should they not trust that it would do the task and do it well? Perhaps, perhaps not.

      A computer could quite easily come up with a very complex answer simply because it can do more calculations in a given second than a human can. Of course humans take in a lot more variables at a given time, so the comparison actually runs the other way, but I'm sure you get my point.

      I think you test it with progressively more different problems, if the answers come out precise and accurate then you can build your level of trust in the system. Kind of like the process of getting users to trust saving to the server after a bad crash wiped everything because the previous admin was a moron.
      • by panaceaa (205396) on Tuesday April 06, 2004 @10:59PM (#8788389) Homepage Journal
        Fortunately for you software researchers haven't programmed computers to create their own long sentences with so many prepositions that human readers of the created sentences are unable to remember the subject or figure out which verb, or possibly adjective, the trailing adverb applies to by the time they have read the entire sentence yet!
    • indeed (Score:5, Insightful)

      by rebelcool (247749) on Tuesday April 06, 2004 @10:22PM (#8788091)
      On a similar topic, today I attended a lecture by Tony Hoare [microsoft.com] on compilers that can generate verified code and tools that guide the human programmer into designing programs that can be easily validated (from the compiler's stance). One very good question raised afterwards was: well, how do you know you can trust the compiler generating the verified program?

      Though Dr. Hoare danced around that question a little, presumably that aspect of the project would have to be done by hand, a monumental task to say the least.

      • Re:indeed (Score:5, Informative)

        by Squeamish Ossifrage (3451) * on Tuesday April 06, 2004 @11:50PM (#8788742) Homepage Journal
        At least some provable properties can be "pushed" through the compilation process all the way to the resulting object code. If you're interested, you can look into proof-carrying code and typed assembly language (papers by Necula, Appel, Walker, Zdancewic, Crary and a cast of thousands.)

        The resulting proofs are still hairy enough that they have to be checked by machine, but the size and complexity of the proof-checker is much less than that of the compilation toolchain. That means that while there's still some code that has to be trusted, it's much less. Here's my informal scariness hierarchy:

        Normal model (you have to trust everything) > type safe languages (you have to trust the compilers / interpreters) > proof-carrying code (you have to trust the proof-checker*).

        If you haven't already, you should definitely read Ken Thompson's Turing Award lecture, "Reflections on Trusting Trust" here [uoregon.edu].

        * - Pedantry point: If you're talking about Necula's original PCC work, you also have to trust the verification condition generator, which is some fairly deep voodoo. Appel's Foundational PCC addresses this to a significant extent.
        • Re:indeed (Score:5, Interesting)

          by maxwell demon (590494) on Wednesday April 07, 2004 @05:30AM (#8790287) Journal
          While the article speaks about intentional errors, I could imagine the same to be true for bugs (while introducing a self-replicating bug by accident is unlikely, I'm not sure if it's unlikely enough to never occur).

          Say, an optimizer is buggy. Say, in a certain line of code in the compiler, there's a '>' instead of a '<' sign. Unfortunately, this causes, under certain, rare conditions, the optimizer to miscompile '<' into '>'. Now, since triggering that bug is rare, it goes unnoticed for a while, and the buggy compiler is installed as default compiler for further development.

          Now, at some later time, the bug is found, a developer looks at the code and finally finds the error. He replaces the '>' by '<' in the hope of fixing the bug. Unfortunately, he doesn't recognise that this very condition now triggers the bug, having the compiler internally replacing '<' with '>' again, replicating the bug again.

          Now, the test case will show that the bug is still there, but the developer will simply not find the bug in the code. After all, the bug isn't in the source any more. It's only there in the compiled program. At some later time, though, it might magically disappear by some unrelated change nearby, which causes the bug not to be triggered any more.
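The scenario above can be caricatured in a few lines of Python. This is a deliberately crude sketch: a real compiler is not a per-line string rewriter, and the trigger condition here is invented purely to make the point visible:

```python
def make_buggy_compiler():
    """Toy model of the self-perpetuating-bug scenario (invented for
    illustration, not real compiler behavior)."""
    def compile_line(line):
        # The rare trigger under which the broken optimizer
        # miscompiles '<' into '>':
        if "rare_condition" in line:
            return line.replace("<", ">")
        return line
    return compile_line

installed = make_buggy_compiler()

# The developer fixes the comparison in the source, but has to build
# it with the installed (buggy) binary, which silently reintroduces
# the bug into the compiled output:
fixed_source = "if rare_condition and a < b:"
print(installed(fixed_source))  # if rare_condition and a > b:
```

The fixed source looks correct on inspection, yet the artifact it produces still misbehaves - which is why the bug "isn't in the source any more."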
    • The problem comes when the resulting proof is too involved to be verified by a human, and so the computer's work has to be trusted.

      When something is too involved for a human to accomplish -- isn't this what we have computers for? So the solution is simple. Produce another system to verify the first system's proof. Ideally, this second system should be built by a group completely separate from the first so as not to inherit any design flaws or false assumptions.

      The same methodology could be applied to

    • "The problem comes when the resulting proof is too involved to be verified by a human, and so the computer's work has to be trusted."

      Just to clarify, do you mean the computer's accuracy in moving the bits around, or do you mean the human who wrote the program didn't make any mistakes?
    • by cgenman (325138) on Tuesday April 06, 2004 @10:43PM (#8788278) Homepage
      Don't we already have tons of resources devoted to verifying the accuracy of a computing environment? If we verify the logic and accuracy of the computer, much like the axioms of mathematics, cannot we then say that the resulting proof must be valid?

      Of course, if it can't be understood by humans, it doesn't really help anything, as one of the main points of proving things is to gain insight into how to prove other things. But wouldn't building the system to prove proofs be itself a different but valuable tool?

  • by FFFish (7567) on Tuesday April 06, 2004 @10:04PM (#8787958) Homepage
    Depends whether it's a Pentium with an FDIV bug, I imagine...
    • by Anonymous Coward
      You know what? I work for Intel. My company paid 1/2 billion dollars replacing Pentium chips with that bug that nobody would ever notice except maybe six people. Software vendors get away with murder. They don't even have to replace their product even if it is not fit for its stated purpose. The FDIV bug was a typo in some kind of table. Doesn't that make it software in a way? One little bug years ago and still people laugh about it (SCORE:4, Funny).
  • by Anonymous Coward on Tuesday April 06, 2004 @10:04PM (#8787959)
    I think the issue will become: can we learn anything on our own? We don't want to rely on impossibly long proofs. My calc teacher hates us using calculators; she thinks it will be the end of calculus as we know it.
    • My calc teacher hates us using calculators...

      So do I. They should be banned from school, just like guns. It's one thing to use them in business for quick work. They have no place in school where you're supposed to be learning how to add.
      • After algebra, one should do most calculations with variables in the result. The area of a circle with a radius of 3 is simply 9*pi, on a test why does it matter if the student plugs the number in to get a decimal number?
        • The area of a circle with a radius of 3 is simply 9*pi, on a test why does it matter if the student plugs the number in to get a decimal number?

          Because you're just repeating the formula. The answer is single number, real or otherwise. I need to know the actual area of the circle, not the formula for figuring it out.
      • Beyond algebra, the point of the test is not to test arithmetic skills, it's to test whatever area you're learning. By the time you learn algebra, you've spent roughly 6-8 years learning arithmetic, so there's not much point in beating it out of you any more. You know how to do it, it's just a matter of how many careless mistakes you make.

        For some tests, you shouldn't be allowed to use a calculator (knowing basic sine and cosine values, for example, or numerical integration without a calculator). Other
    • by Bastian (66383) on Tuesday April 06, 2004 @10:40PM (#8788244)
      I think the "as we know it" is the key word here. It creates a new juggling act that teachers have to deal with. On one hand, calculators are extremely useful for the things at which humans are error prone. If students can use calculators or Mathematica or something, they can check their arithmetic much more reliably, which is great for isolating problems with learning concepts from basic mathematical errors. Granted, some people don't see this as a bonus since it's good to be able to do arithmetic reliably without aid. For smaller numbers I agree, but in the real world people use calculators for arithmetic with numbers that have a lot of sig figs. Making students do this arithmetic by hand is just distracting them from learning the concepts they are supposed to be learning.
      On the other hand, many students are prone to using these devices and applications as crutches and try to get away with doing things like using their calculator's implementation of Newton's Method instead of solving the problem themselves.

      Some professors have found solutions to this problem, others haven't. When I was at college, I think our math department had achieved a pretty good level of harmony with Mathematica - we were expected to do a lot of stuff by hand or in our heads - Gaussian elimination, for example - but in order to make the math seem useful, we were also expected to be able to solve real-world problems with the stuff we learned. Not contrived "real-world" problems from your high-school textbook, but stuff like interpreting large and dirty scientific datasets where the specific technique we would have to use to solve the problem was something we could figure out, but not something that had been explicitly laid out by the textbook or in lecture. We had to apply the concepts we had learned to figure out the problem, but there was no way we were going to chug out that arithmetic by hand - when was the last time you tried to work on a 16x16 matrix using a pencil and paper? How about a 100x100 one?
  • I think so, yes. (Score:3, Informative)

    by James A. M. Joyce (764379) on Tuesday April 06, 2004 @10:04PM (#8787966) Journal
    I mean, we've used computers to prove much of boolean and linear algebra. The most famed result in the field is that of the Robbins Conjecture [google.com], proven entirely by computer. The computer produced a very "inhuman" proof...
    • by Quill345 (769162) on Tuesday April 06, 2004 @10:15PM (#8788048)
      Automated theorem provers have been around for a long time, if you can express your thoughts using first order logic. Here's a program from 1986... lisp code [cuny.edu]
    • You beat me to it (Score:5, Informative)

      by UberQwerty (86791) on Tuesday April 06, 2004 @10:44PM (#8788280) Homepage Journal
      A professor showed me the Robbins Algebra proof a while ago. I was going to link here [anl.gov], but first I searched the page for (Score:5, Informative), and there you were :)

      Here's an excerpt:

      In 1933, E. V. Huntington presented [1,2] the following basis for Boolean algebra:
      x + y = y + x. [commutativity]
      (x + y) + z = x + (y + z). [associativity]
      n(n(x) + y) + n(n(x) + n(y)) = x. [Huntington equation]

      Shortly thereafter, Herbert Robbins conjectured that the Huntington equation can be replaced with a simpler one [5]:
      n(n(x + y) + n(x + n(y))) = x. [Robbins equation]


      Robbins and Huntington could not find a proof. The theorem was proved automatically by EQP [anl.gov], a theorem proving program developed at Argonne National Laboratory.
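As a sanity check (the easy direction, not the hard one EQP settled), one can verify in Python that the Robbins equation holds in the two-element Boolean algebra. EQP's feat was the converse: proving that *every* algebra satisfying the Robbins equation is Boolean:

```python
from itertools import product

# Two-element Boolean algebra: + is OR, n is NOT.
B = [0, 1]
add = lambda x, y: x | y
n = lambda x: 1 - x

# Check n(n(x + y) + n(x + n(y))) = x for all x, y in {0, 1}:
robbins_holds = all(n(add(n(add(x, y)), n(add(x, n(y))))) == x
                    for x, y in product(B, B))
print(robbins_holds)  # True
```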
  • by The Beezer (573688) on Tuesday April 06, 2004 @10:06PM (#8787976) Journal
    and there are already programs out that help with this. Here's [gatech.edu] one for example...
  • Text of the article (Score:2, Informative)

    by Anonymous Coward

    In Math, Computers Don't Lie. Or Do They?
    By KENNETH CHANG

    Published: April 6, 2004

    A leading mathematics journal has finally accepted that one of the longest-standing problems in the field -- the most efficient way to pack oranges -- has been conclusively solved.

    That is, if you believe a computer.

    The answer is what experts -- and grocers -- have long suspected: stacked as a pyramid. That allows each layer of oranges to sit lower, in the hollows of the layer below, and take up less space than if the oranges
  • by kommakazi (610098) on Tuesday April 06, 2004 @10:11PM (#8788016)
    Computers are a human creation...it's not a matter of whether we can trust the computer, but rather a matter of can we trust that the people who built the computer and coded the software it runs knew what they were doing and didn't make any errors. Computers can only do what we tell them to...so really it was humans who indirectly made the proofs by producing a system capable of doing so. All it really boils down to is whether the folks who made the system and its software knew what they were doing or not and whether they made any errors or not.
  • do you mean that this picture will be true someday [google.com]? It does seem a lot friendlier than SKYNET.
  • no (Score:2, Funny)

    by sporty (27564)
    no.

    q.e.d.

  • Theorem Provers (Score:4, Informative)

    by bsd4me (759597) on Tuesday April 06, 2004 @10:12PM (#8788025)

    Theorem provers [wikipedia.org] have been around for a long time. A net search should turn up a ton of hits. The key is to implement a system that can be verified by hand, and then build on it.

  • by colmore (56499) on Tuesday April 06, 2004 @10:13PM (#8788027) Journal
    20th century mathematics has seen some pretty amazing things, but at the same time, there are very real questions as to what constitutes "proof" any more.

    consider this: the hypothesis of the famous Riemann Zeta problem has been tested for trillions of different solutions, and it has held true in every case. (If you want an explanation of the Zeta problem, look elsewhere, I don't have the time)

    Now that means that it's *probably* true, but nobody accepts that as mathematical proof.

    On the other hand, the classification problem for finite simple groups has been rigorously solved, but the collected proof (done in bits by hundreds of mathematicians working over 30 years) is tens of thousands of pages in many different journals. given the standards of review, it is a virtual certainty that there is an error somewhere in there that hasn't been found. So, again, the solution to this problem is *probably* right, but it has been accepted as solved.

    What's the difference between these two cases really? What's the difference between these and relying on computer proofs that are, again, *probably* right?

    In this light, the math of the late 19th century and early 20th century was something of a golden age, modern standards of logical rigor were in place, but the big breakthroughs were still using elementary enough techniques that the proofs could be written in only a few pages, and the majority of mathematically literate readers could be expected to follow along. These days proofs run in the hundreds of pages and only a handful of hyper-specialized readers can be expected to understand, much less review them.
    • On the other hand, the classification problem for finite simple groups has been rigorously solved, but the collected proof (done in bits by hundreds of mathematicians working over 30 years) is tens of thousands of pages in many different journals. given the standards of review, it is a virtual certainty that there is an error somewhere in there that hasn't been found. So, again, the solution to this problem is *probably* right, but it has been accepted as solved.

      Wouldn't this be a good use for the compute
    • by I Be Hatin' (718758) on Tuesday April 06, 2004 @10:55PM (#8788368) Journal
      consider this: the hypothesis of the famous Riemann Zeta problem has been tested for trillions of different solutions, and it has held true in every case. (If you want an explanation of the Zeta problem, look elsewhere, I don't have the time) Now that means that it's *probably* true, but nobody accepts that as mathematical proof.
      On the other hand, the classification problem for finite simple groups has been rigorously solved, but the collected proof (done in bits by hundreds of mathematicians working over 30 years) is tens of thousands of pages in many different journals. given the standards of review, it is a virtual certainty that there is an error somewhere in there that hasn't been found. So, again, the solution to this problem is *probably* right, but it has been accepted as solved.
      What's the difference between these two cases really?

      That one claims to be a proof and the other doesn't? You simply can't prove the Riemann Hypothesis by testing trillions of numbers (though if you find one case where it fails, you have disproved it). As a simple example, I can find trillions of numbers whose base-10 expansion is less than a googolplex digits long. Does this mean that all integers have this property? Of course not... So even if all of the calculations are right, you still don't have a proof.

      On the other hand, the classification of finite simple groups does claim to be a proof, and if there are no errors, it is a proof. You're right that there are probably errors, but these may be only minor errors that can be fixed. At least no one seems to have found evidence that the proof is completely flawed yet. But it's certainly possible that someone will find an insurmountable error in one of the proofs. There have been cases of propositions that were "proved" true for more than 80 years before a counterexample was found.

      What's the difference between these and relying on computer proofs that are, again, *probably* right?

      Again, it depends upon what the computer is trying to show. The computer proofs I'm familiar with are ones where the methods are documented, it's just that the computations are too tedious to do by hand. So you can read the proof and say "modulo software bugs, it's a proof". And then it works the same as science: anyone who wants to can repeat the proof for themselves, and see that they get the same answer. As more people validate these results, the likelihood of bugs goes down exponentially, and the likelihood of the proof being accepted increases.

      • A better example of why finding trillions of confirming values doesn't prove the Riemann hypothesis:

        The Mobius function is a number theoretic function with the value:

        mu(n) = 1 if n = 1
              = (-1)^r if n = p1*p2*...*pr where p1..pr are distinct primes
              = 0 otherwise.

        A mathematician, Franz Mertens, conjectured in 1897 that |sum(mu(k), k=1 to n)| < sqrt(n). This conjecture has been verified by computer for all n < 10^9. However, in 1984, Andrew Odlyzko and Herman te Riele proved that this conjecture was false for some n <= 3.2
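The Mertens check is easy to reproduce for small ranges. A Python sketch (the sieve, the bound of 10000, and the variable names are my own, not from the original comment):

```python
def mobius_upto(N):
    """Sieve the Mobius function mu(1..N)."""
    mu = [1] * (N + 1)
    is_prime = [True] * (N + 1)
    for p in range(2, N + 1):
        if not is_prime[p]:
            continue
        for k in range(p, N + 1, p):
            if k > p:
                is_prime[k] = False
            mu[k] *= -1            # one more distinct prime factor
        for k in range(p * p, N + 1, p * p):
            mu[k] = 0              # squarefull numbers get mu = 0
    return mu

# Machine-check the Mertens conjecture |M(n)| < sqrt(n) for n up to 10000.
# It holds on every range a computer has checked, yet is false for some
# enormous n -- the thread's point that "verified for trillions of cases"
# is not a proof.
N = 10000
mu = mobius_upto(N)
M, ok = 0, True
for n in range(1, N + 1):
    M += mu[n]
    if n > 1 and M * M >= n:   # fails iff |M(n)| >= sqrt(n)
        ok = False
print(ok)  # True
```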

  • maturity (Score:3, Insightful)

    by jdkane (588293) on Tuesday April 06, 2004 @10:15PM (#8788050)
    whether technology spurs greater achievements by speeding rote calculations or deprives people of fundamentals.

    As far as the calculator example goes, I believe a person should understand the fundamentals before using the calculator. Don't give a power tool to a kid. Somebody with an understanding of the fundamentals can wield the tool correctly and wisely whereas some cowboy is just dangerous. The old saying comes to mind: "know just enough to be dangerous". As far as those oranges in the article, well somebody had better figure out a way to confirm the computer answer, without having to go through the exact same meticulous steps. Don't put your trust in technology without the proof to back it up, because technology built by people is prone to error. Now I'm sure the orange problem won't cause harm to anybody, however I hope I never read such an article about a nuclear power plant!

  • by ameoba (173803) on Tuesday April 06, 2004 @10:16PM (#8788057)
    I don't see what the big deal is; not only is the general problem of proving mathematical statements undecidable (even without considering Gödel's theorem) but even solvable problems require a lot of human intervention to get solved. Most problems (i.e., examples out of math textbooks) aren't going to come up with a proof in any reasonable amount of time by simply dropping it into a theorem prover and pushing "go".

    "Automated theorem proving" is less an automatic process (like you'd get with an automated production line) and more of a mechanical assistance to the job (like using a fork-lift to move heavy things faster than you could by hand).

    It not only takes work to convert a problem into a good representation; you then have to structure the problem statement in such a way that a theorem prover can make optimal use of it. Often, upon following the output, you're forced to prove lemmas (sub-proofs) yourself.

    Then, when you finally get a proof, you get the joy of trying to simplify it to something that -can- be understood by a person; again, this is part of the process that can't really be automated well.
  • change the title (Score:5, Insightful)

    by NotAnotherReboot (262125) on Tuesday April 06, 2004 @10:17PM (#8788058)
    Change the title to: "Are Computers Able to Verify Mathematical Proofs Beyond All Doubt?"
    • No matter what the proof, don't we still have to accept blindly on faith that A=A?

      Sure, it seems highly probable but.. I just.. I guess I'm a skeptic. All of logic seems like a joke to me, as long as this one little potentially huge loophole looms in the background...
      • Re:identity (Score:5, Informative)

        by sbaker (47485) on Tuesday April 06, 2004 @11:58PM (#8788787) Homepage
        Some things have to be taken as Axioms. A=A is one of them. All the things you prove that rely on A always being equal to A can be taken as true PROVIDING you accept that axiom. We like to pick axioms that we have a gut feel 'must' be true - but you can do interesting mathematics by denying some of those axioms and seeing where they take you. The classic geometric case of denying that parallel lines never meet produced a whole range of interesting geometries that don't exactly represent the real world but nonetheless have interesting and useful consequences.

        So - when you come across a theorem that you can't prove but are pretty damned certain is true - you COULD choose to simply make it be an axiom. The problem is that the theorems you are subsequently able to come up with are only as reliable as your initial assumption of that axiom.

        If your axioms eventually turn out not to match the real world - then all you have is a pile of more or less useless theorems that don't mean anything for the real world.

        It's therefore pretty important to stick with a really basic set of axioms to reduce the risk that an axiom might turn out not to be 'true' for the real world and bring down the entire edifice of mathematics along with it.

        If we ever found some sense in which A!=A then every single thing we thought we knew about math would be in doubt.
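For what it's worth, in a modern proof assistant the identity A=A is exactly such a rock-bottom starting point. A small Lean sketch (illustrative only, using Lean 4 syntax):

```lean
-- Reflexivity of equality is built into the logic: `rfl` proves
-- `a = a` directly from the definition, with no further axioms.
example (a : Nat) : a = a := rfl

-- Theorems built on it inherit exactly that trust:
example (a b : Nat) (h : a = b) : b = a := h.symm
```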
      • Re:identity (Score:4, Interesting)

        by illuminatedwax (537131) <<stdrange> <at> <alumni.uchicago.edu>> on Wednesday April 07, 2004 @12:26AM (#8788967) Journal
        You should definitely read Carroll's Paradox [mathacademy.com]. Lewis Carroll had thought about this exact problem. I think that assuming logic on faith, however, is an acceptable step in human endeavours.

        --Stephen

  • Godel? (Score:2, Informative)

    by Anonymous Coward
Doesn't the Gödel incompleteness theorem say they can't?
  • by bomb_number_20 (168641) on Tuesday April 06, 2004 @10:21PM (#8788090)
    The answer is left as an exercise for the reader.
  • by Polo (30659) * on Tuesday April 06, 2004 @10:23PM (#8788096) Homepage
    Mathematics is just Symbol Manipulation. I suspect computers are pretty good at that.

    Also, chess is just Pattern Matching... I don't know if humans have the edge there or not. ;)
  • regfree link (Score:3, Informative)

    by werdnapk (706357) on Tuesday April 06, 2004 @10:23PM (#8788097)
  • by Pedrito (94783) on Tuesday April 06, 2004 @10:23PM (#8788102) Homepage
My father sent this to me first thing this morning. I told him that I didn't think that rigorous mathematical proofs should be based on software, either in whole or in part. All software of any complexity is inherently buggy. Not that human-written proofs are flawless by nature either - that's why they have peer review. But peer review of software still isn't always sufficient.

Another issue is that you're then excluding any mathematicians who aren't also fairly adept programmers from really understanding your proof.

    All of this said, computers are necessary to do math these days and I think mathematicians should make use of them. I just don't believe we've reached a level of maturity in software development that meets the stringent requirements of mathematical proofs.
  • Why not.. (Score:3, Interesting)

    by Sir Pallas (696783) on Tuesday April 06, 2004 @10:23PM (#8788103) Homepage
..if you can prove the program. I know that a lot of people look down on axiomatic semantics and model checkers; but I also know some people who started in that area a long time ago and still believe in it, and even more who are trying to get faculty appointments out of this rebounding field. If you can prove that a program does what you expect it to, and it, in turn, can be shown to prove what you really wanted, I don't see the problem. Maybe some of our theoretical mathematicians just need a dose of practical computer science.
The article says that published proofs generally skip steps. I'd bet that humans make more mistakes than the computer. Actually, Wiles's proof of Fermat's Last Theorem originally had a mistake which was fixed later.
  • by Odin's Raven (145278) on Tuesday April 06, 2004 @10:28PM (#8788140)
    'Can we trust the darned things?' 'Can we know what we know?'

The obvious solution is to have the computer create a new proof that shows that the algorithm it used to create the original proof is, in fact, correct.

    And to prove that the proof of the proof can be trusted, have the computer create a proof of the proof of the proof.

    And to prove that the proof of the proof of the proof can be trusted, ...
  • by Anonymous Coward
    What I see on the bottom of the page as I read this topic discussion is:
    The truth of a proposition has nothing to do with its credibility. And vice versa.
  • Wrong Summary (Score:2, Informative)

    by DougMackensie (79440)
This story and this issue are not about whether or not the mathematical community trusts a computer-created proof. The issue is whether the community can trust the human behind the computer to create a program/system that is "flawless enough". Issues and bugs may arise, and the community can't be sure that these issues will 1. be found, and 2. not be severe enough to affect the validity of the proof.
Wasn't the four-color map problem solved in a similar fashion in the late 1970s, by Kenneth Appel and Wolfgang Haken? This is nothing new; there are just some problems you are not going to solve by hand.
  • by briaydemir (207637) on Tuesday April 06, 2004 @10:31PM (#8788171)

    The problem with computer generated proofs is that in order to trust the result of the computer, you have to trust:

    1. The program(s), as source code, that generated the proofs.
    2. The compiler(s) that turned the source code into something executable.
    3. The hardware that ran the programs.

And of course, our understanding of the hardware depends on how accurate our understanding of the laws of physics is. Any mistake in the source code, compiler, or hardware, and potentially the proof produced is incorrect. That's an awful lot of stuff you have to check just to make absolutely sure the computer is correct. Then, consider how many bug-free pieces of software you've encountered. ... Yeah, I can see why mathematicians would not trust computer-generated proofs.

    Of course, people are not infallible either, but that's well known and expected. It's all about how much uncertainty people are willing to accept.

Math is constantly limited by the continuous need to be rigorous. It's not a bad thing, but imagine the potential: a mathematician (or even a semi-informed layman) would be able to input a logical statement and have the computer determine its validity, special cases, problems, and so forth.

The hard part would be formulating the statements. For science, the impact would be immeasurable. Ask the computer if it can be done, and get an answer. Knowledge could increase exponentially! (No, really --

once upon a time, only advanced mathematicians knew calculus, but now we learn it in high school. Just wait until warp theory is an entry-level college engineering course.

      Once upon a time, the majority of adult males knew how to trap a rabbit (or similar creature), gut it, skin it, start up a fire, cook it, and eat it.

      I don't.

      Heck, I couldn't even look up at the sky at night and tell you which way was north.

      Once upon a time, most people could.

All I'm saying is that the amount of knowledge and skills the average human being can possess will not increase exponentially over time (barring artificial manipulation). We gain new skills as a population and lose old ones.
Insightful up to the last paragraph. Sure, I can only learn so much (knowledge and skills). Becoming good at playing the guitar means a trade-off, and perhaps by making that choice you don't learn to skin rabbits, fly airplanes, and so on. Yet some have done several of the above. There is enough knowledge that you cannot learn it all. As a population we do not lose old skills, at least not nearly as often as we learn new ones.

        Some things take years to learn, some take minutes to learn and years to mas

  • by jockeys (753885) on Tuesday April 06, 2004 @10:32PM (#8788187) Journal
as a graduate in the field of mathematics, i spent a large portion of my five undergraduate years doing proofs. there are a great many ways to prove things, sometimes applicable, sometimes not. (e.g. using inductive proofs for numeric theorems is all well and good but completely useless for any sort of ring-theory or spatial proofs)

    there are also several levels of depth for proofs, ranging from "i've found a counter example so i can write the whole thing off as garbage" to "i have exhaustively and rigorously proved this starting with the basic axioms of number theory and worked my way on up"

the latter is really the only acceptable way to prove anything seriously. sometimes when you are reworking an already-done proof to illustrate a point, other mathematicians will allow a bit of latitude when it comes to cutting corners, but for a proof as far-reaching as the one in the article, i would only be interested in a "rigorous" proof, that is, one that started with the foundational tenets of mathematics and combined those to form and prove other postulates, etc. very much a form of abstraction, not unlike large development projects.

the problem arises when one (or several) humans have to be able to objectively check the whole thing. to use my prior example of a large development project, no one developer at microsoft understands the whole of windows. it's too big for a single human to understand. each developer knows what he needs to do to complete his part, and so on and so forth.

traditionally, for proofs, a single mathematician (or a small group) would hammer out the whole proof, so the level of complexity remained at a human-understandable level. (even if tedious) my concern, as a mathematician, with using an automated solution would be the rapidly growing order of complexity needed to properly back up increasingly complex proofs. as stated in the article, it's like trying to proofread a phonebook. (only, you must also consider that for every element of the proof (a particular listing) there are several branching layers of complexity (fundamental, underlying proofs many layers deep) underneath. this gets more complicated in an exponential fashion) obviously this approach will only remain human-checkable for relatively small problems. (programmers: think of some horrible nondeterministic-polynomial problem like the "traveling salesman" problem. systems, like humans, are a finite resource, but if you increase the size of the problem, the complexity will quickly grow far beyond your ability to compute. large proofs suffer from the same difficulties, if not quite as concrete and pronounced as NP algorithms)

in closing, i would have to agree that proofs, no matter the effort and computing time put into them, really should not be viewed as being as rigorous as those provable "by hand" and human-understandable, even if the computer has arrived at a satisfactory conclusion, because we have no way of KNOWING if the computer has built up the proof correctly, except that it says it has.
    • by Tom7 (102298) on Tuesday April 06, 2004 @11:51PM (#8788745) Homepage Journal
      because we have no way of KNOWING if the computer has built up the proof correctly, except that it says it has

      Sure we do -- typical theorem provers spit out a proof that can be checked by hand (god forbid), or else checked by a simple procedure. Understanding that a theorem prover is implemented correctly is tough, but understanding that a checker is implemented correctly isn't. I trust such proofs more than I trust hand-checked proofs, because humans are more susceptible to mistakes than computers are.
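To make the prover/checker asymmetry concrete, here's a hedged toy sketch (Python, hypothetical names): formulas are strings (atoms) or `('->', P, Q)` tuples, and the only inference rule is modus ponens. Real kernels like HOL Light's are the same idea with a handful more rules - the point is that the checker stays small enough to audit even when the prover that produced the proof is enormous.

```python
# Minimal proof checker: the prover may be arbitrarily complex, but this
# checker is a few lines anyone can audit. Formulas are strings (atoms)
# or ('->', P, Q) tuples (implications); the only rule is modus ponens.

def check_proof(premises, proof, goal):
    """Return True iff every proof line is a premise or follows by
    modus ponens from earlier lines, and the goal was derived."""
    derived = set()
    for line in proof:
        if line in premises:
            derived.add(line)
        elif any(impl == ('->', p, line) for impl in derived for p in derived):
            derived.add(line)  # modus ponens: from p and p -> line, get line
        else:
            return False  # unjustified step; reject the whole proof
    return goal in derived

# A valid two-step derivation of R from P, P->Q, Q->R:
premises = {'P', ('->', 'P', 'Q'), ('->', 'Q', 'R')}
proof = ['P', ('->', 'P', 'Q'), 'Q', ('->', 'Q', 'R'), 'R']
assert check_proof(premises, proof, 'R')
assert not check_proof({'P'}, ['Q'], 'Q')  # unjustified step is rejected
```

However large a machine-found proof gets, trusting it only requires trusting something of roughly this size.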
We trust everyone - trust, but verify. Why should machines be any different? If I ask you to do a math problem, do I trust you to give the correct answer? Sure I do, but I still verify. I perform the calculation myself, and allow others to perform the calculation. If we all come up with the same result, we most likely have the correct answer. We must be careful not to allow the problem to define the solution method, as that allows us all to make the same mistakes. Just like the Seti@home and distributed c
Mathematics is one of the most intensely human of human endeavours. Everything in it is a production of the human mind entirely. Yes, the real world can sometimes lead us into an interesting area of inquiry, but at its core the uncovering of truths from axioms is a human endeavour.

A computer can be a useful tool (I'll be doing computational graph theory this summer), but it is not human. It does not have the ability to hold the possibilities of ideal forms within it and understand. It does not think.

    The use
It's a machine. I'll trust it to obey the laws of physics every time. If someone doesn't design the machine to behave correctly, writes a buggy or flawed piece of code, builds a flawed model, or feeds it bad data, then it's the person you can't trust.

    Machines don't make mistakes. People make mistakes.

    *singing*

    You canno' change the laws of physics, laws of physics, laws of physics!

    There's klingons on the starboard bow, starboard bow, starboard bow, Jim.

    We come in peace. Shoot to kill! Shoot to kill! Shoot
  • Flyspeck Project (Score:5, Informative)

    by harlows_monkeys (106428) on Tuesday April 06, 2004 @10:57PM (#8788374) Homepage
    Here's a link to the Flyspeck Project [pitt.edu], briefly mentioned at the end of the article, which aims to give a formal proof of the theorem.
  • by Tom7 (102298) on Tuesday April 06, 2004 @11:43PM (#8788676) Homepage Journal
    What strange timing -- I just saw a talk this morning from Hales at CMU about his work on formalizing the Kepler conjecture ("theorem") in HOL Light.

I can understand not wanting to trust a big piece of C code that purports to check a huge number of cases of a proof using numerical analysis. Four people spent six years trying to verify his computer proof of the Kepler conjecture - before they gave up. If a program is that hard to believe, then it does indeed deserve lesser status than a handwritten proof that can be checked by mathematicians in a shorter time.

On the other hand, there are other computer tools that we really should trust, and that are revolutionizing mathematics. The idea is simple: write your proofs in such detail that they can be mechanically checked by a simple (easy to verify) procedure. This is much better than paper proofs, because the potential for human error is minimized. (I don't think anyone will dispute that published proofs have often been wrong, and the errors have not been caught by the peer reviewers!) Since it's really, really bad to believe wrong proofs, there's a very real benefit to offset the sometimes tedious work necessary in formalization.

    That's what Tom Hales is doing with flyspeck, and I think it is the future of mathematics. (In fact, I have recently become addicted to mechanizing my own proofs [tom7.org] in Twelf [twelf.org]--it's not only immensely satisfying, but it helps me sleep better at night and makes for stronger papers.)
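For a flavour of what "proofs in such detail that they can be mechanically checked" means in practice, here is a small sketch - in Lean rather than Twelf, purely for familiarity, and with syntax details illustrative rather than authoritative:

```lean
-- A fully formal proof by induction that 0 + n = n for every natural,
-- where Nat.add recurses on its second argument. Every step is verified
-- by the kernel; nothing is taken on the author's word.
theorem zero_add' : ∀ n : Nat, 0 + n = n := by
  intro n
  induction n with
  | zero => rfl
  | succ k ih => rw [Nat.add_succ, ih]
```

If any step were missing or wrong, the checker would simply refuse the proof - which is the whole point.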
  • by starseeker (141897) on Wednesday April 07, 2004 @12:26AM (#8788972) Homepage
    'Can we trust the darned things?' 'Can we know what we know?'

It's not an issue of whether we can trust them, at least not in general. (We won't go into the question of current machines - I'll agree they're generally not there yet for rigorous proofs.) We're going to have to either trust some form of computational aid in proof work, or throw up our hands and abandon the field - the human brain and lifespan impose definite limits beyond which we cannot go without aid, and since I can't think of any limit human beings have willingly accepted as a group, somehow I doubt this will be the first. So, instead, the question should be

    "How do we create computers we can trust?"

If that is impossible, then that's it. Mathematics will become like experimental high-energy physics - 20 years of effort by hundreds of people to achieve one result. But I'm not ready to concede that it's impossible. I know it is provable that computers can't solve all problems in general, but the same proof indicates humans can't either. The question I'm curious about is whether the behavior of a computer is too general to be attacked by useful proof methods. Most actions taken with a computer assume a definite action and a definite outcome (spreadsheets and databases, for example, do not do novel calculations but perform the same operations on well-defined data). Mathematical proof is a different question, but the ultimate question is whether a properly designed and built computer (i.e. built as rigorously as possible in a technical and algorithmic sense) would be completely unable to handle problems that are interesting to human beings in the proof field. That is a completely different question from generality statements, and from the standpoint of computers as a trustworthy tool I think it is the more interesting one.
  • Proving the prover? (Score:3, Interesting)

    by jeduthun (684869) on Wednesday April 07, 2004 @12:28AM (#8788985)

Perhaps I'm missing something here, but I think the approach of verifying the validity of each proof that comes out of the kind of system described in the article is fundamentally the wrong one.

Instead, mathematicians ought to focus on formally proving the proof generator itself. If it could be formally proved that the proof generator only generates valid proofs, we could automatically trust all the proofs it generates. Program proof and verification is a complex topic, but it's a quickly maturing area of CS.

  • Wrong question. (Score:3, Insightful)

    by ilyag (572316) on Wednesday April 07, 2004 @12:30AM (#8788998)
The question should not be whether computers can calculate flawlessly - obviously they can't. The question is whether the probability of all the different configurations of computers consistently giving the same wrong answer to a problem is greater than the probability of all the human mathematicians agreeing on the same wrong answer. To me, it seems obvious that the computers are better off...
  • by dysprosia (661648) on Wednesday April 07, 2004 @01:30AM (#8789411)
    For some cases of proof solving, a human is often behind the scenes, and has reduced the number of cases that a computer has to check from infinity to say 10^25 or some other large, but finite number.

Computers nowadays can handle symbolic calculations and prove identities and the like, but when it comes to identifying what is interesting to prove, a human may still need to be there to interpret that, no matter how sophisticated computers or software get...
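That division of labour can be sketched as a toy (Python, with a small finite slice of the Goldbach conjecture standing in for the 10^25 cases - purely illustrative): the human supplies the reduction to a finite check, and the machine grinds through it.

```python
# The "reduce to finitely many cases, then grind" pattern in miniature:
# a human reduces the claim to a finite range; the computer exhaustively
# checks it. Here: every even number in [4, 1000) is a sum of two primes.

def is_prime(n):
    """Trial division up to sqrt(n); fine for a toy range."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def is_sum_of_two_primes(n):
    """True if even n >= 4 can be written as p + q with p, q prime."""
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

# The machine's share of the work: mechanical, massive, and checkable.
assert all(is_sum_of_two_primes(n) for n in range(4, 1000, 2))
```

The interesting step - deciding that checking this range is worth anything - stays entirely on the human side.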
  • by elmartinos (228710) on Wednesday April 07, 2004 @02:14AM (#8789639) Homepage
The people at the RISC Institute [uni-linz.ac.at] are creating cool stuff like Theorema [theorema.org], which helps in automatically proving things. Some of these people teach math at a university in Hagenberg, where I got the chance to see this thing in action; it is really amazing how well it works.
  • by ingenuus (628810) on Wednesday April 07, 2004 @05:01AM (#8790211)
    "On two occasions, I have been asked [by members of Parliament], 'Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?' I am not able to rightly apprehend the kind of confusion of ideas that could provoke such a question."
    -- Charles Babbage (1791-1871)

    Computers only do what they are told (excepting "hardware failure", which is not the topic).

Shouldn't it be possible to establish the validity of computational proofs by proving the meta-logic of the solver?
i.e. proving that a strategy for finding a proof is valid (and therefore trusting its results).

    Maybe those wary mathematicians are just unaccustomed to working on a problem meta-logically, and prefer to find proofs directly themselves (with the meta-logic being defined solely within their own minds)?

    In such cases, perhaps peer review should not require human verification of a computational proof, but rather another independent meta-logically valid computational proof?
  • more than people (Score:3, Insightful)

    by hak1du (761835) on Wednesday April 07, 2004 @08:21AM (#8790824) Journal
    The mathematical literature is full of errors, oversights, invalid proofs, unstated assumptions, and probably even a certain share of deliberate fraud. See Lounesto's misconceptions of research mathematicians [www.hut.fi] for one expert digging into the mathematical literature.

Computers are far better at ferreting out oversights and missing assumptions, and at making sure that every t is crossed and every i is dotted. If a software system for doing proofs has shown itself to be fairly reliable on a bunch of samples, I'd trust it a lot more than I'd trust any working mathematician to carry out a complex proof correctly.
