Education Science

Are Computers Ready to Create Mathematical Proofs? 441

DoraLives writes "Interesting article in the New York Times regarding the quandary mathematicians are now finding themselves in. In a lovely irony reminiscent of the torture, in days of yore, that students were put through when it came to using, or not using, newfangled calculators in class, the Big Guys are now wrestling with a very similar issue regarding computers: 'Can we trust the darned things?' 'Can we know what we know?' Fascinating stuff."
  • by Vancorps ( 746090 ) on Tuesday April 06, 2004 @10:10PM (#8788014)
    If humans created the computer to do the task, should they not trust that it will do the task and do it well? Perhaps, perhaps not.

    A computer can far more easily come up with a very complex answer, simply because it can do more calculations in a given second than a human can. Of course, humans take in far more variables at a given time, so the comparison cuts both ways, but I'm sure you get my point.

    I think you test it with progressively more varied problems; if the answers come out precise and accurate, then you can build your level of trust in the system. Kind of like the process of getting users to trust saving to the server after a bad crash wiped everything because the previous admin was a moron.
  • by colmore ( 56499 ) on Tuesday April 06, 2004 @10:13PM (#8788027) Journal
    20th century mathematics has seen some pretty amazing things, but at the same time, there are very real questions as to what constitutes "proof" any more.

    Consider this: the famous Riemann Hypothesis has been checked numerically for trillions of zeros of the zeta function, and it has held true in every case. (If you want an explanation of the Riemann Hypothesis, look elsewhere; I don't have the time.)

    Now that means that it's *probably* true, but nobody accepts that as mathematical proof.

    On the other hand, the classification problem for finite simple groups has been rigorously solved, but the collected proof (done in bits by hundreds of mathematicians working over 30 years) runs to tens of thousands of pages in many different journals. Given the standards of review, it is a virtual certainty that there is an error somewhere in there that hasn't been found. So, again, the solution to this problem is *probably* right, but it has been accepted as solved.

    What's the difference between these two cases really? What's the difference between these and relying on computer proofs that are, again, *probably* right?

    In this light, the math of the late 19th and early 20th centuries was something of a golden age: modern standards of logical rigor were in place, but the big breakthroughs still used elementary enough techniques that the proofs could be written in only a few pages, and the majority of mathematically literate readers could be expected to follow along. These days proofs run to hundreds of pages, and only a handful of hyper-specialized readers can be expected to understand them, much less review them.
  • by baywulf ( 214371 ) on Tuesday April 06, 2004 @10:14PM (#8788039)
    Sometimes a precise mathematical analysis can lead to startling insights. Perhaps there are multiple equally optimal solutions. Perhaps it gives a general algorithm that works for other shapes.
  • by Quill345 ( 769162 ) on Tuesday April 06, 2004 @10:15PM (#8788048)
    Automated theorem provers have been around for a long time, if you can express your thoughts using first order logic. Here's a program from 1986... lisp code [cuny.edu]
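    (A toy illustration of the "express it in logic first" step -- my own Python sketch, nothing to do with the linked 1986 Lisp program: once a claim is written with quantifiers, a machine can at least check it mechanically over a finite domain. Real provers go further and search for a proof that holds in every model.)

      # "For every x in Z_7 there is a y with x + y = 0 (mod 7)."
      # The quantifiers translate directly into all()/any() over the finite domain.
      n = 7
      domain = range(n)
      claim = all(any((x + y) % n == 0 for y in domain) for x in domain)
      print(claim)  # True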
  • maturity (Score:3, Insightful)

    by jdkane ( 588293 ) on Tuesday April 06, 2004 @10:15PM (#8788050)
    whether technology spurs greater achievements by speeding rote calculations or deprives people of fundamentals.

    As far as the calculator example goes, I believe a person should understand the fundamentals before using the calculator. Don't give a power tool to a kid. Somebody with an understanding of the fundamentals can wield the tool correctly and wisely, whereas some cowboy is just dangerous. The old saying comes to mind: "knows just enough to be dangerous." As for those oranges in the article, well, somebody had better figure out a way to confirm the computer's answer without having to go through the exact same meticulous steps. Don't put your trust in technology without the proof to back it up, because technology built by people is prone to error. Now, I'm sure the orange problem won't cause harm to anybody; however, I hope I never read such an article about a nuclear power plant!

  • change the title (Score:5, Insightful)

    by NotAnotherReboot ( 262125 ) on Tuesday April 06, 2004 @10:17PM (#8788058)
    Change the title to: "Are Computers Able to Verify Mathematical Proofs Beyond All Doubt?"
  • indeed (Score:5, Insightful)

    by rebelcool ( 247749 ) on Tuesday April 06, 2004 @10:22PM (#8788091)
    On a similar topic, today I attended a lecture by Tony Hoare [microsoft.com] on compilers that can generate verified code, and on tools that guide the human programmer toward designing programs that can be easily validated (from the compiler's standpoint). One very good question raised afterwards was: well, how do you know you can trust the compiler generating the verified program?

    Though Dr. Hoare danced around that question a little, presumably that aspect of the project would have to be done by hand, a monumental task to say the least.

  • by Pedrito ( 94783 ) on Tuesday April 06, 2004 @10:23PM (#8788102)
    My father sent this to me first thing this morning. I told him that I didn't think that rigorous mathematical proofs should be based on software, either in whole or in part. All software of any complexity is inherently buggy. That isn't to say that rigorous mathematical proofs are flawless by nature, either; that's why they have peer review. But peer review of software still isn't always sufficient.

    Another issue is that you're then excluding any mathematicians who aren't also fairly adept programmers from really understanding your proof.

    All of this said, computers are necessary to do math these days and I think mathematicians should make use of them. I just don't believe we've reached a level of maturity in software development that meets the stringent requirements of mathematical proofs.
  • Re:I don't care (Score:3, Insightful)

    by Vancorps ( 746090 ) on Tuesday April 06, 2004 @10:30PM (#8788165)
    That is certainly true, so if a computer screws up a portion in the middle and the answer still comes out correct, then the end result is the same.

    Another post mentioned that computers could come up with the proofs, but that it would be too hard for a human to verify them. At what point can we start both to trust the output and to build upon that trusted output by further trusting future output? It could be an interesting time, in which we don't understand how things are done but we are able to do them nonetheless. It all comes down to the question: how much do you want to trust a machine?

  • by Xilo ( 741751 ) on Tuesday April 06, 2004 @10:34PM (#8788196) Journal
    Involved, not complicated - like the Four Color [Conjecture]. No one could figure out a way to actually prove it, so someone (or a group of someones) wrote a program to systematically determine all the possible arrangements of regions in a simplified series of maps, and then figure out how to color each of those maps. The involved part was... well, all of it. It wasn't necessarily very complicated, just labor-intensive. Computers are perfectly suited for tedium.
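    (A toy illustration of that tedium, in Python -- my own sketch, not the actual Appel-Haken reducible-configuration search: brute-force checking whether one particular small "map" admits a 4-coloring is exactly the kind of mindless case-by-case work a computer is suited for.)

      from itertools import product

      # Regions of a small map, and which pairs of regions share a border.
      edges = [(0, 1), (0, 2), (0, 3), (1, 2), (2, 3), (3, 4), (1, 4)]

      def four_colorable(n, edges):
          # Brute force: try every assignment of 4 colors to the n regions.
          return any(all(col[a] != col[b] for a, b in edges)
                     for col in product(range(4), repeat=n))

      print(four_colorable(5, edges))  # True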
  • by Anonymous Coward on Tuesday April 06, 2004 @10:35PM (#8788205)
    What I find interesting about him is that if you ask a stupid question, he'll make sure you know it was a stupid question.

    Prior to September 11, that attitude meant his days were numbered. After September 11, that attitude shows he's in charge.

  • by Llywelyn ( 531070 ) on Tuesday April 06, 2004 @10:37PM (#8788220) Homepage
    Sometimes it is enough to know that something is *true* or *false* without having to know the details of the in-between steps.

    I'll give some trivial examples to illustrate:

    For a famous example, it would provide a great deal of peace of mind if we could prove that P != NP. It wouldn't matter that we understand the proof so much as that the statement is *provably true*. If, on the other hand, it is proven false, that is just as important (if not more so). While an understanding of the proof might lead more easily to examples of such, we would still know (for certain) that trusting public key encryption over the long run would be a Bad Idea(TM), for example, and that it is just a matter of time before a polynomial time algorithm is developed.

    (Not that such an algorithm would necessarily be fast, mind you, but we would know that it exists.)

    For another example--there are certain things that can be inferred if the Poincare Conjecture is true for N=3. If we can prove the Poincare Conjecture is true (and it is now thought that it might be), that means something to physicists, even if we don't know why it is true.

    The bigger question here is "can we trust it if we can't verify it by hand?"
  • by sPaKr ( 116314 ) on Tuesday April 06, 2004 @10:40PM (#8788243)
    We trust everyone: trust, but verify. Why should machines be any different? If I ask you to do a math problem, do I trust you to give the correct answer? Sure I do, but I still verify. I perform the calculation myself, and allow others to perform the calculation. If we all come up with the same result, we most likely have the correct answer. We must be careful not to allow the problem to define the solution method, as that allows us all to make the same mistakes. Just as the SETI@home and distributed cracking people found out that not everyone can be trusted and that the only thing to do is have many people test the same key space, no one should trust only one answer but rather develop consensus.
  • by cgenman ( 325138 ) on Tuesday April 06, 2004 @10:43PM (#8788278) Homepage
    Don't we already have tons of resources devoted to verifying the accuracy of a computing environment? If we verify the logic and accuracy of the computer, much as we do the axioms of mathematics, can we not then say that the resulting proof must be valid?

    Of course, if it can't be understood by humans, it doesn't really help much, as one of the main points of proving things is to gain insight into how to prove other things. But wouldn't building the system that proves the proofs itself be a different but valuable tool?

  • by Anonymous Coward on Tuesday April 06, 2004 @10:49PM (#8788322)
    You know what? I work for Intel. My company paid half a billion dollars replacing Pentium chips with that bug that nobody would ever have noticed, except maybe six people. Software vendors get away with murder. They don't even have to replace their product even if it is not fit for its stated purpose. The FDIV bug was a typo in some kind of table. Doesn't that make it software, in a way? One little bug years ago and still people laugh about it (SCORE:4, Funny).

  • Once upon a time, only advanced mathematicians knew calculus, but now we learn it in high school. Just wait until warp theory is an entry-level college engineering course.

    Once upon a time, the majority of adult males knew how to trap a rabbit (or similar creature), gut it, skin it, start up a fire, cook it, and eat it.

    I don't.

    Heck, I couldn't even look up at the sky at night and tell you which way was north.

    Once upon a time, most people could.

    All I'm saying is that the amount of knowledge and skills the average human being can possess will not increase exponentially over time (barring artificial manipulation). We gain new skills as a population and lose old ones.
  • by I Be Hatin' ( 718758 ) on Tuesday April 06, 2004 @10:55PM (#8788368) Journal
    Consider this: the famous Riemann Hypothesis has been checked numerically for trillions of zeros of the zeta function, and it has held true in every case. (If you want an explanation of the Riemann Hypothesis, look elsewhere; I don't have the time.) Now that means that it's *probably* true, but nobody accepts that as mathematical proof.
    On the other hand, the classification problem for finite simple groups has been rigorously solved, but the collected proof (done in bits by hundreds of mathematicians working over 30 years) runs to tens of thousands of pages in many different journals. Given the standards of review, it is a virtual certainty that there is an error somewhere in there that hasn't been found. So, again, the solution to this problem is *probably* right, but it has been accepted as solved.
    What's the difference between these two cases really?

    That one claims to be a proof and the other doesn't? You simply can't prove the Riemann Hypothesis by testing trillions of numbers (though if you find one case where it fails, you have disproved it). As a simple example, I can find trillions of numbers whose base-10 expansion is less than a googolplex digits long. Does this mean that all integers have this property? Of course not... So even if all of the calculations are right, you still don't have a proof.

    On the other hand, the classification of finite simple groups does claim to be a proof, and if there are no errors, it is a proof. You're right that there are probably errors, but these may be only minor errors that can be fixed. At least no one seems to have found evidence that the proof is completely flawed yet. But it's certainly possible that someone will find an insurmountable error in one of the proofs. There have been cases of propositions that were "proved" true for more than 80 years before a counterexample was found.

    What's the difference between these and relying on computer proofs that are, again, *probably* right?

    Again, it depends upon what the computer is trying to show. The computer proofs I'm familiar with are ones where the methods are documented, it's just that the computations are too tedious to do by hand. So you can read the proof and say "modulo software bugs, it's a proof". And then it works the same as science: anyone who wants to can repeat the proof for themselves, and see that they get the same answer. As more people validate these results, the likelihood of bugs goes down exponentially, and the likelihood of the proof being accepted increases.
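    (A rough way to see the "exponentially" claim, under the strong assumption that the re-checks are genuinely independent: if each independent re-implementation has probability p of harboring the same mistake, the chance that all k of them agree on the same wrong answer is about p^k, which shrinks exponentially as k grows.)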

  • by Anonymous Coward on Tuesday April 06, 2004 @11:02PM (#8788409)
    Yes. Mathematics is just Symbol Manipulation. And so is science, and poetry, and everything else written.
  • by djplurvert ( 737910 ) on Tuesday April 06, 2004 @11:23PM (#8788552)
    If you are still trying to learn how to add in calculus class then there are bigger problems.

    It is one thing to deride the use of calculators because they are crutches; it is another thing altogether to deride them because you are a technophobe.

    Technology is extremely useful in aiding understanding and exploration of mathematics. If you eliminate calculators and computers in schools, even high schools, you may hurt the chances of your students to succeed in an increasingly technical world.

    As usual, I think the correct response is a balanced response. Demonstrate how to use technology as a tool and provide assessment that requires demonstration of ability without the use of a calculator as a crutch.

    In a calculus class, for example, divide the exams into two parts, or alternate between calculator allowed, and calculator disallowed exams.

    Of course, this has little to nothing to do with the nytimes article but it always seems to come up in these discussions.

    plurvert
  • by eightheadsofdoom ( 25561 ) on Tuesday April 06, 2004 @11:24PM (#8788555)
    That's exactly the case. I had Ken Appel as a professor, and he mentioned the 4-color theorem a couple of times. As he put it, the math wasn't intensive, but actually doing the work the computer was able to do would have taken an army of grad students years to finish. The way he saw it, the proof was understandable, just extraordinarily, arduously long. That's when they decided to use a computer to solve the problem. Unfortunately, there are still many pure mathematicians who shun computer-based proofs because they (or their grad students) are not working things out with their own pencils. I don't think that's a problem that's going away, but I do think it opens up some interesting doors, such as writing program A to prove a theorem, and then having to prove program A's correctness, for which you write program B, and so forth.
  • by Rosyna ( 80334 ) on Tuesday April 06, 2004 @11:27PM (#8788585) Homepage
    See, the problem is that humans can only program a computer to do something the human coding it knows how to describe thoroughly. It is also why you won't see a computer speaking naturally, showing emotions, or doing anything else we cannot describe today.

    It is amusing to see computers being used as proof bringers when you usually need a proof to program the darn things.
  • Re:identity (Score:1, Insightful)

    by Anonymous Coward on Tuesday April 06, 2004 @11:35PM (#8788636)
    Take A != A as an axiom, and you have a whole new field of mathematics.
  • by adamruck ( 638131 ) on Tuesday April 06, 2004 @11:40PM (#8788658)
    Um... you're wrong about that, and a lot of people don't understand this either.

    All of our knowledge is based upon assumptions. For instance, when it comes to number systems, we have a set of assumptions that we say are true, and there is no arguing about it. Take the binary system: we say the language consists only of 0 and 1. We say there are two operators, AND and OR, and we define their operations: we say 0 AND 0 is 0, we say 1 AND 1 is 1. There is NO PROVING THIS; it's a given. From those givens we can then PROVE things like De Morgan's theorem (a quick sketch is at the end of this comment).

    This applies to many, many areas: geometry, algebra, calc, logic, etc. There is a list of assumptions that are given to be true. You could argue that calc is just an extension of algebra and geometry, but that's another matter.

    The question I think this article is asking is: can we program these truths into a computer, give the program a problem, have it (randomly, or not, for the AI people) apply some theory, and get an actual answer?

    In my own personal opinion, I think this would work. If you took it far enough, giving it information about math, chemistry, physics, and electronics, I think it would be possible to have a computer design a better computer.
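    (The De Morgan sketch promised above -- my own toy example in Python: starting from nothing but the given truth tables for AND, OR, and NOT over the two values, the theorem can be verified by checking every case, because there are only finitely many cases.)

      cases = [(a, b) for a in (False, True) for b in (False, True)]
      law_1 = all((not (a and b)) == ((not a) or (not b)) for a, b in cases)
      law_2 = all((not (a or b)) == ((not a) and (not b)) for a, b in cases)
      print(law_1 and law_2)  # True: both De Morgan identities hold in every case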
  • Re:indeed (Score:2, Insightful)

    by LostCluster ( 625375 ) * on Tuesday April 06, 2004 @11:42PM (#8788671)
    Then again, how can we verify the results claimed by a human scientist? Usually, it's by documenting the step-by-step process so that the entire experiment can be replicated.

    So, while it is certainly possible for one computer's output to contain a mistake, like the early-generation Pentiums with their legendary division-table bug, if several different computers using drastically different hardware all give the same output, that should be enough cross-verification to say that the problem is solved correctly.
  • by CommieOverlord ( 234015 ) on Tuesday April 06, 2004 @11:51PM (#8788744)
    Because if you do that you've failed to do two things:
    1. Failed to demonstrate that the observed phenomenon is consistent across all possible sets (up to infinite size) of oranges, instead of the one set you have.
    2. Failed to demonstrate that the observed phenomenon is consistent across every possible method of stacking.

      What you've done is shown that for a subset of the sets of oranges and a subset of the sets of stacks, the pyramid is better.

      A mathematical proof shows something is true for every single element of the sets.
  • by Tom7 ( 102298 ) on Tuesday April 06, 2004 @11:51PM (#8788745) Homepage Journal
    because we have no way of KNOWING if the computer has built up the proof correctly, except that it says it has

    Sure we do -- typical theorem provers spit out a proof that can be checked by hand (god forbid), or else checked by a simple procedure. Understanding that a theorem prover is implemented correctly is tough, but understanding that a checker is implemented correctly isn't. I trust such proofs more than I trust hand-checked proofs, because humans are more susceptible to mistakes than computers are.
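    (A toy sketch of what "checked by a simple procedure" can mean -- my own Python example, not tied to any particular prover: a propositional resolution refutation can be verified line by line in a couple of dozen lines, even when finding the refutation was hard. Literals are signed integers, clauses are sets of literals.)

      def is_resolvent(c, a, b):
          # c resolves from a and b if, for some literal of a whose negation is
          # in b, c is exactly the union of the remaining literals.
          return any(-lit in b and c == (a - {lit}) | (b - {-lit}) for lit in a)

      def check_refutation(inputs, proof):
          # Each proof line must be an input clause or a resolvent of two earlier
          # lines; the final line must be the empty clause (a contradiction).
          seen = [frozenset(cl) for cl in inputs]
          for line in proof:
              line = frozenset(line)
              if line not in seen and not any(is_resolvent(line, a, b)
                                              for a in seen for b in seen):
                  return False
              seen.append(line)
          return bool(proof) and not proof[-1]

      # Inputs encode p, (p -> q) as {-p, q}, and (not q); refuting them proves q from p and p -> q.
      inputs = [{1}, {-1, 2}, {-2}]
      proof = [{1}, {-1, 2}, {2}, {-2}, set()]
      print(check_refutation(inputs, proof))  # True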
  • by Forgotten ( 225254 ) on Wednesday April 07, 2004 @12:01AM (#8788804)
    Most people probably knew the "four quadrants of knowledge" thing, but didn't know they knew (DK). That is, they have enough to put it together, but have probably never put it into words before. Intuitive knowledge is one way of putting it. The bulk of most people's knowledge probably falls into this category, which is fine - language is often overrated as a conduit of knowledge (not that it's not incredibly useful and important, but other means exist and are constantly used).

    I don't actually believe particularly firmly in that model, though, because I don't agree with the D-K dichotomy that underlies it. It's your usual classical Greek quadrant, which means it springs from a dual dichotomy, or in this case a dual-aspect single one. Dichotomy (or even one-dimensional spectra) is not the only way to look at things, but it is a dangerously compelling model - that is, when people have been presented with a dichotomy, they typically become unable to consider without it. And the defence of a dichotomy is usually a tautology - I mean it's obvious, isn't it, you either know something or you don't? ;)

    Still useful and interesting if you can get it out of your head when needed, though.

  • by starseeker ( 141897 ) on Wednesday April 07, 2004 @12:26AM (#8788972) Homepage
    'Can we trust the darned things?' 'Can we know what we know?'

    It's not an issue of whether we can trust them, at least not in general. (We won't go into the question of current machines - I'll agree they're generally not there yet for rigorous proofs.) We're going to have to either trust some form of computational aid in proof work, or throw up our hands and abandon the field - the human brain and lifespan impose definite limits beyond which we cannot go without aid, and since I can't think of any limit human beings have willingly accepted as a group, somehow I doubt this will be the first. So, instead, the question should be

    "How do we create computers we can trust?"

    If that is impossible, then that's it. Mathematics will become like experimental high-energy physics - 20 years' effort by hundreds of people to achieve one result. But I'm not ready to concede that it's impossible. I know it is provable that computers can't solve all problems in general, but the same proof indicates humans can't either. The question I'm curious about is whether the behavior of a computer is too general to be attacked by useful proof methods. Most actions taken with a computer assume a definite action and a definite outcome (spreadsheets and databases, for example, do not do novel calculations but perform the same operations on well-defined data). Mathematical proof is a different question, but the ultimate question is whether a properly designed and built computer (i.e. built as rigorously as possible in a technical and algorithmic sense) would be completely unable to handle problems that are interesting to human beings in the proof field. That is a completely different question from generality statements, and from the standpoint of computers as a trustworthy tool I think it is the more interesting one.
  • Wrong question. (Score:3, Insightful)

    by ilyag ( 572316 ) on Wednesday April 07, 2004 @12:30AM (#8788998)
    The question should not be whether computers can calculate flawlessly - obviously they can't. The question is whether the probability of all the different configurations of computers consistently giving the same wrong answer to a problem is greater than the probability of all the human mathematicians agreeing on the same wrong answer. To me, it seems obvious that the computers are better off...
  • by bluGill ( 862 ) on Wednesday April 07, 2004 @12:32AM (#8789014)

    Insightful up to the last paragraph. Sure, I can only learn so much (knowledge and skills). Becoming good at playing the guitar means a trade-off, and perhaps by making that choice you don't learn to skin rabbits, fly airplanes, and so on. Yet some people have done several of the above. There is enough knowledge out there that you cannot learn it all. But as a population we do not lose old skills, at least not nearly as often as we learn new ones.

    Some things take years to learn, some take minutes to learn and years to master, and some take just minutes to learn. Some are worth teaching everyone (reading, for example), some are worth learning despite having no practical use (playing guitar, for example), and some are hobbies that a few people learn for the fun of it (skinning rabbits). Few skills are lost over time, though, and now that reading is universal even fewer are lost, because those who know can write it down for later use.

    Look around and you will find a few people who can tat, make chain mail, build a bark canoe and so on. All useless skills in this modern world, but kept alive because someone made it their hobby. I've seen books on all of the above, and many more.

    If the doomsayers' worst predictions come true and you are one of the few people to survive [whichever disaster is in vogue today], you can go to a library and get books for the skills you need. Find someone to have kids with, and you are likely to pass reading and simple math on to them, and they to their kids. Eventually civilization will return along with population, and your many-times-great-grandkids will have an advantage in that they can read our books, which tell them what works, so they don't make the mistakes we did.

  • by benna ( 614220 ) <mimenarrator@g m a i l .com> on Wednesday April 07, 2004 @01:42AM (#8789471) Journal
    There was a young man who said, "though
    it seems that I know that I know,
    what I would like to see
    is the I that sees me,
    when I know that I know that I know."

    Great poem; Alan Watts used to quote it a lot when talking about the illusion of the ego.
  • by ingenuus ( 628810 ) on Wednesday April 07, 2004 @05:01AM (#8790211)
    "On two occasions, I have been asked [by members of Parliament], 'Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?' I am not able to rightly apprehend the kind of confusion of ideas that could provoke such a question."
    -- Charles Babbage (1791-1871)

    Computers only do what they are told (excepting "hardware failure", which is not the topic).

    Shouldn't the validity of computational proofs be able to be determined by proving the meta-logic of the solver?
    i.e. proving that a strategy for finding a proof is valid (and therefore trusting its results).

    Maybe those wary mathematicians are just unaccustomed to working on a problem meta-logically, and prefer to find proofs directly themselves (with the meta-logic being defined solely within their own minds)?

    In such cases, perhaps peer review should not require human verification of a computational proof, but rather another independent meta-logically valid computational proof?
  • by azaris ( 699901 ) on Wednesday April 07, 2004 @06:33AM (#8790459) Journal

    Simple. The student should know the value of pi to a certain number of significant figures.

    Why? Einstein was once asked how many digits of Pi he could remember. He replied "maybe three or four". The interviewer then exclaimed: "But Sir, some people have memorized hundreds of digits!" Einstein's reply was: "Ah, but I know where to look them up if I need them."

    With a calculator, they're just typing in 3*pi and getting an answer with 8+ significant digits. Without a calculator, they're actually calculating the answer as close as they can based on how many significant digits they know.

    Wrong, wrong, wrong. If the answer to some problem is 9 Pi then the answer to that problem is 9 Pi, not 28.274333882308139146163790449516.

    That's a better test of who's mastering mathematics.

    Memorizing Pi and doing multiplication on paper has almost no relevance at all to "mastering mathematics".

  • more than people (Score:3, Insightful)

    by hak1du ( 761835 ) on Wednesday April 07, 2004 @08:21AM (#8790824) Journal
    The mathematical literature is full of errors, oversights, invalid proofs, unstated assumptions, and probably even a certain share of deliberate fraud. See Lounesto's misconceptions of research mathematicians [www.hut.fi] for one expert digging into the mathematical literature.

    Computers are far better at ferreting out oversights and missing assumptions, and at making sure that every t is crossed and every i is dotted. If a software system for doing proofs has shown itself to be fairly reliable on a bunch of samples, I'd trust it a lot more than I'd trust any working mathematician to carry out a complex proof correctly.

  • peer review (Score:2, Insightful)

    by Kent Simon ( 760127 ) on Wednesday April 07, 2004 @08:46AM (#8790994) Homepage
    People attempting the proofs are no less vulnerable to making mistakes than those who wrote the computer software to develop the proofs. Thus a system to verify the proofs should be in place for both groups. Any person working on a doctoral dissertation will have their work reviewed by a board. Any proof generated by a computer should likewise be reviewed by a board before it can be considered correct. I don't see where the issue is.
