The End of Mathematical Proofs by Humans?
vivin writes "I recall doing a bunch of mathematical proofs when I was in high school. In fact, proofs were an important part of math in the CBSE curriculum in Indian schools. We were taught how to analyze complex problems and then break them down into simple (atomic) steps, much like the derivation of a physics formula. Proofs form a significant part of what mathematicians do. According to this article from the Economist, however, the use of computers to generate proofs is causing mathematicians to 're-examine the foundations of their discipline.' Critics of computer-aided proofs say that such proofs are hard to verify due to the large number of steps and hence may be inherently flawed. Defenders point out that some non-computer-aided proofs are also rather large and unverifiable, like the Classification of Finite Simple Groups. Computer-aided proofs have been instrumental in settling some vexing problems, like the Four Color Theorem."
Science by AI (Score:2, Insightful)
Critics Reaction... (Score:5, Insightful)
So basically what they are saying is that if the proof is too long to be checked, then it is flawed? WTF?
Creativity (Score:5, Insightful)
I for one welcome our new robotic theorem-proving overlords.
If computers could write proofs... (Score:5, Insightful)
As a tool to produce vast quantities of precise logical porridge quickly, computers have no equal in today's world, yet that is not what real mathematical proofs should be about.
Mathematical proofs should show short, clever ways of connecting otherwise disparate concepts that are only obvious in hindsight. This is where computers will always be weaker.
Seems simpler to prove proofs-by-computer (Score:3, Insightful)
To do that.... well, just make sure the program was designed by a correct computer.
Re:Critics Reaction... (Score:5, Insightful)
If you can't independently examine and verify your "proof" then how can it be considered proof of anything?
Re:Critics Reaction... (Score:4, Insightful)
Uh, I doubt it. (Score:1, Insightful)
Anyway, the entire point of a proof is to be read by humans. This isn't like, say, a computer program, where the point is the output it produces. A mathematical proof is something humans create in order to convince other humans of things beyond all doubt. The four color theorem proof was pretty closely watched; I guarantee you that if you just go up to someone and say "statement *blah* because my expert system verifies it to be the case," there is going to be some fucking doubt still there.
Re:If computers could write proofs... (Score:2, Insightful)
So you're saying that even a theoretical sufficiently advanced computer would be unable to match the logic and creativity of a human being? I think a simple brute force counter written in Mathematica (unlimited integer lengths) whose output was executed by a CPU would prove you wrong.
Computers can separate wheat from chaff. That's what AI is all about. No, seamless human interaction is still a ways off, but as for number crunching, the ability to compare large, disparate sets, and logical "thinking", computers are vastly better suited to the task than humans in every respect.
Computer _aided_ proofs (Score:5, Insightful)
Computer proofs, like the graph coloring proof, are not proofs that are completely generated by a computer. The computer is merely used to brute-force a fairly large number of 'special' cases which together account for all cases. The construction of the proof method is, and will remain, human work, lest we create AI that matches our own. In short, they are computer-aided proofs only.
Further and more importantly, at this point we do not have and are not likely to have a machine that can prove any provable theorem (and fyi, not all truths in mathematics are provable!).
Re:Critics Reaction... (Score:1, Insightful)
Are computer-aided proofs really proof? (Score:3, Insightful)
No progress without understanding (Score:5, Insightful)
Although I discovered easier ways to do the arithmetic, I still knew the underlying theory of the equations and what the numbers were actually doing, not just what a computer was telling me.
Students should learn this: these are the basic building blocks of a science that dictates pretty much everything on this planet. Although they won't have a use for everything they are taught, they will have enough knowledge to problem-solve, which is what most of high school maths is designed to do; it trains our brains to think logically and to work out complex problems.
How are people going to be able to further physics, medicine, or biology if they get into their respective tertiary courses without understanding the basic principles of all science and have to learn it all over again??
Or what about when computers just won't work and things have to be done by hand??
It's fair to integrate computers into maths, but not at the expense of the theory that makes us understand how things work. We should not put all our faith in technology just because it's the easy thing to do.
Re:Science by AI (Score:5, Insightful)
Re:Godel/Turing/Cohen... (Score:3, Insightful)
Stupid post name (Score:1, Insightful)
The article is about computer-aided proof, where the human makes the essential proof decisions while leaving to the computer the tasks of properly formulating and performing very simple yet laborious steps (which would otherwise draw the human's attention away from the essential stuff).
You **cannot** actually prove a really complex theorem in a fully automated way, because the reasoning is non-monotonic, which implies huge computational complexity. Machines are not yet good at being creative.
Re:Critics Reaction... (Score:4, Insightful)
I could assert that 2+2=4. You may believe me, but have I really proven it to you? Not yet, so you don't need to believe me. If I instead say that the cardinality of the union of two disjoint sets, each of cardinality 2, is 4, then I've (sort of) shown you that 2+2=4, assuming you accept my definitions of disjoint sets, set union, and set cardinality (which presumably I've proven elsewhere, or taken from some other source). Now do you believe me that 2+2=4?
I could assert anything. You may or may not know whether it's true. A proof is just something to back up my assertion and convince you that I'm right. Hence, if a proof is unintelligible, it's pretty darn worthless.
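The parent's set-theoretic reading of 2+2=4 can be checked mechanically. A minimal sketch in Python (the element names are illustrative, not from any standard source):

```python
# Two disjoint sets, each of cardinality 2; the cardinality of
# their union is 2 + 2 = 4.
a = frozenset({'x1', 'x2'})
b = frozenset({'y1', 'y2'})
assert a.isdisjoint(b)             # the union below really is a disjoint union
print(len(a), len(b), len(a | b))  # 2 2 4
```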
Re:Creativity (Score:5, Insightful)
To some extent that's true, except in areas where human understanding has reduced mathematical proof to a mechanical process. For example, verifying algebraic identities, or even geometric proofs. A more advanced example is the Risch algorithm for elementary integration. It amounts to a proof that an integral either is or is not expressible in terms of elementary functions. Eventually we come to understand an area to such an extent that we can implement mechanical algorithms and move on. The proper role of the computer is to carry out these algorithms, so that we can use them to discover something else.
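The point about verifying algebraic identities being a mechanical process can be made concrete with a toy canonical-form checker. This is a sketch with a home-grown representation (the helper names are my own, not from any library): a polynomial is reduced to a canonical dict of monomials, so an identity holds iff both sides reduce to the same dict.

```python
from collections import Counter
from itertools import product

# A polynomial is a dict mapping monomials (sorted tuples of variable
# names) to nonzero integer coefficients.  Two polynomials are identically
# equal iff their canonical forms are equal as dicts, so checking an
# algebraic identity becomes a purely mechanical computation.

def var(name):
    return {(name,): 1}

def const(c):
    return {(): c} if c else {}

def add(p, q):
    out = Counter(p)
    for mono, coeff in q.items():
        out[mono] += coeff
    return {m: c for m, c in out.items() if c}

def mul(p, q):
    out = Counter()
    for (m1, c1), (m2, c2) in product(p.items(), q.items()):
        out[tuple(sorted(m1 + m2))] += c1 * c2
    return {m: c for m, c in out.items() if c}

a, b = var('a'), var('b')
lhs = mul(add(a, b), add(a, b))                                 # (a+b)^2
rhs = add(mul(a, a), add(mul(const(2), mul(a, b)), mul(b, b)))  # a^2+2ab+b^2
print(lhs == rhs)  # True
```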
Blowhard critics could use a logic course... (Score:4, Insightful)
Because if there's one thing that humans are better at than computers, it's performing large numbers of repeated steps. Flawlessly.
here y'all go again, panicking... (Score:3, Insightful)
It's easy to check a proof. It's hard to come up with a proof. Computers are great at checking proofs - all the program needs to do is verify whether the steps are logically correct or there's a discrepancy. Coming up with a proof, on the other hand, is a very hard task (being NP-complete, unless defined in a certain way) and thus usually requires a human (or sometimes, a lot of humans) to work on the problem.
A computer would not be able to come up with new principles of mathematics in order to tackle a given problem; it would only try every trick that had been discovered up to the point the program was created. (Of course that doesn't have to be the case, but my point is that human intervention would be required to "teach" the computer about a new concept so that it would try to use it in the proof.)
That is not to say that computers are useless in proofs. Obviously, they're often used as assistants in proving something-or-other, but there's also a direction in computer science where your computer would take a program that you wrote in a certain manner, and prove certain properties about it, e.g. that it is not possible to get out of array bounds in your C program...
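The claim that checking is the easy part can be sketched in code: a checker only has to confirm that each line is an axiom or follows from earlier lines by a rule. Here is a minimal sketch for a modus-ponens-only toy system (the tuple encoding of implications is an assumption of the sketch, not a standard format):

```python
# A formula is an atom (a string) or an implication ('->', A, B).
# A proof is a list of formulas; each must be an axiom or follow from
# two earlier lines by modus ponens (from A and A -> B, infer B).

def check_proof(axioms, proof, goal):
    derived = []
    for line in proof:
        justified = line in axioms or any(
            ('->', a, line) in derived for a in derived)
        if not justified:
            return False        # a single unjustified step sinks the proof
        derived.append(line)
    return bool(proof) and proof[-1] == goal

axioms = {'P', ('->', 'P', 'Q'), ('->', 'Q', 'R')}
good = ['P', ('->', 'P', 'Q'), 'Q', ('->', 'Q', 'R'), 'R']
print(check_proof(axioms, good, 'R'))   # True
print(check_proof(axioms, ['R'], 'R'))  # False: 'R' is never justified
```

Checking is a single linear pass; it is finding the list `good` in the first place that is the hard search problem.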
*yawn*
time to sleep
Re:Critics Reaction... (Score:5, Insightful)
Re:Consider the source (Score:5, Insightful)
Secondly, the claim that a magazine that opposes the death penalty and supports gay marriage is a right-wing rag (which presumably you meant in US terms) is kinda amusing.
The Economist, correctly stated, is a liberal magazine. It supports liberal economics and liberal social policy. Unfortunately the word 'liberal' in the US has been badly distorted.
Re:Consider the source (Score:1, Insightful)
I am a mathematician and... (Score:2, Insightful)
Re:If computers could write proofs... (Score:2, Insightful)
Getting a program working is the first step. Making it optimize its output is the next. I think you're going to see better proofs from computers.
Re:Blowhard critics could use a logic course... (Score:5, Insightful)
Anyway, the problem isn't the ability for the computer to perform flawlessly, the problem is in our ability to accurately specify to the computer what we want it to do. It's the whole "fast working idiot" thing, mechanical perfection isn't much good if we wind up just directing the computer to perfectly, flawlessly do the wrong thing very quickly. We have enough trouble convincing ourselves real-world software is going to do what we wanted it to after it compiles; and in that case we at least have the advantage we can run it and test it to see if it does what we expect. With software-generated proofs, not so much, since the program IS the test.
I think computer-aided logic can be useful if we just think of a proof-generating program as a funny, mechanically verifiable sort of abstraction, but if you find yourself making an argument that rests on assuming that a computer program you wrote does what you think it does, then that is problematic.
Re:Godel/Turing/Cohen... (Score:3, Insightful)
Turing applied Godel's work to Turing machines and showed that there are non-recursive functions. No talk of formal systems here.
I have no idea why you're mentioning Cohen, since his most famous work had to do with showing that there are models in which the Continuum hypothesis fails and models in which the Axiom of Choice fails. Not related to formal systems at all. Just Zermelo-Fraenkel set theory.
All of this is tangential. Assuming a generous reading of "formal system" as "formal logic", your alleged "limitations" of formal systems come down to a simple corollary of Godel's completeness theorem: given a fixed set of sentences S, there is a proof of A from S iff A is true in every model of S.
Re:Science by AI (Score:2, Insightful)
We'll have to accept the result and learn it, without knowing whether it is true. Some proofs will become impossibly large and complex, and with them will come theorems that are more complex but still understandable... yet further proofs will be based on those theorems, and so on.
So yes, I think the theorems proven this way will be taught in high school, but not the proofs themselves. After all, everyone has a limit; we just need to find the global limit of humankind, which comes closer and closer with the use of the computer.
Re:Critics Reaction... (Score:3, Insightful)
Re:Science by AI (Score:5, Insightful)
Re:The best math is always elegant. (Score:5, Insightful)
Re:Critics Reaction... (Score:2, Insightful)
Re:Critics Reaction... (Score:5, Insightful)
Basically, at a certain point, you just have to "believe" that your axioms, logic, whatever you call it, is consistent. Because to prove it, you'd again need axioms, a logic, etcetera, ad infinitum.
The proof is sometimes the easy part (Score:2, Insightful)
Perhaps the most difficult part in my own work as a mathematician is understanding an area well enough to create meaningful, interesting conjectures. In many cases, the proof falls out immediately. This obviously isn't universally true (e.g. Fermat's Last Theorem), but the mathematicians that I have the most respect for are the ones who are able to see the connections between seemingly unrelated areas and ask the right questions to pull it all together.
As others have said, computing may be able to verify an existing proof, but I think the real creativity and beauty in mathematics is often found in the initial formation of the conjecture.
Re:Critics Reaction... (Score:3, Insightful)
One should of course ensure that the program is correct and everything is as good as possible, but I don't see much difference between a proof verified by a bunch of independently written computer programs and a proof verified by a bunch of humans.
Re:If computers could write proofs... (Score:4, Insightful)
what about the software (Score:4, Insightful)
Secondly, how fast does software progress? Suppose we all had computers 60 billion times faster than we do now. What would we do with them? Run Swing-based Java applications with tolerable responsiveness, play solitaire faster, run Doom 5... (although the frame rate might be a bit low). OK... great.
Intelligence and computing power are orthogonal concepts: suppose you communicated with aliens who were 100 light-years away; would they be less intelligent because it took 200 years to get an answer? Anything you can do on today's supercomputers, you can do on a pocket calculator (with enough memory)... it just takes longer.
Lastly, in the long run, computers won't outgrow our brains; they will be integrated with our brains.
Great mathematicians (Score:3, Insightful)
An insight that enabled a five page proof of Fermat's Last Theorem using nothing more than high school algebra would be a major result, even though much more complex proofs exist.
Re:Science by AI (Score:3, Insightful)
Surely you mean it's impossible to create a Turing machine that will determine, for every expression, whether it is true or false; i.e., that there will always exist an expression that cannot be proved or disproved? I don't see how this prevents a computer from proving or disproving a statement where such a proof exists.
Also, I don't see how a Turing machine is different from a human. We can't prove whether the continuum hypothesis is true or not in ZFC; that doesn't mean that we can't prove things at all.
Metaphor (Score:5, Insightful)
This takes a ridiculous amount of pattern recognition skill (which is one area where computers tend to be outperformed by all comers) and the ability to find new ways to abstract data. A computer could possibly come up with an idea like more-than-3-dimensional space on its own, but I'd be very surprised if even the best one could think of something like topology or tensors on its own.
Production of unusual metaphors for things we thought we knew is a major driving force for the most important mathematical developments. It's not something I can see computers managing at any time in the near future.
Learn this! (Score:3, Insightful)
Proof generation can only be partially automated. It still requires massive human intervention, from choosing what to prove to deciding how to go about proving it. When you have a computer automatically generating a proof of, say, Godel's Incompleteness Theorem, it is actually a computer that was fed an intricately encoded version of the theorem, along with some form of hint as to how to go about proving it.
Not to mention that there is little use in a computer re-proving something that has already definitively been proven.
The difficulty arises from the fact that there are absolute limitations to what computers can do. These limitations were proven many decades ago... BEFORE THE CREATION OF ELECTRONIC COMPUTERS! But to the ignorant, these theoretical limitations do not matter... simply because they do not understand how absolute they are.
So again, learn this: you will time and again hear AI snake-oil salesmen spouting off crap about automating mathematical research, automating programming, etc. (Both are in a sense equally difficult, as proofs can be seen as programs and vice versa.) In reality, at best, automated theorem proving is a tool that can be used to help mathematicians and computer scientists do what they have been doing already. The level of automation varies, but in general it is very low.
A human still has to drive the entire process, similar to how a car automates walking, but doesn't automate where to go and how to get there.
Re:Science by AI (Score:5, Insightful)
This actually is more about the limitations of logic than the limitations of computers. Indeed, Godel's Incompleteness Theorem has nothing to do with computers; it is a proof that in any system of logic (that meets some very broad criteria) there must exist statements that are true but that cannot be derived from the postulates of the system by any sequence of logical steps. Adding additional axioms does not solve this; there always remain unprovable propositions. This limitation applies to proofs by humans as well as proofs by computers. However, the fact that there are some theorems that cannot be proved does not mean that there are not many others that can be.
However, the fact that there are some truths that are literally inaccessible from the postulates certainly suggests that there may be others that are accessible only by a very large number of steps, effectively requiring computers. I wonder if anybody has ever attempted to prove this?
Re:Science by AI (Score:3, Insightful)
In the case of the continuum hypothesis, mathematicians are hoping to come up with an axiom that "sounds true" and makes sense, one that will settle the question. This is Godel's legacy: that we can think about whether an axiom is "right", and that mathematics is not purely an exercise in manipulating symbols by fixed rules.
That's the difference between humans and today's computers. We can't prove the continuum hypothesis from the existing axioms either, but perhaps we can find a "good" axiom that will resolve the question.
That being said, a great deal of mathematics is manipulating symbols by fixed rules. Originally we talked about "computations": manipulating numbers. Then we moved on to algebraic equation solvers and other symbol manipulation (a la Mathematica or MathCAD), and we have now extended it to the realm of mathematical proof. This is an important change in mathematics, but not the end of human participation in it.
Re:Godel's Theorem prevents this scenario (Score:2, Insightful)
Reason: a theorem is a statement which has a proof, and a proof is a finite sequence of logical statements. You can order all sequences of logical statements, first by the sum of the number of variables used and the number of statements, and lexicographically within each such size; then simply have a computer run through all sequences of logical statements in that order, checking whether each statement in the sequence is an axiom or follows by the rules of logic from earlier statements, and whether the final statement is the one you want proved. The computer will eventually reach a proof of whatever theorem you asked it to prove. Note that if you asked it to prove a statement which is not a theorem, it will never stop. Godel's theorem tells us that there are some statements for which not only will the computer fail to prove the statement true, it will also never prove the statement false.
The problem with trying to do this in practice is that for any interesting problem the shortest proof might not fit in a telephone directory - and the computer would be trying to find it by checking through all sequences of that size. It would take far too long.
The theoretical problem you refer to is whether one can write a program which will correctly classify all statements as true or false - this is not possible; some statements are neither. If you allow it to classify as 'neither' as well, it is still not possible - the sketch above will never classify any statement as 'neither' because with such statements it never halts, for example. Turing proved that no program can correctly classify all statements.
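The enumeration idea above can be sketched as a forward search that closes a set of known formulas under modus ponens; like the lexicographic enumeration, it can confirm derivability but cannot, in general, refute it (the tuple encoding of implications is an assumption of this toy sketch):

```python
# Atoms are strings; implications are ('->', A, B) tuples.
# Instead of enumerating proof texts in order, repeatedly close the set
# of known formulas under modus ponens until the goal appears or the
# search saturates.

def provable(axioms, goal, max_rounds=100):
    known = set(axioms)
    for _ in range(max_rounds):
        if goal in known:
            return True
        new = {f[2] for f in known
               if isinstance(f, tuple) and f[0] == '->' and f[1] in known}
        if new <= known:        # saturated: nothing new will ever appear
            break
        known |= new
    return goal in known

axioms = {'P', ('->', 'P', 'Q'), ('->', 'Q', 'R')}
print(provable(axioms, 'R'))  # True
print(provable(axioms, 'S'))  # False: underivable in this finite system
```

This tiny system saturates, so it can answer "no"; in a system that can generate infinitely many formulas, the search for a non-theorem would simply never halt, which is exactly the asymmetry described above.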
Re:Science by AI (Score:2, Insightful)
Also, there is no general algorithm for proof searching: we can't just enumerate all true predicates in a given theory, and we can't (generally) determine whether two predicates are equivalent.
So computers are nowhere close to substituting for a human.
Re:Critics Reaction... (Score:3, Insightful)
Read The Man Who Mistook His Wife For A Hat [amazon.com] or one of the many other amusing (and amazing) books about our imperfect minds. If there can be a guy (apparently self-conscious, intelligent and normal) who can't tell the difference between his life-long female companion and a hat, how can you be sure that a particular mathematician can tell the difference between a correct step and a marginally, deceitfully incorrect one? The answer: you can't. To err is human. Human verification is not inherently better than computer verification, and in the future (as computers improve) it will gradually become worse.
Re:Critics Reaction... (Score:2, Insightful)
I do not suggest publishing only formalized proofs, or teaching mathematics using formalized proofs. But if you have a "library" of formalized theorems (similar to having a library of subroutines), then it should be possible to translate your proof into a formal text that is not long, boring or unclear. And, in the end, you could add your new proof to the library!
I believe that in time it will become common practice to publish both an "informal" and a formal proof.
The realization of the striking similarity between proofs and algorithms will eventually make the complete formalization a natural part of the mathematical culture.
Re:Seems simpler to prove proofs-by-computer (Score:1, Insightful)
A proof, OTOH, will show you (to a high degree of probability, since humans are fallible) not only that they are equal, but also *why* they are equal. *That* is the importance of proof!
Re fallibility: About 1/3 of the mathematical papers that I've refereed have contained a fatal flaw. Just because someone asserts that they have a proof doesn't make it so.
Re:Science by AI (Score:3, Insightful)
Experimental Mathematics (Score:2, Insightful)
As an example of this technique, I took part in an investigation at Liverpool Uni of the local classification of maps from the plane to the plane. Here we were looking for special cases in a four-parameter family of maps, i.e. there were four numbers you could change, and each set of parameters gave a different mapping. Using visualisation programs on a computer, I managed to find a special case which had not been found before. When I showed the pictures to the rest of the department, no one believed them, as they seemed to contradict the theory. Was it a bug in the software, a misleading picture, or a bug in the theory?
In the end the experimental pictures proved to be correct, and the theory was improved to account for them, involving lots of pen-and-paper mathematical work.
This sort of investigation is a long way from the grand formal proofs in the article, but it is perhaps more indicative of the way that computers can be used to inform mathematics, more like the experimental physics/theoretical physics divide. In this case it is the generation of a counter-example.
One of the big problems with using computers is infinity. Most interesting problems in mathematics involve an infinite number of cases. Fermat: a^n+b^n=c^n involves four integer variables, each ranging over a countable infinity. In the map case above it was four real numbers (uncountable infinities). Furthermore, theory had proved that the underlying problem (all possible maps), which involved a countably infinite number of parameters which could each take an uncountably infinite number of values, could be reduced to just four parameters. Even with Moore's law, it's going to be hard to check all those cases. In the four colour theorem there is an infinite number of patterns to be considered; the really smart bit in the theorem was showing that these could be reduced to just a finite number of special cases. In the classification of simple groups we have several infinite families of groups and a few (26) special cases.
Infinity will also show its head in formal methods, or more to the point, combinatorial complexity. As the number of symbols in the formal sequence increases, the number of cases you need to consider grows rapidly; there are hints of NP-hard problems here, and of testing for primality. This makes things hard when you start with a few axioms and try to find all the logical consequences.
Personally I don't have a problem with either of the proofs mentioned in the article. Both follow the same pattern: 1) reduce an infinite number of cases to a finite number of cases, 2) then check those using a computer. I can check the mathematics of 1) and write my own program to check 2). Hopefully the code's open-sourced as well. That doesn't mean they're particularly satisfying proofs, or particularly elegant; 10,000 special cases does not really flip my mathematical switch. The classification of simple groups does have a more elegant result, and deeper consequences. Wiles's proof for Fermat, and the more interesting underlying conjecture, though long, is much more elegant and, more to the point, much deeper, opening up a whole new field.
At the end of the day, it's another example of the typical stuff we get in the press. Nothing much new here. This debate over the four colour theorem's computer proof has been around for a good while now, and mathematics has for the most part gone on as normal.
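The reduce-then-check pattern scales down to a toy example: to show that n^2 mod 4 is always 0 or 1, a human argument reduces the infinitely many integers to four residue classes, and the computer checks those exhaustively.

```python
# Claim: for every integer n, n*n % 4 is 0 or 1.
# Human step: n*n % 4 depends only on n % 4, so four residue classes
# cover all infinitely many integers.
# Computer step: check the four residue classes exhaustively.
cases_ok = all((r * r) % 4 in (0, 1) for r in range(4))
print(cases_ok)  # True
```

The four colour proof is this same shape with vastly more (and vastly less surveyable) special cases.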
Re:Computer _aided_ proofs (Score:3, Insightful)
(and fyi, not all truths in mathematics are provable!)
This is wrong. It is based on a misunderstanding of Godel's proof that has been popularized by various authors.
All truths in mathematics are provable. However, assuming our system of mathematical axioms is consistent, there are some statements that are neither true nor false. These statements have not been determined by the axioms. Furthermore, for any computable set of mathematical axioms we choose, there are always some statements which are undecided. Godel proved this by coming up with a technique that always generates one of these undecided statements.
Insight (Score:1, Insightful)
Proofs give correctness. Good proofs give insight.
Insight is what drives math forward; a clever and insightful proof is vastly better than a merely correct one.