Are Computers Ready to Create Mathematical Proofs? 441
DoraLives writes "Interesting article in the New York Times regarding the quandary mathematicians are now finding themselves in. In a lovely irony reminiscent of the torture, in days of yore, that students were put through when it came to using, or not using, newfangled calculators in class, the Big Guys are now wrestling with a very similar issue regarding computers: 'Can we trust the darned things?' 'Can we know what we know?' Fascinating stuff."
Rumsfeld, anyone? (Score:5, Funny)
Reminds me of Rumsfeld [about.com]... "Reports that say that something hasn't happened are always interesting to me, because as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns -- the ones we don't know we don't know."
Rumsfeld, anyone?-Overload. (Score:3, Funny)
BOOM!
"Cleanup in aisle 10"
Re:Rumsfeld, anyone? (Score:2, Funny)
KFG
Re:Rumsfeld, anyone? (Score:5, Informative)
KK | KD
---+---
DK | DD
So, you can:
Know that you Know something
Know that you Don't Know something (the second most common)
Don't Know that you Don't Know something (most things fall in this category for most people)
Don't Know that you Know something (the most interesting of the categories)
Big, huge, red tape operations can easily fall into the latter category (DK)... since it is rather easy for a group to obtain knowledge, yet be unaware of it [political commentary omitted].
Re:Rumsfeld, anyone? (Score:4, Funny)
Re:Rumsfeld, anyone? (Score:5, Interesting)
Men are four:
He who knows and knows that he knows; he is wise, follow him.
He who knows and knows not that he knows; he is asleep, wake him.
He who knows not and knows that he knows not; he is ignorant, teach him.
He who knows not and knows not that he knows not; he is a fool, spurn him!
I'm sure it's coloured my attitude to supporting users:-)
Re:Rumsfeld, anyone? (Score:5, Funny)
My boss, who knows that he knows not;
But pretends that he knows that he knows;
And he's so convincing at it, that;
People who know that he doesn't know;
Believe that he knows that he knows anyway.
Re:Rumsfeld, anyone? (Score:5, Interesting)
Gödel's theorem pretty much says that there are things that you can NEVER know are true or false... and that in some cases you can PROVE that you can never know them.
This leads to some other states that your knowledge can be in:
CERTAINTY OF UNCERTAINTY: In some cases you can know (by solid mathematical proof) that you can never know some particular thing. For example, we know with absolute certainty that you can't solve the 'Halting Problem' (to prove whether an arbitrary computer program will eventually halt or whether it'll run forever). That's something we KNOW we'll never be able to do no matter how smart we get. That's not the same as KK/KD/DK or DD. It's tempting to lump this in with KD - but in the case of simply being aware that we are ignorant of something, we might take steps to resolve that ignorance. In this case, we know that this is something we cannot EVER know.
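The diagonal argument behind that impossibility can even be sketched in a few lines of Python (a toy illustration, not the formal proof): given any claimed halting oracle, you can construct a program that the oracle must misjudge.

```python
def make_counterexample(halts):
    """Given any claimed halting oracle, build a program it misjudges."""
    def g():
        if halts(g):        # if the oracle says g halts...
            while True:     # ...g loops forever
                pass
        return "halted"     # otherwise g halts immediately
    return g

# a (necessarily wrong) oracle that claims nothing ever halts:
def says_never_halts(program):
    return False

g = make_counterexample(says_never_halts)
# the oracle says g runs forever, yet g actually halts:
print(says_never_halts(g), g())
```

Whatever the oracle answers about g, g does the opposite, so no correct oracle can exist.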
CERTAINTY OF POTENTIAL CERTAINTY: If you know the current "largest known prime" - then by definition, you don't know of a larger prime - but you DO know that you could definitely find a larger prime if you really wanted to.
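That "potential certainty" is easy to make concrete: Euclid's theorem guarantees a larger prime always exists, so a brute-force search (a minimal Python sketch) is guaranteed to terminate, even though nobody knows in advance which prime it will find.

```python
def is_prime(k):
    if k < 2:
        return False
    d = 2
    while d * d <= k:
        if k % d == 0:
            return False
        d += 1
    return True

def next_prime_after(n):
    # guaranteed to terminate: Euclid proved there is always a larger prime
    candidate = n + 1
    while not is_prime(candidate):
        candidate += 1
    return candidate

print(next_prime_after(13))  # → 17
```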
UNCERTAINTY OF POTENTIAL CERTAINTY: Then there are other things that you might not know whether they are knowable or not - such as the proof of some of the classic mathematical problems. Before Fermat's last theorem was proved, nobody was sure whether it could be proved or not.
Re:Rumsfeld, anyone? (Score:3, Insightful)
"it seems that I know that I know,
what I would like to see
is the I that sees me,
when I know that I know that I know."
Great poem; Alan Watts used to quote it a lot when talking about the illusion of the ego.
Re:Ok I am always confused about the difference. (Score:5, Insightful)
I don't actually believe particularly firmly in that model, though, because I don't agree with the D-K dichotomy that underlies it. It's your usual classical Greek quadrant, which means it springs from a dual dichotomy, or in this case a dual-aspect single one. Dichotomy (or even a one-dimensional spectrum) is not the only way to look at things, but it is a dangerously compelling model - that is, when people have been presented with a dichotomy, they typically become unable to think outside it. And the defence of a dichotomy is usually a tautology - I mean it's obvious, isn't it, you either know something or you don't?
Still useful and interesting if you can get it out of your head when needed, though.
Re:Ok I am always confused about the difference. (Score:4, Informative)
Canada's former prime minister's "proof" (Score:3, Funny)
(PM Jean Chretien [bayrak.ca], when asked what kind of proof he would need of weapons of mass destruction in Iraq before deciding to send Canadians along on the Bush invasion-September 5th on CTV news)
Can someone elaborate on... (Score:2, Interesting)
I guess the obvious Monte Carlo simulation doesn't constitute "proof," but still, what exactly is the big question here?
Re:Can someone elaborate on... (Score:5, Informative)
Re:Can someone elaborate on... (Score:4, Insightful)
What you've done is shown that, for a subset of the possible sets of oranges and a subset of the possible stackings, the pyramid is better.
A mathematical proof shows something is true for every single element of the sets.
Re:Can someone elaborate on... (Score:3, Insightful)
Re:Can someone elaborate on... (Score:2)
Re:Can someone elaborate on... (Score:3, Informative)
Re:Can someone elaborate on... (Score:2, Interesting)
If by "see" you mean "conjecture" then, yes, it is easy to see. If by "see" you mean "prove", and thereby catch a glimpse into the fundamental reasons WHY this is the best possible stacking, then it is exceedingly difficult.
How would you go about designing a proof that would take into account every single one of the infinitely many configurations of oranges? Could you even l
Re:Can someone elaborate on... (Score:3, Insightful)
all of our knowledge is based upon assumptions. For instance, when it comes to number systems... we have a set of assumptions that we say are true, and there is no arguing about it. For instance, in the binary system, we say the language consists only of 0 and 1. We say there are two operators, AND/OR, and define their operations... we say 0 AND 0 is 0... we say 1 AND 1 is 1. There is NO PROVING THIS... it's a given. We can
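The poster's point can be made concrete in a short Python sketch: the operators are simply defined by truth tables, and what CAN be proved are consequences of those definitions.

```python
# AND and OR over {0, 1} are defined by fiat -- these tables ARE the
# definition; there is nothing to prove about the entries themselves
AND = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}
OR  = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 1}
NOT = {0: 1, 1: 0}

# consequences of the definitions, however, CAN be checked/proved,
# e.g. De Morgan's law: NOT(x AND y) == (NOT x) OR (NOT y)
de_morgan = all(NOT[AND[(x, y)]] == OR[(NOT[x], NOT[y])]
                for x in (0, 1) for y in (0, 1))
print(de_morgan)  # → True
```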
Create vs. Verify (Score:5, Informative)
Re:Create vs. Verify (Score:5, Insightful)
A computer could quite easily come up with a very complex answer simply because it can do more calculations in a given second than a human can. Of course, humans take in a lot more variables at a given time, so in that respect the comparison runs the other way - but I'm sure you get my point.
I think you test it with progressively more different problems; if the answers come out precise and accurate, then you can build your level of trust in the system. Kind of like the process of getting users to trust saving to the server after a bad crash wiped everything because the previous admin was a moron.
Re:Create vs. Verify (Score:5, Funny)
The Ultimate Question?
Yes!
Of Life, The Universe..
And everything?
And Everything.
Yes.
Tricky..
But can you do it?
Who? Tell us!
I speak of none, but the computer that is to come after me.
What computer?
A computer whose merest operational parameters I am not worthy to calculate and yet I will design it for you. A computer which can calculate The Question to the Ultimate Answer. A computer of such infinite and subtle complexity that organic life itself will form part of its operational matrix. And it will be called.. The Earth.
What a dull name.
indeed (Score:5, Insightful)
Though Dr. Hoare danced around that question a little, presumably that aspect of the project would have to be done by hand, a monumental task to say the least.
Re:indeed (Score:5, Informative)
The resulting proofs are still hairy enough that they have to be checked by machine, but the size and complexity of the proof-checker is much less than that of the compilation toolchain. That means that while there's still some code that has to be trusted, it's much less. Here's my informal scariness hierarchy:
Normal model (you have to trust everything) > type safe languages (you have to trust the compilers / interpreters) > proof-carrying code (you have to trust the proof-checker*).
If you haven't already, you should definitely read Ken Thompson's Turing Award lecture, "Reflections on Trusting Trust" here [uoregon.edu].
* - Pedantry point: If you're talking about Necula's original PCC work, you also have to trust the verification condition generator, which is some fairly deep voodoo. Appel's Foundational PCC addresses this to a significant extent.
Re:indeed (Score:5, Interesting)
Say, an optimizer is buggy. Say, in a certain line of code in the compiler, there's a '>' instead of a '<' sign. Unfortunately, this causes, under certain, rare conditions, the optimizer to miscompile '<' into '>'. Now, since triggering that bug is rare, it goes unnoticed for a while, and the buggy compiler is installed as default compiler for further development.
Now, at some later time, the bug is found: a developer looks at the code and finally finds the error. He replaces the '>' with '<' in the hope of fixing the bug. Unfortunately, he doesn't recognise that this very condition now triggers the bug, causing the compiler to internally replace '<' with '>' again, reintroducing the bug.
Now, the test case will show that the bug is still there, but the developer will simply not find the bug in the code. After all, the bug isn't in the source any more. It's only there in the compiled program. At some later time, though, it might magically disappear by some unrelated change nearby, which causes the bug not to be triggered any more.
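The scenario can be mimicked in miniature (a toy sketch: the "compiler" here is just a string transform, and the deliberate flaw stands in for the rare miscompilation). The key point it demonstrates is that the source can be perfectly correct while the built artifact is not.

```python
# toy model: the installed "compiler" miscompiles a comparison,
# flipping '<' to '>' in the code it emits under a rare condition
def buggy_compile(source):
    return source.replace("a < b", "a > b")

# the developer's source now looks perfectly correct...
correct_source = "def less(a, b):\n    return a < b"

# ...but it must be built with the buggy installed compiler:
namespace = {}
exec(buggy_compile(correct_source), namespace)

print(namespace["less"](1, 2))  # → False, despite the correct source
```

Inspecting `correct_source` reveals nothing wrong; only the compiled result misbehaves, which is exactly why such bugs are so hard to hunt down.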
Re:indeed (Score:5, Interesting)
Most faults are software problems, not hardware, so having different machines won't help. Further, interestingly, most major software faults (at least, of the sort that make it through serious testing) tend to be conceptual problems, not coding ones, and different people tend to make the same mistakes. This means in practice that even when you have completely independent software implementations (called n-version programming), they're frequently all wrong in the same way at the same times. See the famous Knight & Leveson [mit.edu] paper. [safeware-eng.com]
Re:Create vs. Verify (Score:2)
When something is too involved for a human to accomplish -- isn't this what we have computers for? So the solution is simple. Produce another system to verify the first system's proof. Ideally, this second system should be built by a group completely separate from the first so as not to inherit any design flaws or false assumptions.
The same methodology could be applied to
Re:Create vs. Verify (Score:2)
Just to clarify, do you mean the computer's accuracy in moving the bits around, or do you mean the human who wrote the program didn't make any mistakes?
Re:Create vs. Verify (Score:4, Insightful)
Of course, if it can't be understood by humans, it doesn't really help anything, as one of the main points of proving things is to gain insight into how to prove other things. But wouldn't building a system to verify proofs be itself a different but valuable tool?
Re:Create vs. Verify (Score:4, Insightful)
Re:Create vs. Verify (Score:5, Insightful)
Re:Create vs. Verify (Score:5, Insightful)
I'll give some trivial examples to illustrate:
For a famous example, it would provide a great deal of peace of mind if we could prove that P != NP. It wouldn't matter that we understand that proof so much as that it is *provably true*. If, on the other hand, it is proven false that is just as important (if not more so) and while an understanding of the proof might lead more easily to examples of such, we would know (for certain) that trusting public key encryption over the long run would be a Bad Idea(TM) (for example) and that it is just a matter of time before a polynomial time algorithm is developed.
(Not that such will necessarily be fast, mind you, but we would know it would exist).
For another example - there are certain things that can be inferred if the Poincare Conjecture is true for N=3. If we can prove the Poincare Conjecture is true (and it is now thought that it might be), it means things to physicists, even if we don't know why it is true.
The bigger question here is "can we trust it if we can't verify it by hand."
Re:Create vs. Verify (Score:4, Interesting)
OK, yes, that would definitely be worth doing. Various chunks of 3-dimensional geometric topology currently include the caveat ``... if the 3-dimensional Poincare Conjecture is true.''
(and it is now thought that it might be)
Depends who you talk to, of course. Perelman reckons he's got an outline proof of Thurston's Geometrisation Conjecture, which implies the Poincare Conjecture as a corollary. But as I understand it he's not actually proved it, just described how one would go about proving it - essentially saying ``It's over the other side of that hill''. Now Perelman is a very bright lad (cleverer than I am) but the Poincare Conjecture has a long history of almost being proved (my research supervisor (I've just finished a PhD in algebraic and geometric topology) almost did it about twenty years ago, for example, but there was a very subtle and unresolvable flaw in his argument) so I'm going to reserve judgement until the paper turns up in the Annals of Mathematics.
I wouldn't be surprised if the Conjecture turned out to be true, as the corresponding result is known to be true in dimensions 1,2,4,5,... (well, to be picky, the smooth 4-dimensional case hasn't been proved yet, but the topological and piecewise-linear ones have). But equally, I wouldn't be shocked if it weren't - 3-dimensional topology is a weird subject, as there's enough room to start discussing interesting things, but there's not so much room that everything trivialises.
Quick precis of the Conjecture:
``If it looks like a sphere, it is''.
Slightly less quick precis of the Conjecture:
Given any topological space X, we can assign a sequence of groups (`homotopy groups') pi_n(X) to it, so that pi_n(X) in some sense describes the n-dimensional structure of X. An n-manifold (a topological space which locally `looks like' ordinary, flat n-dimensional space) which has the same sequence of homotopy groups as the 3-dimensional sphere, is called a `homotopy 3-sphere'. The Poincare Conjecture is that the 3-sphere itself is the only homotopy 3-sphere.
So, to prove it, we have to show that every homotopy 3-sphere is topologically equivalent (`homeomorphic') to the 3-sphere. And to disprove it, we just have to find one counterexample - a homotopy 3-sphere which isn't equivalent to the 3-sphere itself.
Now to use a computer to prove the Conjecture, we need to find some way of verifying that every possible homotopy 3-sphere is equivalent to the 3-sphere. This is theoretically doable - there's an algorithm (the Rego-Rourke algorithm) which lists (with redundancy) all possible homotopy 3-spheres. There's another algorithm (the Rubinstein-Thompson algorithm) which, given a homotopy 3-sphere, can tell if it is equivalent to the 3-sphere. So in theory we just feed the output of the RRA into the RTA.
Except that there are an infinite number of possible homotopy 3-spheres to check, so if the Conjecture is true, this program will never terminate.
Now if you can find some way of reducing the cases under consideration to some finite subset (by, for example, showing that all but a finite number of homotopy 3-spheres obviously satisfy the conjecture) then using a computer suddenly becomes a worthwhile endeavour. This is basically what Appel and Haken did with the Four-Colour Map Theorem.
The computer approach is also useful for searching for counterexamples, and for verifying that all cases up to some level of complexity satisfy the Conjecture.
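The shape of that computation can be sketched with hypothetical stand-ins (the real algorithms are far beyond a toy, so `enumerate_candidates` and `is_standard_sphere` below are placeholders, not implementations):

```python
from itertools import islice

def enumerate_candidates():
    # stands in for the Rego-Rourke enumeration: an infinite stream
    n = 0
    while True:
        yield n
        n += 1

def is_standard_sphere(candidate):
    # stands in for the Rubinstein-Thompson recognition algorithm
    return True  # in this toy, every candidate checks out

# verifying all cases up to some complexity bound is fine...
verified_up_to_bound = all(is_standard_sphere(c)
                           for c in islice(enumerate_candidates(), 10_000))
print(verified_up_to_bound)  # → True

# ...but `all(is_standard_sphere(c) for c in enumerate_candidates())`
# would never terminate if the conjecture is true: there is always
# another candidate to check
```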
But where computers aren't currently going to help, is in the actual creative side of things - a lot of important modern mathematics (and the related theorems and proofs) has come from very clever humans saying ``OK, what happens if we try this (utterly weird and counterintuitive) thing here'' and having the aesthetic sense to tell when something's actually interesting or just irrelevant. Until computers are capable of this kind of creative/intuitive/aesthetic/etc behaviour, I doubt they're going to be replacing human research mathematicians any time soon.
nicholas
'Can we trust the darned things?' (Score:4, Funny)
Re:'Can we trust the darned things?' (Score:2, Insightful)
Will this complicate what we can understand? (Score:3, Interesting)
Re:Will this complicate what we can understand? (Score:2)
So do I. They should be banned from school, just like guns. It's one thing to use them in business for quick work. They have no place in school where you're supposed to be learning how to add.
Re:Will this complicate what we can understand? (Score:2)
Re:Will this complicate what we can understand? (Score:2)
Because you're just repeating the formula. The answer is single number, real or otherwise. I need to know the actual area of the circle, not the formula for figuring it out.
Re:Will this complicate what we can understand? (Score:3, Insightful)
Simple. The student should know the value of pi to a certain number of significant figures.
Why? Einstein was once asked how many digits of Pi he could remember. He replied "maybe three or four". The interviewer then exclaimed: "But Sir, some people have memorized hundreds of digits!" Einstein's reply was: "Ah, but I know where to look them up if I need them."
With a calculator, they're just typing in 3*pi and getting an answer with 8+ significant digits. Without a calculator, they're actually calculat
Re:Will this complicate what we can understand? (Score:2)
For some tests, you shouldn't be allowed to use a calculator (knowing basic sine and cosine values, for example, or numerical integration without a calculator). Other
Re: . . . As we know it. (Score:5, Interesting)
On the other hand, many students are prone to using these devices and applications as crutches and try to get away with doing things like using their calculator's implementation of Newton's Method instead of solving the problem themselves.
Some professors have found solutions to this problem, others haven't. When I was at college, I think our math department had achieved a pretty good level of harmony with Mathematica - we were expected to do a lot of stuff by hand or in our heads - Gaussian elimination, for example - but in order to make the math seem useful, we were also expected to be able to solve real-world problems with the stuff we learned. Not contrived "real-world" problems from your high-school textbook, but stuff like interpreting large and dirty scientific datasets where the specific technique we would have to use to solve the problem was something we could figure out, but not something that had been explicitly laid out by the textbook or in lecture. We had to apply the concepts we had learned to figure out the problem, but there was no way we were going to chug out that arithmetic by hand - when was the last time you tried to work on a 16x16 matrix using a pencil and paper? How about a 100x100 one?
I think so, yes. (Score:3, Informative)
Re:I think so, yes. (Score:5, Insightful)
You beat me to it (Score:5, Informative)
Here's an excerpt:
In 1933, E. V. Huntington presented [1,2] the following basis for Boolean algebra:
x + y = y + x. [commutativity]
(x + y) + z = x + (y + z). [associativity]
n(n(x) + y) + n(n(x) + n(y)) = x. [Huntington equation]
Shortly thereafter, Herbert Robbins conjectured that the Huntington equation can be replaced with a simpler one [5]:
n(n(x + y) + n(x + n(y))) = x. [Robbins equation]
Robbins and Huntington could not find a proof. The theorem was proved automatically by EQP [anl.gov], a theorem proving program developed at Argonne National Laboratory.
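As a sanity check (not a proof of the theorem, which concerns arbitrary algebras satisfying the axioms), one can verify by brute force that both equations hold in the two-element Boolean algebra, reading '+' as OR and n() as NOT:

```python
def n(x):
    return 1 - x    # NOT

def plus(x, y):
    return x | y    # OR

values = (0, 1)
# Huntington: n(n(x) + y) + n(n(x) + n(y)) = x
huntington_holds = all(plus(n(plus(n(x), y)), n(plus(n(x), n(y)))) == x
                       for x in values for y in values)
# Robbins: n(n(x + y) + n(x + n(y))) = x
robbins_holds = all(n(plus(n(plus(x, y)), n(plus(x, n(y))))) == x
                    for x in values for y in values)
print(huntington_holds, robbins_holds)  # → True True
```

The hard part EQP settled was the converse direction: showing that every algebra satisfying commutativity, associativity, and the Robbins equation is Boolean.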
It was a big help with the 4-color map proof... (Score:5, Informative)
Text of the article (Score:2, Informative)
In Math, Computers Don't Lie. Or Do They?
By KENNETH CHANG
Published: April 6, 2004
A leading mathematics journal has finally accepted that one of the longest-standing problems in the field -- the most efficient way to pack oranges -- has been conclusively solved.
That is, if you believe a computer.
The answer is what experts -- and grocers -- have long suspected: stacked as a pyramid. That allows each layer of oranges to sit lower, in the hollows of the layer below, and take up less space than if the oranges
I see no good reason why not.... (Score:5, Interesting)
mod parent up, please (Score:2)
Re:I see no good reason why not.... (Score:2)
Re:I see no good reason why not.... (Score:3, Insightful)
It is amusing to see computers being used as proof bringers when you usually need a proof to program the darn things.
I always loved this picture (Score:2)
no (Score:2, Funny)
q.e.d.
Theorem Provers (Score:4, Informative)
Theorem provers [wikipedia.org] have been around for a long time. A net search should turn up a ton of hits. The key is to implement a system that can be verified by hand, and then build on it.
new facet of an old issue (Score:5, Insightful)
consider this: the famous Riemann Hypothesis has been tested for trillions of cases, and it has held true in every one. (If you want an explanation of the zeta function, look elsewhere; I don't have the time)
Now that means that it's *probably* true, but nobody accepts that as mathematical proof.
On the other hand, the classification problem for finite simple groups has been rigorously solved, but the collected proof (done in bits by hundreds of mathematicians working over 30 years) is tens of thousands of pages in many different journals. Given the standards of review, it is a virtual certainty that there is an error somewhere in there that hasn't been found. So, again, the solution to this problem is *probably* right, but it has been accepted as solved.
What's the difference between these two cases really? What's the difference between these and relying on computer proofs that are, again, *probably* right?
In this light, the math of the late 19th and early 20th centuries was something of a golden age: modern standards of logical rigor were in place, but the big breakthroughs still used elementary enough techniques that the proofs could be written in only a few pages, and the majority of mathematically literate readers could be expected to follow along. These days proofs run to hundreds of pages and only a handful of hyper-specialized readers can be expected to understand, much less review, them.
Re:new facet of an old issue (Score:3, Interesting)
Wouldn't this be a good use for the compute
Re:new facet of an old issue (Score:5, Insightful)
On the other hand, the classification problem for finite simple groups has been rigorously solved, but the collected proof (done in bits by hundreds of mathematicians working over 30 years) is tens of thousands of pages in many different journals. given the standards of review, it is a virtual certainty that there is an error somewhere in there that hasn't been found. So, again, the solution to this problem is *probably* right, but it has been accepted as solved.
What's the difference between these two cases really?
That one claims to be a proof and the other doesn't? You simply can't prove the Riemann Hypothesis by testing trillions of numbers (though if you find one case where it fails, you have disproved it). As a simple example, I can find trillions of numbers whose base-10 expansion is less than a googolplex digits long. Does this mean that all integers have this property? Of course not... So even if all of the calculations are right, you still don't have a proof.
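A classic small-scale illustration of why finite checking proves nothing: Euler's polynomial n² + n + 41 produces primes for forty consecutive values of n, then fails.

```python
def is_prime(k):
    if k < 2:
        return False
    d = 2
    while d * d <= k:
        if k % d == 0:
            return False
        d += 1
    return True

def euler(n):
    return n * n + n + 41

print(all(is_prime(euler(n)) for n in range(40)))  # → True: 40 cases in a row
print(is_prime(euler(40)))                         # → False: 1681 = 41 * 41
```

Forty confirmations in a row, and the conjecture "euler(n) is always prime" is still false; "trillions" is no different in kind.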
On the other hand, the classification of finite simple groups does claim to be a proof, and if there are no errors, it is a proof. You're right that there are probably errors, but these may be only minor errors that can be fixed. At least no one seems to have found evidence that the proof is completely flawed yet. But it's certainly possible that someone will find an insurmountable error in one of the proofs. There have been cases of propositions that were "proved" true for more than 80 years before a counterexample was found.
What's the difference between these and relying on computer proofs that are, again, *probably* right?
Again, it depends upon what the computer is trying to show. The computer proofs I'm familiar with are ones where the methods are documented, it's just that the computations are too tedious to do by hand. So you can read the proof and say "modulo software bugs, it's a proof". And then it works the same as science: anyone who wants to can repeat the proof for themselves, and see that they get the same answer. As more people validate these results, the likelihood of bugs goes down exponentially, and the likelihood of the proof being accepted increases.
Re:new facet of an old issue (Score:3, Interesting)
The Möbius function is a number-theoretic function with the value:
maturity (Score:3, Insightful)
As far as the calculator example goes, I believe a person should understand the fundamentals before using the calculator. Don't give a power tool to a kid. Somebody with an understanding of the fundamentals can wield the tool correctly and wisely, whereas some cowboy is just dangerous. The old saying comes to mind: "know just enough to be dangerous". As far as those oranges in the article go, somebody had better figure out a way to confirm the computer answer without having to go through the exact same meticulous steps. Don't put your trust in technology without the proof to back it up, because technology built by people is prone to error. Now I'm sure the orange problem won't cause harm to anybody, but I hope I never read such an article about a nuclear power plant!
I don't see what the big deal is (Score:5, Interesting)
"Automated theorem proving" is less an automatic process (like you'd get with an automated production line) and more of a mechanical assistance to the job (like using a fork-lift to move heavy things faster than you could by hand).
It not only takes work to convert a problem into a good representation; you then have to structure the problem statement in such a way that a theorem prover can make optimal use of it. Often, following the output, you're forced to prove lemmas (sub-proofs) along the way.
Then, when you finally get a proof, you get the joy of trying to simplify it to something that -can- be understood by a person; again, this is part of the process that can't really be automated well.
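For a feel of what's being mechanized, here is a minimal propositional resolution prover in Python (a toy: real provers like EQP work over equational first-order logic and add serious search heuristics). Literals are integers, with -x meaning NOT x; a proof is found when the empty clause is derived from the premises plus the negated goal.

```python
def resolvents(c1, c2):
    # every clause obtainable by cancelling a complementary literal pair
    return [frozenset((c1 - {lit}) | (c2 - {-lit}))
            for lit in c1 if -lit in c2]

def refutable(clauses):
    # saturate under resolution; True iff the empty clause is derivable
    clauses = set(map(frozenset, clauses))
    while True:
        pool = list(clauses)
        new = set()
        for i in range(len(pool)):
            for j in range(i + 1, len(pool)):
                for r in resolvents(pool[i], pool[j]):
                    if not r:           # empty clause: contradiction found
                        return True
                    new.add(r)
        if new <= clauses:              # nothing new: saturated, no proof
            return False
        clauses |= new

# prove Q from premises P and (P -> Q): refute {P, (-P or Q), -Q}
P, Q = 1, 2
print(refutable([{P}, {-P, Q}, {-Q}]))  # → True  (Q is provable)
print(refutable([{P}, {-P, Q}, {Q}]))   # → False (no contradiction)
```

Even here the human does the interesting work: choosing the encoding of the problem into clauses; the machine only grinds through combinations.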
change the title (Score:5, Insightful)
identity (Score:3, Funny)
Sure, it seems highly probable but.. I just.. I guess I'm a skeptic. All of logic seems like a joke to me, as long as this one little potentially huge loophole looms in the background...
Re:identity (Score:5, Informative)
So - when you come across a theorem that you can't prove but are pretty damned certain is true - you COULD choose to simply make it be an axiom. The problem is that the theorems you are subsequently able to come up with are only as reliable as your initial assumption of that axiom.
If your axioms eventually turn out not to match the real world - then all you have is a pile of more or less useless theorems that don't mean anything for the real world.
It's therefore pretty important to stick with a really basic set of axioms to reduce the risk that an axiom might turn out not to be 'true' for the real world and bring down the entire edifice of mathematics along with it.
If we ever found some sense in which A!=A then every single thing we thought we knew about math would be in doubt.
Re:identity (Score:4, Interesting)
--Stephen
Godel? (Score:2, Informative)
Wait, I know this one... (Score:5, Funny)
making hard things easy... (Score:4, Funny)
Also, chess is just Pattern Matching... I don't know if humans have the edge there or not.
regfree link (Score:3, Informative)
Damn, I should have submitted this (Score:3, Insightful)
Another issue is that you're then excluding any mathematicians who aren't also fairly adept programmers, from really understanding your proof.
All of this said, computers are necessary to do math these days and I think mathematicians should make use of them. I just don't believe we've reached a level of maturity in software development that meets the stringent requirements of mathematical proofs.
Why not.. (Score:3, Interesting)
Well can we trust humans? (Score:2)
The stack might get a bit deep, but... (Score:5, Funny)
The obvious solution is to have the computer create a new proof that shows that the algorithm it used to create the original proof is, in fact, correct.
A rather timely Slashdot fortune (Score:2, Funny)
Wrong Summary (Score:2, Informative)
4 color map problem (Score:2)
Re:4 color map problem (Score:3, Funny)
Verification of computer proofs is a pain... (Score:3, Interesting)
The problem with computer generated proofs is that in order to trust the result of the computer, you have to trust: the source code of the proof program, the compiler and toolchain that built it, and the hardware it runs on.
And of course, our understanding of the hardware depends on how accurate our understanding of the laws of physics is. Any mistake in either the source code, compiler, or hardware, and the proof produced is potentially incorrect. That's an awful lot of stuff you have to check just to make absolutely sure the computer is correct. Then, consider how many bug-free pieces of software you've encountered. ... Yeah, I can see why mathematicians would not trust computer generated proofs.
Of course, people are not infallible either, but that's well known and expected. It's all about how much uncertainty people are willing to accept.
The dawn of a new age of Math... and Science! (Score:2)
The hard part would become formulating the statements. For science, the benefit would be immeasurable. Ask the computer if it can be done, and get an answer. Knowledge could increase exponentially! (No, really --
Re:The dawn of a new age of Math... and Science! (Score:5, Insightful)
once upon a time, only advanced mathematicians knew calculus, but now we learn it in high school. Just wait until warp theory is an entry level college engineering course
Once upon a time, the majority of adult males knew how to trap a rabbit (or similar creature), gut it, skin it, start up a fire, cook it, and eat it.
I don't.
Heck, I couldn't even look up at the sky at night and tell you which way was north.
Once upon a time, most people could.
All I'm saying is that the amount of knowledge and skills the average human being can possess will not increase exponentially over time (barring artificial manipulation). We gain new skills as a population and lose old ones.
Re:The dawn of a new age of Math... and Science! (Score:3, Insightful)
Insightful up to the last paragraph. Sure I can only learn so much (knowledge and skills). Becoming good at playing the guitar means a trade off, and perhaps by making that choice you don't learn to skin rabbits, fly airplanes, and so on. Yet some have done several of the above. There is enough knowledge that you cannot learn it all. As a population we do not lose old skills, at least not near as often as we learn new ones.
Some things take years to learn, some take minutes to learn and years to mas
a mathematician's perspective (Score:5, Interesting)
there are also several levels of depth for proofs, ranging from "i've found a counter example so i can write the whole thing off as garbage" to "i have exhaustively and rigorously proved this starting with the basic axioms of number theory and worked my way on up"
the latter is really the only acceptable way to prove anything seriously. sometimes when you are reworking an already-done proof to illustrate a point, other mathematicians will allow a bit of latitude when it comes to cutting corners, but for a proof as far-reaching as the one in the article, i would only be interested in a "rigorous" proof, that is, one that started with the foundational tenets of mathematics and combined those to form and prove other postulates, etc. very much a form of abstraction, not unlike large development projects.
the problem arises when one (or several) humans have to be able to objectively check the whole thing. to use my prior example of a large development project, no one developer at microsoft understands the whole of windows. it's too big for a single human to understand. each developer knows what he needs to do to complete his part, and so on and so forth.
traditionally, for proofs, a single mathematician (or a small group) would hammer out the whole proof, so the level of complexity remained at a human-understandable level. (even if tedious) my concern, as a mathematician, with using an automated solution would be the rapidly growing order of complexity needed to properly back up increasingly complex proofs. as stated in the article, it's like trying to proofread a phonebook. (only, you must also consider that for every element of the proof (a particular listing) there are several branching layers of complexity (fundamental, underlying proofs many layers deep) underneath. this gets more complicated in an exponential fashion) obviously this approach will only remain human-checkable for relatively small problems. (programmers: think of some horrible nondeterministic-polynomial problem like the "traveling salesman" problem. systems, like humans, are a finite resource, but if you increase the size of the problem, the complexity will quickly grow far beyond your ability to compute. large proofs suffer from the same difficulties, if not quite as concrete and pronounced as NP algorithms)
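The traveling-salesman comparison can be made concrete with a quick back-of-the-envelope sketch (the numbers here are just to illustrate the growth rate, not anything from the article): the count of candidate tours grows factorially, which is the same flavor of blow-up that makes huge machine-generated proofs impossible to check line by line.

```python
from math import factorial

# Brute-force traveling salesman: with n cities, fixing the start city
# and ignoring tour direction still leaves (n-1)!/2 candidate tours.
# This is the kind of combinatorial growth that quickly outruns any
# human (or machine) checker.
def tour_count(n: int) -> int:
    return factorial(n - 1) // 2

for n in (5, 10, 15, 20):
    print(n, tour_count(n))
# At 5 cities there are 12 tours; at 20 cities, roughly 6 * 10^16 --
# already hopeless to enumerate by hand.
```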
in closing, i would have to agree that proofs, no matter the effort and computing time put into them, really should not be viewed as being as rigorous as those provable "by hand" and human-understandable, even if the computer has arrived at a satisfactory conclusion, because we have no way of KNOWING if the computer has built up the proof correctly, except that it says it has.
Re:a mathematician's perspective (Score:5, Insightful)
Sure we do -- typical theorem provers spit out a proof that can be checked by hand (god forbid), or else checked by a simple procedure. Understanding that a theorem prover is implemented correctly is tough, but understanding that a checker is implemented correctly isn't. I trust such proofs more than I trust hand-checked proofs, because humans are more susceptible to mistakes than computers are.
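To illustrate the parent's point, here is a toy sketch of a proof checker (the proof format and all names are made up for illustration; this is not the output of any real prover): a proof is just a list of formulas, each of which must be a hypothesis or must follow from two earlier lines by modus ponens. The checker is a few lines long, so it is far easier to audit than the prover that produced the proof.

```python
# Formulas: atoms are strings, implications are tuples ("->", p, q).
# A proof is valid if every line is a hypothesis or follows from two
# earlier lines by modus ponens (from A and A->B, conclude B).
def check_proof(hypotheses, proof, goal):
    derived = []
    for step in proof:
        ok = step in hypotheses or any(
            prev == ("->", earlier, step)   # prev is "earlier -> step"
            for prev in derived
            for earlier in derived          # and "earlier" was derived
        )
        if not ok:
            return False
        derived.append(step)
    return goal in derived

# From A, A->B, and B->C, derive C.
hyps = ["A", ("->", "A", "B"), ("->", "B", "C")]
proof = ["A", ("->", "A", "B"), "B", ("->", "B", "C"), "C"]
print(check_proof(hyps, proof, "C"))  # True
```

The point is the asymmetry: the prover can be arbitrarily clever and complicated, but as long as it emits proofs in a format like this, trust only has to extend to the small checker.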
We Trust Every[One|Thing] just after we verify (Score:2, Insightful)
Mathematics is Human (Score:2, Troll)
A computer can be a useful tool (I'll be doing computational graph theory this summer), but it is not human. It does not have the ability to hold the possibilities of ideal forms within it and understand. It does not think.
The use
You canno' trust the laws of physics? (Score:2)
Machines don't make mistakes. People make mistakes.
*singing*
You canno' change the laws of physics, laws of physics, laws of physics!
There's klingons on the starboard bow, starboard bow, starboard bow, Jim.
We come in peace. Shoot to kill! Shoot to kill! Shoot
Flyspeck Project (Score:5, Informative)
Absolutely -- and this is the future (Score:3, Interesting)
I can understand not wanting to trust a big piece of C code that purports to check a huge number of cases of a proof by numerical analysis. It took four people six years of trying to verify Hales's computer proof of the Kepler conjecture before they gave up. If a program is that hard to believe, then it does indeed deserve lesser status than a handwritten proof that can be checked by mathematicians in a shorter time.
On the other hand, there are other computer tools that we really should trust, and that are revolutionizing mathematics. The idea is simple: write your proofs in such detail that they can be mechanically checked by a simple (easy-to-verify) procedure. This is much better than paper proofs, because the potential for human error is minimized. (I don't think anyone will deny that published proofs have often been wrong, and that the errors were not caught by the peer reviewers!) Since it's really, really bad to believe wrong proofs, there's a very real benefit that offsets the sometimes tedious work necessary in formalization.
That's what Tom Hales is doing with flyspeck, and I think it is the future of mathematics. (In fact, I have recently become addicted to mechanizing my own proofs [tom7.org] in Twelf [twelf.org]--it's not only immensely satisfying, but it helps me sleep better at night and makes for stronger papers.)
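For a taste of what a mechanized proof looks like, here is a minimal example in Lean (chosen purely for illustration; the parent uses Twelf): the statement and proof are written formally, and the system's small trusted kernel checks every inference step.

```lean
-- A machine-checked proof: the kernel verifies each step, so a reader
-- only has to trust the (small) kernel, not the author of the proof.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```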
They're asking the wrong question (Score:4, Insightful)
It's not an issue of whether we can trust them, at least not in general. (We won't go into the question of current machines - I'll agree they're generally not there yet for rigorous proofs.) We're going to have to either trust some form of computational aid in proof work, or throw up our hands and abandon the field - the human brain and lifespan impose definite limits beyond which we cannot go without aid, and since I can't think of any limit human beings have willingly accepted as a group, somehow I doubt this will be the first. So, instead, the question should be
"How do we create computers we can trust?"
If that is impossible, then that's it. Mathematics will become like experimental high-energy physics - 20 years of effort by hundreds of people to achieve one result. But I'm not ready to concede that it's impossible. I know it is provable that computers can't solve all problems in general, but the same proof indicates humans can't either. The question I'm curious about is whether the behavior of a computer is too general to be attacked by useful proof methods. Most actions taken with a computer assume a definite action and a definite outcome (spreadsheets and databases, for example, do not do novel calculations but perform the same operations on well-defined data). Mathematical proof is a different question, but the ultimate question is whether a properly designed and built computer (i.e. built as rigorously as possible in a technical and algorithmic sense) would be completely unable to handle problems that are interesting to human beings in the proof field. That is a completely different question from generality statements, and from the standpoint of computers as a trustworthy tool I think it is the more interesting one.
Proving the prover? (Score:3, Interesting)
Perhaps I'm missing something here, but I think the approach of verifying the validity of the proofs that come out of the kind of system described in the article is fundamentally the wrong approach.
Instead, mathematicians ought to focus on formally proving the proof generator. If it could be formally proved that the proof generator only generated valid proofs, we could automatically trust all the proofs that it generated. Program proof and verification is a complex topic, but it's a quickly maturing area of CS.
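Program verification in miniature might look like the following Lean sketch (a made-up example, not tied to any real prover): instead of checking each output of a "generator" function, we prove once and for all a property of everything it can produce.

```lean
-- A tiny "generator" function...
def double (n : Nat) : Nat := 2 * n

-- ...and a once-and-for-all proof that every output is even, so no
-- individual output ever needs to be checked.
theorem double_even (n : Nat) : ∃ k, double n = 2 * k :=
  ⟨n, rfl⟩
```

Scaling this idea up to a full theorem prover is exactly the "complex topic" the parent mentions, which is why in practice most systems verify a small checker rather than the generator itself.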
Wrong question. (Score:3, Insightful)
Humans behind the scenes (Score:3, Informative)
Computers nowadays can handle symbolic calculations and prove identities and the like, but a human may still be needed to decide which results are interesting to prove in the first place, no matter how sophisticated computers or software get...
Risc Institute: Theorema (Score:3, Informative)
Charles Babbage and Meta-Logic (Score:3, Insightful)
-- Charles Babbage (1791-1871)
Computers only do what they are told (excepting "hardware failure", which is not the topic).
Shouldn't the validity of computational proofs be able to be determined by proving the meta-logic of the solver?
i.e. proving that a strategy for finding a proof is valid (and therefore trusting its results).
Maybe those wary mathematicians are just unaccustomed to working on a problem meta-logically, and prefer to find proofs directly themselves (with the meta-logic being defined solely within their own minds)?
In such cases, perhaps peer review should not require human verification of a computational proof, but rather another independent meta-logically valid computational proof?
more than people (Score:3, Insightful)
Computers are far better at ferreting out oversights and missing assumptions, and at making sure that every t is crossed and i is dotted. If a software system for doing proofs has shown itself to be fairly reliable on a bunch of samples, I'd trust it a lot more than I'd trust any working mathematician to carry out a complex proof correctly.
Re:I don't care (Score:4, Interesting)
When the results become too inaccurate, you have to change your tools. So unless accuracy is key, as with NASA's calculations and the like, you work on making the computer do as much as possible to eliminate human error, and humans can be there to adjust the almost mythic fudge factor.
Re:I don't care (Score:2)
While in practice there is a grey area, and the intuition of a good mathematician can still be correct even if he or she makes a few small errors in explaining it rigorously, a formal proof should still aspire to a standard of perfect rigor.
Re:I don't care (Score:3, Insightful)
Another post mentioned that computers could come up with proofs that would be too hard for a human to verify. At what point can we start to both trust the output and build upon that trusted output by trusting future output? It could be an interesting time, in which we don't understand how things are done but we are able to do them nonetheless. It
Re:Are Computers Ready to Create Mathematical Proo (Score:3, Informative)
So many times I see people use programs like MAPLE to show something mathematical, and it ends up a disaster.
The problem is the brain in the chair, not the brain in the machine.