The End of Mathematical Proofs by Humans?
vivin writes "I recall how I did a bunch of mathematical proofs when I was in high school. In fact, proofs were an important part of math according to the CBSE curriculum in Indian schools. We were taught how to analyze complex problems and then break them down into simple (atomic) steps, much like the derivation of a physics formula. Proofs form a significant part of what mathematicians do. However, according to this article from the Economist, the use of computers to generate proofs is causing mathematicians to 're-examine the foundations of their discipline.' Critics of computer-aided proofs say that such proofs are hard to verify because of the large number of steps, and hence may be inherently flawed. Defenders point out that there are non-computer-aided proofs that are also very large and hard to verify, like the Classification of Finite Simple Groups. Computer-aided proofs have been instrumental in settling some vexing problems, like the Four Color Theorem."
Here's a good theorem prover (Score:5, Informative)
Re:Godel/Turing/Cohen... (Score:2, Informative)
Why can't human brain functions be copied and improved upon in circuitry, other than the currently prohibitive costs involved and our failure to accurately determine the exact neuron mapping of anyone's brain? Even the dynamic neuron connections could potentially be emulated or mimicked by a computer.
Nothing too new? (Score:2, Informative)
IIRC - back in the early days of AI (the 1960s), some people in the field thought that in relatively short order computers would be a major source of such mathematical proofs. It hasn't happened yet, but that doesn't mean it won't eventually.
Re:Computer _aided_ proofs (Score:2, Informative)
Re:Consider the source (Score:2, Informative)
A proof needs to be intuitive (Score:4, Informative)
I doubt the human proof system will go away completely - even if we can check nasty theorem proofs using computers, we still need humans to sit and explain what they mean.
Re:Seems simpler to prove proffs-by-computer (Score:5, Informative)
Mizar (Score:2, Informative)
Re:Seems the computer is wrong (Score:1, Informative)
If you aren't talking about maps anymore, and are willing to discuss arbitrary shapes, adding holes to a shape can mean it will need more colors.
In either of these cases, you can create situations where arbitrarily many colors are needed. However, the simple case of contiguous shapes on a continuous surface requires no more than 4 colors, even though more are frequently used.
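One way to see why such situations force extra colors is through the adjacency graph: n regions that all touch one another form the complete graph K_n, which needs n colors no matter how you color it. A minimal sketch (the `greedy_coloring` helper is made up for illustration, not from the post):

```python
def greedy_coloring(adjacency):
    """Assign each vertex the smallest color not used by its neighbors."""
    colors = {}
    for v in adjacency:
        used = {colors[u] for u in adjacency[v] if u in colors}
        colors[v] = min(c for c in range(len(adjacency)) if c not in used)
    return colors

# K5: five regions that all touch each other (impossible for contiguous
# regions in the plane, but possible once holes or disconnected pieces
# are allowed).
k5 = {v: [u for u in range(5) if u != v] for v in range(5)}
coloring = greedy_coloring(k5)
print(len(set(coloring.values())))  # 5 colors are forced
```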
dom
Re:Theorem Proving (Score:2, Informative)
I must, however, admit to a certain level of interest in the topic, so, if you want to have a more detailed conversation on the topic, I'd be glad to have a chit-chat sometime.
Feel free to post your proof that P = NP, or a paper referencing a linear-time FOL theorem prover or SAT solver.
http://en.wikipedia.org/wiki/Automated_theorem_pr
Re:Godel/Turing/Cohen... (Score:1, Informative)
I've been reading a lot of Penrose, and here are my two cents about his views...
Turing's theorem, in brief (neglecting soundness of algorithms), says that there is no single algorithm that can "prove" the non-termination of all algorithms.
The above (combined with Godel's theorem) means that no single program can prove all theorems, which I think is correct.
So you have to use different algorithms for certain problems. But in order to "devise" those new algorithms, you need to solve the (Turing's) problem of termination on those algorithms (and indeed, programmers and mathematicians do manage to solve it!)
So, in essence, Penrose proved that humans are SOMEHOW more capable than computers, since they can (in principle) solve the termination problem of all algorithms.
Now, Penrose also believes that this capability may originate from the human mind's access to pure truth (or whatever), but those are his beliefs, not his mathematical proofs. You are not obliged to follow them.
relax (Score:2, Informative)
A fairly basic example: several centuries passed between Newton and Leibniz introducing derivatives and Cauchy and Weierstraß arriving at the formulation that's taught in high schools nowadays (at least where I live): epsilon-delta definition of the limit and going from there. Today, this stuff and the accompanying proofs look pretty simple (and they are), but without the right formalism/mental model, playing with infinitesimal quantities is somewhat of a black art. It took a long time to get there, and computers wouldn't have helped much.
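For reference, the epsilon-delta formulation mentioned above defines the limit as:

```latex
\lim_{x \to a} f(x) = L
\quad\Longleftrightarrow\quad
\forall \varepsilon > 0 \;\; \exists \delta > 0 :\;
0 < |x - a| < \delta \implies |f(x) - L| < \varepsilon
```

All the "playing with infinitesimal quantities" gets replaced by ordinary inequalities over real numbers, which is exactly what made the proofs teachable.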
Godel? (Score:2, Informative)
Russell and Whitehead's work in Principia Mathematica, amplified by the Hilbert Programme.
Godel exploded that little lot by showing that any formal system powerful enough to represent anything worthwhile contains true statements that it cannot prove (and cannot prove its own consistency).
Even a simple `proof' task like satisfiability is NP-complete. (Checking whether a string is a member of a language defined by a context-free grammar, on the other hand, can be done in polynomial time.)
ian
Re:Critics Reaction... (Score:5, Informative)
So the question is - how are we going to prove/disprove a computer program that proves a theorem? Program complexity has meant that all but the most trivial programs cannot be 'proven'.
The solution, it seems to me, is per the article: get the s/w to output a series of steps in formal logic. Any series of formal logic steps should be confirmable by a 'formal logic validator', and that is the only program we need to prove correct. That will be non-trivial, but it will then open the floodgates: any hacked-up piece of code can be used to generate proofs, as long as the validator signs off on them.
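A minimal sketch of such a validator, assuming proofs are serialized as a list of lines where each line is either an axiom or follows from earlier lines by modus ponens (the representation and the name `check_proof` are made up for illustration):

```python
def check_proof(axioms, proof):
    """Verify that every line is an axiom, or follows from two earlier
    lines by modus ponens: from P and ("implies", P, Q), conclude Q."""
    derived = []
    for line in proof:
        ok = line in axioms or any(
            prev == ("implies", p, line) and p in derived
            for prev in derived for p in derived
        )
        if not ok:
            return False
        derived.append(line)
    return True

axioms = ["P", ("implies", "P", "Q")]
print(check_proof(axioms, ["P", ("implies", "P", "Q"), "Q"]))  # True
print(check_proof(axioms, ["Q"]))                              # False
```

The point is that only this small checker needs to be trusted; the program that searched for the proof can be arbitrarily messy.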
Re:BTW, as a geek I want to know (Score:5, Informative)
Re:Critics Reaction... (Score:5, Informative)
Basically, according to the axiomatic method, a proof is just a finite sequence of formulas, each of which can be produced by the allowed derivation rules. The whole point is that a proof FOR A HUMAN and a formal mathematical proof are different things. The axiomatic system is not perfect, either. The whole Hilbert [wikipedia.org]-plan has been proven impossible to carry out, so it is not possible to prove (from within the system) that there are no contradictions in the axiomatic system.
I think the "MIU" system (from Hofstadter) is commonly used to demonstrate how it works, basically. It is too lengthy, and I'm lazy.
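For the lazy, here is a minimal sketch of that MIU system (axiom MI; rules: append U after a trailing I, double everything after the M, replace III with U, delete UU), just to make the "allowed derivation rules" idea concrete:

```python
def miu_successors(s):
    """All strings derivable from s in one step of the MIU system."""
    out = set()
    if s.endswith("I"):                 # rule 1: xI -> xIU
        out.add(s + "U")
    out.add(s + s[1:])                  # rule 2: Mx -> Mxx
    for i in range(len(s) - 2):         # rule 3: xIIIy -> xUy
        if s[i:i+3] == "III":
            out.add(s[:i] + "U" + s[i+3:])
    for i in range(len(s) - 1):         # rule 4: xUUy -> xy
        if s[i:i+2] == "UU":
            out.add(s[:i] + s[i+2:])
    return out

# Every string derivable from the axiom "MI" in up to 3 steps.
derivable = {"MI"}
for _ in range(3):
    derivable |= {t for s in derivable for t in miu_successors(s)}
print(sorted(derivable))
```

A "theorem" here is just any string reachable from the axiom; the famous puzzle is that "MU" is never reachable, which you can only see by reasoning *about* the system rather than inside it.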
Re:Critics Reaction... (Score:5, Informative)
Fully formal mathematical proofs depend on nothing but the ability to distinguish characters, to compare text strings, and to perform substring substitution.
To take your example (2 + 2 = 4): in formal arithmetic, based on the Peano axioms, there is one primary operator, let's call it s:N -> N, where s(n) is interpreted as (n + 1). "2" is _defined_ as s(1), 3 is defined as s(2), and 4 is defined as s(3). So one has to prove that s(1) + s(1) = s(s(s(1))).
By the definition of addition (remember, addition is not a fundamental notion in formal arithmetic; it's defined in terms of the s-operator), a + s(b) = s(a + b) and a + 1 = s(a), so we have
s(1) + s(1) = s(s(1) + 1) = s(s(s(1))),
s(s(s(1))) = s(s(s(1)))
Q.E.D.
So, where does the proof above depend on anything but mechanically verifiable string manipulations?
P.S. Of course mathematical formulae are not strings but rather trees, but this doesn't change a thing.
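The same derivation can be mechanized directly. A minimal sketch, representing s(n) as a nested tuple (the names here are made up for illustration):

```python
ONE = "1"

def s(n):
    """The Peano successor: s(n) is interpreted as n + 1."""
    return ("s", n)

def add(a, b):
    """Addition defined purely in terms of s:
    a + 1 = s(a);  a + s(b) = s(a + b)."""
    if b == ONE:
        return s(a)
    return s(add(a, b[1]))

TWO, THREE, FOUR = s(ONE), s(s(ONE)), s(s(s(ONE)))
print(add(TWO, TWO) == FOUR)  # True: s(1) + s(1) reduces to s(s(s(1)))
```

Every step the recursion takes is exactly one of the two rewrite rules above, i.e. a mechanical string (tree) manipulation.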
Penrose (Score:1, Informative)
Re:The best math is always elegant. (Score:1, Informative)
Re:Science by AI (Score:5, Informative)
It has been PROVEN (and it's a well-known fact) that it's impossible to create a Turing machine which will determine, for every given expression, whether it is true or false (see the Incompleteness theorem [wikipedia.org] for details).
For example, it's impossible to find the answer to CH [wikipedia.org] (the continuum hypothesis) in ZFC (Zermelo-Fraenkel + Choice axiomatics).
In short: some problems can't be solved in existing theories; they require creating new theories with new axioms. That is a non-formalizable process (this is also proven), so no computer can do it.
Re:The best math is always elegant. (Score:1, Informative)
The works we have from Fermat are ones that he never intended to be published.
IMO the most likely explanation is that he found a false proof and later realised it was false, but for some reason did not destroy that one note.
Re:Critics Reaction... (Score:4, Informative)
Re:The best math is always elegant. (Score:3, Informative)
Unfortunately, as Hardy pointed out, that proof assumed that all quadratic extensions of the rational numbers are unique factorisation domains - which isn't true.
It seems very likely that Fermat's proof was of a similar sort.
Re:The best math is always elegant. (Score:5, Informative)
Mathematicians think they know what Fermat thought was the "proof" that he could not fit in the margin, since Fermat used a similar strategy for another problem. Euler was the one who used Fermat's strategy on Fermat's last theorem explicitly, and showed that it did not give a full proof as Fermat had hoped. It might be that Fermat himself tried and then gave up, or that he was happy to have "solved it" and looked for other things to prove.
I think you (and most people) misunderstand the reason Fermat's last theorem has such a central place in math history. But first let's discuss why the problem became so well known: it is such an easy problem to state and to understand, yet no one was able to use "simple" math to prove it. Even Fermat himself thought the problem should be fairly straightforward to solve, and it has made generations of people with a curiosity for math look for proofs, some even thinking they had found one. It is also a problem some of the greatest minds of math did not manage to solve. Fermat, Euler, Lagrange, Gauss, Abel, Riemann, etc. all had a try and did not solve it. Which math wannabe wouldn't want to solve something this group of people did not manage?
Now, even though this has given Fermat's problem a lot of publicity, the number one reason it has been important is all the beautiful maths that have been discovered by mathematicians trying to solve it. Fermat's theorem in itself is not interesting. It is not like the Riemann hypothesis, which, if proven false, would make much of modern maths untrue, or at least force mathematicians to look for new proofs. This is because you can prove much interesting stuff if you assume the Riemann hypothesis is true; the problem, of course, is that this is not yet proven. If Fermat's theorem had been shown to be untrue, that would have been surprising, but it would not have made large parts of maths collapse.
So, the modern proof of Fermat's theorem is the end of a long journey which has led to some very deep mathematical knowledge, and in a way, Wiles' proof is much more interesting in its own right than the fact that it also proves Fermat was right in his guess. A "simple" proof (watch out when mathematicians use the words simple or trivial) of Fermat's problem would give undeniable bragging rights, since you could say you solved a problem Gauss couldn't solve with the maths Gauss knew. But again, it would probably be more of a huge accomplishment for one person than a huge breakthrough in maths.
A last comment: the reason Wiles' proof is long is not because math is verbose, far from it. It is because Wiles is able to connect what would seem to be two unconnected branches of mathematics, showing that problems in one of the branches can be restated as problems in the other. This is not something you do in a few pages. And the importance of it becomes clear if you consider that what may be an unsolvable problem in one branch of maths might be reformulated as a solvable problem in the other. Math is always about trying to find ways to solve something as simply as possible. That is not something computers are very good at, so no Abel prize for Big Blue for a while, I think.
Re:Science by AI (Score:3, Informative)
Not happening yet (Score:2, Informative)
Capability vs reality... (Score:3, Informative)
Kjella
Re:Critics Reaction... (Score:5, Informative)
If you can't independently examine and verify your "proof" then how can it be considered proof of anything?
That's easy. Speaking as a PhD mathematician, there's nothing disturbing at all about these computer proofs. They're examples in which a computer was programmed to generate a perfectly standard proof, except that it's extremely long.
Checking the proof is not hard: it suffices to verify that the program emits only correct inferences. That's nothing more than a standard human-generated proof. In addition, a verifier can be coded by someone other than the original author, to check the validity of the inferences generated by the first program. The checker's algorithm can also be verified using a standard human proof, and would be used to confirm that a bug didn't result in an incorrect proof.
Note that Gödel's incompleteness theorem has nothing to do with these programs: they don't generate all possible proofs. They only generate one specific type of proof per program. Each program is easy to verify.
You could call the software correctness proofs "meta-proofs", but that's just being coy. They're perfectly legitimate proofs, and they are sufficient to prove the correctness of proofs generated by the program.
Belief in a proof (Score:2, Informative)
However it is common to use deduction rules incorrectly, theorems without meeting all preconditions, etc. When a proof becomes complicated, doubts arise.
But that's exactly why we can have much more faith in computer proofs. A computer can only use very specific proof techniques, written formally in some kind of logic. All theorems must also be stated very precisely. And the proof can also be checked mechanically, possibly using different software to avoid bugs, etc. Size is irrelevant, as long as the basic proof technique is rock solid.
On the other hand, machines are stupid: you need humans to program them and to prove the correctness of every proof technique they use.
PS. Reading this article was very interesting, given that I'm currently attending a computer science conference (ETAPS), one of the main topics of which is logic and computer proofs.
Classification of Finite Simple Groups (Score:1, Informative)
http://encyclopedia.lockergnome.com/s/b/Monster_g
Were such a classification attempted by computers, it would swell in size geometrically (the "proof" as it stands is a body of work numbering roughly 10,000 pages of advanced, dense mathematics; people say that roughly four people in the world understand the proof fully). If it were done by computer, probably no one would understand it. Plus, as was demonstrated with the Monster group, the computations needed to verify some of the assumptions would be prohibitively expensive - possibly on the order of breaking some strongish crypto.
Re:Science by AI (Score:3, Informative)
Re:BTW, as a geek I want to know (Score:3, Informative)
Re:Critics Reaction... (Score:1, Informative)
The HOL group treats proofs as a data type. New instances of this type can only be constructed using a small set of inference rules. Therefore, if the system creates a proof, then you know that it has been created only by using these inference rules. Hence, trusting the inference rules is sufficient to trust whatever proofs are offered.
A second way is the use of "reflection". That is, suppose you had a "tautology checker" program which, given some proposition, returns true only if that statement is a mathematical tautology. And suppose that you prove this program works correctly. Then you can always trust the results of this program, and so any time it says that something is a tautology, you can believe it because of the proof you have done.
There is an excellent paper by John Harrison that discusses these issues, which you can read here:
http://www.cl.cam.ac.uk/users/jrh/papers/reflect.
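A brute-force version of such a tautology checker is easy to sketch (truth-table enumeration; the representation and names here are made up for illustration, and of course a real one would itself need to be proved correct before you could trust it by reflection):

```python
from itertools import product

def variables(p):
    """Collect the variable names occurring in a proposition."""
    if isinstance(p, str):
        return {p}
    return set().union(*(variables(q) for q in p[1:]))

def evaluate(p, env):
    """Propositions are variable names or tuples:
    ("not", p), ("and", p, q), ("or", p, q), ("implies", p, q)."""
    if isinstance(p, str):
        return env[p]
    if p[0] == "not":
        return not evaluate(p[1], env)
    a, b = evaluate(p[1], env), evaluate(p[2], env)
    return {"and": a and b, "or": a or b, "implies": (not a) or b}[p[0]]

def is_tautology(p):
    """True iff p holds under every assignment of its variables."""
    vs = sorted(variables(p))
    return all(evaluate(p, dict(zip(vs, vals)))
               for vals in product([True, False], repeat=len(vs)))

print(is_tautology(("or", "p", ("not", "p"))))            # True
print(is_tautology(("implies", ("and", "p", "q"), "p")))  # True
print(is_tautology(("or", "p", "q")))                     # False
```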
Re:BTW, as a geek I want to know (Score:1, Informative)
;; Tail-recursive reverse: cons each element onto the accumulator.
(defun rev (a &optional res)
  (if (null a)
      res
      (rev (rest a) (cons (first a) res))))
This reminds me of when someone argued that the Haskell example of Quicksort couldn't be done as elegantly in Lisp. Turns out it's trivial to write it *exactly* the same way in Lisp as in Haskell, and it's shorter to boot.
Re:Critics Reaction... (Score:2, Informative)
= is an equivalence relation which is, by definition, reflexive.
Re:Science by AI (Score:3, Informative)
The continuum hypothesis, due to Cantor, has been shown - partly by Godel - to be consistent with the usual axioms, but not provable from them. The continuum hypothesis says that there is no set whose cardinality lies strictly between that of the integers and that of the points on a line. Godel spent his last years searching for an axiom to add to the standard axioms such that the continuum hypothesis would become provable.
Godel was a Platonist and considered intuition to be necessary to mathematics, and his theorem is generally considered to prove this. One should be cautious in that intuition here is a technical term, a generalization of Kantian intuition.
Here is my idiosyncratic take. The Timaeus by Plato lays down the claim generally called the hypothesis of the higher hypothesis: in case of contradiction, there is an axiom that can be added to resolve the contradiction. Godel says, in effect, that it is good there is such an axiom, since you have to add it. And the process of adding it involves creativity. Take this with a grain of salt, but the Timaeus was a favorite of Godel.
Regarding the limits of Godel's proof, it does not necessarily apply to infinitely large axiom sets. So perhaps God has an infinite axiom set, in contradiction to the usual claim that God does not use axioms.
Re:Science by AI (Score:4, Informative)
The theorem says that there are either true statements that are unprovable, or statements that are both provable and refutable. An interesting formal system is either incomplete or inconsistent (it could in principle be both, but it doesn't have to be).
Re:Science by AI (Score:2, Informative)
Re:Here's a good theorem prover (Score:1, Informative)
Re:My Proof To The Four Color Theorem (Score:2, Informative)
Re:Science by AI (Score:2, Informative)
The brain is *not* a serial computer, and cannot be compared to one in any meaningful way.
1) Parallel computations cannot necessarily be simulated on a serial computer. (http://www.andrewboucher.com/papers/parallel.htm [andrewboucher.com])
2) Why do you assume that a neuron can be simulated with a multiply-add operation? Even if the perceptron model of the neuron, where a scalar output is a linear combination of scalar inputs, had anything to do with the brain (which it almost certainly doesn't), you'd need variable-length addition (between 1 and 100,000 inputs). Furthermore, to make the perceptron useful, you need some form of non-linear transformation on the output.
3) Many of the 1e11 cells you're including in your calculation are used for specific, non-conscious tasks.
4) What about the 1e13 neuroglia, whose function is still poorly understood, but have been shown to have an active role in the brain's activity? (reacting to and initiating neural activity)
5) 1e2 "operations per second" (firing rate?). Action potentials are not operations performed on data; they are the data.
Ugh, I could go on for a while...
But please, please... the brain != serial computer. The brain is not comparable to a serial computer. And even if they could be compared, no one today knows enough to do it. So stop.
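For reference, the perceptron model dismissed in point 2 is just a weighted sum pushed through a non-linearity. A minimal sketch (illustrative only - this is exactly the model the post argues is far too simple for real neurons):

```python
def perceptron(inputs, weights, bias):
    """Weighted sum of inputs plus bias, followed by a step
    non-linearity: "fire" (1) if the sum is positive, else 0."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if total > 0 else 0

# A perceptron computing logical AND of two binary inputs.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", perceptron([a, b], [1.0, 1.0], -1.5))
```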