
The End of Mathematical Proofs by Humans?

Posted by timothy
from the somehow-I-doubt-it dept.
vivin writes "I recall how I did a bunch of mathematical proofs when I was in high school. In fact, proofs were an important part of math according to the CBSE curriculum in Indian schools. We were taught how to analyze complex problems and then break them down into simple (atomic) steps, similar to the derivation of a physics formula. Proofs form a significant part of what mathematicians do. However, according to this article from the Economist, the use of computers to generate proofs is causing mathematicians to 're-examine the foundations of their discipline.' Critics of computer-aided proofs say that such proofs are hard to verify due to the large number of steps and hence may be inherently flawed. Defenders point out that some proofs made without computers are also rather large and hard to verify, like the Classification of Finite Simple Groups. Computer-aided proofs have been instrumental in settling some vexing problems, like the Four Color Theorem."
  • by Pants75 (708191) on Wednesday April 06, 2005 @04:41AM (#12151988)
    That's what my math teacher always said, back in school. I'm sure he would extend that statement to include proofs, given that they are also just math.

    What about Fermat's last theorem? Fermat wrote in the margin of his notebook that he had a proof, but that it was too large to fit there, so he would write it on the next page. Trouble was, the next page was missing from the book.

    The modern proof of FLT took hundreds of pages of dense math and went through concepts that AFAIK hadn't even been invented in Fermat's time.

    What was Fermat's proof (if it existed)? It would surely have been far more elegant than the modern version.

    That doesn't make the modern version wrong, just less pure, I feel.

    The problem with modern computer-aided proofs is that they allow a proof to become unwieldy and overly verbose, compared to what it would have to be if a human alone produced it.

    Such is progress I guess.

  • Re:Science by AI (Score:5, Interesting)

    by FleaPlus (6935) on Wednesday April 06, 2005 @04:45AM (#12152004) Journal
    This reminds me of a Nature paper from last year:

    Functional genomic hypothesis generation and experimentation by a robot scientist []

    The question of whether it is possible to automate the scientific process is of both great theoretical interest and increasing practical importance because, in many scientific areas, data are being generated much faster than they can be effectively analysed. We describe a physically implemented robotic system that applies techniques from artificial intelligence to carry out cycles of scientific experimentation. The system automatically originates hypotheses to explain observations, devises experiments to test these hypotheses, physically runs the experiments using a laboratory robot, interprets the results to falsify hypotheses inconsistent with the data, and then repeats the cycle. Here we apply the system to the determination of gene function using deletion mutants of yeast (Saccharomyces cerevisiae) and auxotrophic growth experiments. We built and tested a detailed logical model (involving genes, proteins and metabolites) of the aromatic amino acid synthesis pathway. In biological experiments that automatically reconstruct parts of this model, we show that an intelligent experiment selection strategy is competitive with human performance and significantly outperforms, with a cost decrease of 3-fold and 100-fold (respectively), both cheapest and random-experiment selection.

    New Scientist also had an article on it: "Robot scientist outperforms humans in lab." []
  • Theorem Proving (Score:2, Interesting)

    by NitsujTPU (19263) on Wednesday April 06, 2005 @04:50AM (#12152025)
    Theorem proving is hard: even propositional validity checking is co-NP-complete, and first-order theorem proving is only semi-decidable.

    FOL Theorem provers jump through a number of hoops to make the whole bit a little more practical, but realistically speaking, having a computer that just runs through mathematical proofs in the manner that a human does is a long way off.

    An interesting thing about the article is that the first proof was done with an FOL prover... it was a long, non-intuitive proof, but an FOL prover did perform it.

    The second was done with code; a human had to write the program. There is no computer replacing the activity of a human that I can see here. It was merely code to brute-force the problem, from what I can tell, though I haven't read the guy's article (from what I can see, it hasn't reached publication yet anyway).
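    To see why the naive approach doesn't scale, here is a toy sketch (nothing like a real FOL prover, which must cope with an undecidable problem): checking propositional validity by enumerating every truth assignment, which is exponential in the number of variables.

```python
from itertools import product

def is_tautology(formula, variables):
    """Check propositional validity by trying all 2^n truth assignments.
    The exponential loop is the point: this is why brute-force
    theorem proving does not scale."""
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        if not formula(env):
            return False
    return True

def imp(a, b):
    """Material implication: a -> b."""
    return (not a) or b

# Peirce's law ((p -> q) -> p) -> p is valid, though not obviously so:
peirce = lambda e: imp(imp(imp(e["p"], e["q"]), e["p"]), e["p"])
print(is_tautology(peirce, ["p", "q"]))                         # True
print(is_tautology(lambda e: imp(e["p"], e["q"]), ["p", "q"]))  # False
```

    Real provers use resolution, unification, and heuristics precisely to avoid this blind enumeration.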
  • Elegant proofs (Score:2, Interesting)

    by erikkemperman (252014) on Wednesday April 06, 2005 @04:50AM (#12152027)
    I'm certainly not a very gifted mathematician, but I'd like to think my grasp on it is at least adequate to make this point: from the proofs I've seen and understood, it seems that while the shorter and more elegant proofs may not strictly be more "truthful" than complicated ugly ones, the important thing is they are so much easier to understand, verify, and explain to others. For instance, I remember being especially charmed by Cantor's diagonal argument and will not forget that particular method of reasoning for the rest of my life.
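    Cantor's diagonal construction is so mechanical it fits in a few lines; a sketch with finite lists standing in for the infinite case:

```python
def diagonal_complement(rows):
    """Cantor's trick: given a list of 0/1 sequences, build one that
    differs from row i at position i, so it equals no listed row."""
    return [1 - rows[i][i] for i in range(len(rows))]

rows = [[0, 0, 0],
        [1, 1, 0],
        [0, 1, 1]]
print(diagonal_complement(rows))   # [1, 0, 0] -- not among the rows
```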

    But maybe there's a point where simple proofs just won't do; if there were a 4-line proof of Fermat's last theorem, I'm sure someone would have found it before Andrew Wiles proved the thing in something like 80 pages. In such cases, computer-aided reasoning is really the only way to go, I'd say. It's probably easier to verify a proof checker than a proof.
  • unverifiable (Score:3, Interesting)

    by StrawberryFrog (67065) on Wednesday April 06, 2005 @04:54AM (#12152038) Homepage Journal
    However, critics of computer-aided proofs say that the proofs are hard to verify due to the large number of steps and hence, may be inherently flawed.

    Nonsense. If people can't verify proofs, then what you need to do is to verify them by computer :)

    I'm not entirely joking here - so long as the verification program is written independently of the theorem-proving program, and it can reliably distinguish between valid and invalid logical inferences, what's the problem? It's simple automated testing, really.
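    A minimal sketch of what such an independent checker does, with modus ponens as the only inference rule (real checkers support many rules, but the shape is the same: checking is mechanical, no search is involved):

```python
def verify(axioms, steps):
    """Check a claimed proof line by line: every step must be an axiom
    or follow from earlier lines by modus ponens (from A and
    ("->", A, B), conclude B)."""
    known = list(axioms)
    for step in steps:
        direct = step in known
        by_mp = any(p == ("->", q, step) for p in known for q in known)
        if not (direct or by_mp):
            return False
        known.append(step)
    return True

axioms = ["rain", ("->", "rain", "wet_streets")]
print(verify(axioms, ["wet_streets"]))   # True: a valid inference
print(verify(axioms, ["sunny"]))         # False: nothing derives it
```

    Note how much simpler this is than a prover: the verifier never has to guess which step comes next, only confirm each one.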
  • by Zork the Almighty (599344) on Wednesday April 06, 2005 @05:05AM (#12152072) Journal
    I believe that Feigenbaum's constant was discovered with the aid of a programmable pocket calculator. He noticed something, and used the computer to check it out. I think that of all the things that computers can do for mathematicians, this is the most valuable. You can ask "what about this" and the computer will do the grunt work. It's made a grad student's life much easier I'm sure :)
  • by orzetto (545509) on Wednesday April 06, 2005 @05:23AM (#12152133)

    The Economist is the same magazine that supported Bjørn Lomborg [], thereby proving their utter incompetence in environmental science. They defended him in spite of clear and very detailed indications [] from the scientific community that he was nuts.

    I suppose they know economics when they talk about it, but they have demonstrated an unfortunate habit of pontificating on things they don't have a clue about. I for one think they burned a lot of karma.

  • by arodland (127775) on Wednesday April 06, 2005 @05:44AM (#12152193)
    You mean... thinking?

    Geometry wasn't the most useful of my high school math classes, but it was the most fun, because we all worked together to come up with proofs, showing that those lousy formulas we had to memorize all had their root in Euclid.

    Similarly, anyone who wants to go anywhere with calculus should at least try to understand the proof of the existence of limits, at least for a few minutes, because that's what makes calculus something other than voodoo.
  • by RedWizzard (192002) on Wednesday April 06, 2005 @05:46AM (#12152197)
    You linked to a "by invitation" editorial. That doesn't necessarily represent the normal views of the magazine. Have you got any further evidence they generally support Lomborg's views?
  • by ceeam (39911) on Wednesday April 06, 2005 @05:48AM (#12152207)
    What do they use in this field? Still Prolog, or has something newer been invented? Or do they do what businesses do and use whatever comes handy, whatever the current fad is?
  • by darkov (261309) on Wednesday April 06, 2005 @06:14AM (#12152286)
    I agree with (1); I believe Gödel had a hand in that one.

    With (2), the program can reduce the tedium of proving the original result in some cases. That's what computers are good at, and better at than humans. Proving the program correct may well be much easier. I would think that's why the researchers in the article used computers in the first place. If you program in C++ you will have a problem, but a functional or logic language program is straightforward to prove (PROLOG programs are essentially the execution of a proof).
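    The "execution is a proof" point can be sketched as forward chaining over ground Horn clauses (a real Prolog engine also unifies variables and works backward from a goal, so this is only the flavor of it). The predicate names below are made up for illustration:

```python
def forward_chain(facts, rules):
    """Saturate a set of ground facts under Horn rules (head, body):
    fire any rule whose whole body is already derived. Each derived
    fact is, in effect, a little proved theorem."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in derived and all(b in derived for b in body):
                derived.add(head)
                changed = True
    return derived

facts = {"parent(tom,bob)", "parent(bob,ann)"}
rules = [("anc(tom,bob)", ["parent(tom,bob)"]),
         ("anc(bob,ann)", ["parent(bob,ann)"]),
         ("anc(tom,ann)", ["anc(tom,bob)", "anc(bob,ann)"])]
print("anc(tom,ann)" in forward_chain(facts, rules))   # True
```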

    With (3), you could show by running it on different hardware and software that the platforms did not affect the result, with high probability. If you don't like the 'probability' bit, who says there isn't a human trait or gene that causes any human to get a proof wrong? Humans are imperfect too, but if enough qualified people agree, then we accept that what they agree on is true. This is the same situation as the potentially flawed platforms problem.
  • by Dulimano (686806) on Wednesday April 06, 2005 @06:17AM (#12152297)
    The problem with human-made proofs is that there can be gaps and even faults in them. Proofs with hundreds of pages are full of sentences starting with "It's easy to see that X.". Most often X is indeed obviously true, sometimes it is not.

    The goal of computer-aided theorem proving is to eliminate all these gaps. The human prover sketches her proof for the computer in a formal language. If some steps are missing, the computer calls her attention to them. When the long interactive process is finished, we have a proof that does not have any "it's easy to see"s or "obviously"s in it. At this point, another human can check the proof. But there is really no good reason to believe that this other human is more adept at checking it than the computer itself.

    The proof of the Four Color Theorem was recently formalized by Georges Gonthier in this way.

    From a philosophical point of view, many arguments can be made. But from a practical point of view: if I have to choose between believing a 300-page proof written in an informal language (Fermat's Last Theorem by Wiles) and a 1000-page proof written in a formal language and verified by a computer, I will choose the second one anytime.
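    A sketch of what that formal-language interaction produces, in Lean 4 syntax (one proof assistant among several; Gonthier's four-color work was done in Coq). Every line is checked by the kernel, and a gap or an unjustified "obviously" is simply rejected:

```lean
-- In a machine-checked proof nothing is "easy to see": every claim
-- needs a proof term or tactic that the kernel accepts.
theorem my_add_comm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b          -- cite the library lemma explicitly

example : 2 + 2 = 4 := rfl  -- even the "obvious" must be justified
```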
  • Embrace both methods (Score:2, Interesting)

    by karvind (833059) <karvind@gm a i l . c om> on Wednesday April 06, 2005 @06:25AM (#12152315) Journal
    My take is that computers are a liberation from mindless computation and a tool for exploration. Only the most severe of Luddites wouldn't use a calculator, a computer is just an extension of that. For example, factorising polynomials, solving equations symbolically, and so forth. These are the sort of things that computers can save us an enormous amount of grunt work doing. Why not use them?

    For exploration, they are also useful. New series for π have been conjectured (and proved by hand) based on computer programs which search for certain sorts of integer relations. This has led to the ability to find the nth hexadecimal digit of π without finding the previous n-1 digits, something that could have been exceedingly difficult without this computer program.
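    The digit-extraction result mentioned is the Bailey-Borwein-Plouffe formula. A compact, unoptimized sketch (float round-off limits it to modest n; production versions use more care):

```python
def pi_hex_digit(n):
    """Hexadecimal digit n of pi after the point (n = 1, 2, ...),
    via the BBP formula; earlier digits are never computed."""
    def S(j):
        # fractional part of sum over k of 16^(n-1-k) / (8k + j)
        s = 0.0
        for k in range(n):   # exact head via three-argument pow (modular)
            s = (s + pow(16, n - 1 - k, 8 * k + j) / (8 * k + j)) % 1.0
        k, tail = n, 0.0
        while True:          # rapidly vanishing tail
            term = 16.0 ** (n - 1 - k) / (8 * k + j)
            if term < 1e-17:
                break
            tail += term
            k += 1
        return (s + tail) % 1.0

    frac = (4 * S(1) - 2 * S(4) - S(5) - S(6)) % 1.0
    return int(16 * frac)

# pi = 3.243F6A88... in hexadecimal
print([pi_hex_digit(i) for i in range(1, 5)])   # [2, 4, 3, 15]
```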

    With regard to things like the proof of the 4 colour theorem using computers, there is a more interesting debate here. I'm going to make an analogy with a similar debate in neuroscience which is quite interesting. David Marr, who was a researcher into visual systems in the brain, made a distinction between Type 1 and Type 2 theories. A Type 1 theory is, broadly, one in which there are nice formulas like you get in physics or maths. A Type 2 theory is essentially computational. For example, many models of the way the brain works use PDP (parallel distributed processing, a sort of computer model of neurons in the brain). Marr's point was that in general we should prefer Type 1 theories, because they give us a general and intuitive understanding of how the thing being studied works. For example, there are certain cells in the visual system which detect edges in the visual field. So, if you have a theory of vision which involves certain cells finding the edges in the visual field then that gives you a better understanding than some sort of computational model which performs the function of the visual system but which you don't really understand. However, he also pointed out that since animals evolved and were not designed, there may not always be a Type 1 theory explaining why, in an abstract general way, the brain (or any other bit of the body) worked; it just does.

    Returning to maths, I think the analogy is that a Type 1 theory is one which can be completely understood by a single individual by working through it line by line, whereas a Type 2 theory might involve a computer proof like in the 4 colour theorem. In general, we prefer to get proofs which don't use computers, because they might give us an intuitive insight into why the theorem is true, or it might lead to an interesting new branch of maths (like the Taniyama-Shimura conjecture that Wiles proved). However, there might be some theorems which just do not have proofs which a single person could read and fully understand. The 4 colour theorem might be just such an example. It would be foolish to discount any such theorems a priori, and so I would advocate a cautious use of computers in mathematical proof. The danger in overusing computers is that if they become too widely used, people will search for computer proofs without finding the insightful way of solving the problem which doesn't use a computer. The danger in not using them enough is that huge amounts of effort could be wasted trying to find neat analytical solutions to problems which are not really amenable to traditional methods (not using a computer) of attack.

  • I agree with Erdos (Score:2, Interesting)

    by yderf (764618) on Wednesday April 06, 2005 @07:07AM (#12152430)
    Proofs that no one understands are of little use. Yes, they allow you to work with the statement that was proven, but you don't garner anything from the proof itself.

    Many nice and elegant proofs are just that because they are not only aesthetically pleasing, they also give their readers insight into other problems.

    As Erdős used to say, they come straight from the SF's book. (If you don't know what that means, go read about him.)
  • Re:Science by AI (Score:3, Interesting)

    by mangu (126918) on Wednesday April 06, 2005 @07:28AM (#12152511)
    it has taken the work of countless researchers, authors, and teachers to distill and refine it.

    What you and others fail to grasp is that computers are evolving rapidly, human brains aren't. Our current computers are still far from having the data processing capability of a human brain.

    In rough orders of magnitude, a human brain has 1e11 neurons, with 1e3 synapses each, doing 1e2 operations per second. Considering that a neuron can be emulated by a multiply-add operation, we would need 1e16 such operations per second to emulate a human brain.

    A 3 GHz Pentium can do 1e10 floating point multiply-add operations per second, so a human brain is roughly equivalent to one million desktop computers. Therefore, Moore's law tells us that we still need 30 years of progress before we have a human-equivalent computer, but in 60 years a desktop computer will have the data processing power of a million human brains.
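    The parent's arithmetic, spelled out (taking, as an assumption, one Moore's-law doubling every 18 months):

```python
import math

# Order-of-magnitude figures from the post above
neurons, synapses, rate = 1e11, 1e3, 1e2
brain_ops = neurons * synapses * rate   # 1e16 multiply-adds per second
pc_ops = 1e10                           # 3 GHz Pentium estimate

gap = brain_ops / pc_ops                # ~1e6 desktop machines
doublings = math.log2(gap)              # ~20 doublings to close the gap
years = doublings * 1.5                 # at 18 months per doubling

print(round(gap), round(years))         # 1000000, roughly 30 years
```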

    We have absolutely no way to predict the consequences of this. But I'm sure that, unless we connect our brains directly to computers, we will be left hopelessly behind.

  • by dissy (172727) on Wednesday April 06, 2005 @07:34AM (#12152529)
    > By programming my computer to independently examine and verify the proof. Done
    > properly, the instructions for a computer to verify a proof can be a lot simpler
    > than verifying the proof itself.

    But even multiple computers performing a verify isn't _truly_ a verification.

    After all, how long did the Pentium division bug go _unnoticed_???

    Looks like the chip was released on March 22, 1993 []
    and the bug was reported on October 30, 1994 []

    For over a year and a half, any such verification obtained with the newest Intel computers of the time could have been WRONG.

    And any guesses how they even found this bug??
    It was a human, not another buggy computer, that had to verify the data.

    Yes, computers can do things faster, but never underestimate the power of truly knowing what you're doing, which so far a computer can't grasp at all, let alone do as well as the human mind.
  • by Sique (173459) on Wednesday April 06, 2005 @07:51AM (#12152594) Homepage
    It's not that easy. Fundamentally, the mathematical proof consists of something other than the actual running of the program.

    The question is if the output of the program is equivalent to the problem you want to prove. Your proof actually consists of verifying that "Program Output is A" is only true if "Statement B is true" is true.

    (It's not necessary to prove the reverse as well. "Statement B is true" doesn't need to imply that "Program Output is A". Imagine a program that prints out "A" if it is 8pm. Then imagine the statement being "The long hand of the clock points to the 12". If the program prints out "A", then you can be sure that statement is true for a correctly running clock.)

    In the end those proofs (Program Output is A -> Statement B is true) are as peer reviewable as any other proof.
  • I love proofs (Score:3, Interesting)

    by n-baxley (103975) <nate AT baxleys DOT org> on Wednesday April 06, 2005 @08:04AM (#12152634) Homepage Journal
    I loved doing proofs in my high school math classes. I even had my own book of axioms, thereby cementing my destiny to be reading /. someday. I didn't become a mathematician, but the merger of logic and creativity was so appealing. I wish I had been able to take it further, but unfortunately calculus came along and whipped my butt, so instead of an engineer, I became a programmer. Go figure.
  • Isaac Asimov story (Score:5, Interesting)

    by Conspiracy_Of_Doves (236787) on Wednesday April 06, 2005 @08:20AM (#12152715)
    I recall a story I once read by Isaac Asimov about a future culture where all knowledge of mathematics has been lost to humans, who have to rely on computers and calculators to do even the simplest math problems (older computers make the new computers and humans are left completely out of the process).

    A janitor at a science lab rediscovers the 'ancient knowledge' on his own. The military quickly gets hold of it and immediately puts it to use in weapons research, whereupon the janitor promptly takes his own life in shame.

    Anyone think there might be a future where humans rely on computers so much that they don't bother learning math at all any more?
  • Re:Science by AI (Score:2, Interesting)

    by plupster (689587) on Wednesday April 06, 2005 @08:26AM (#12152742)

    Never ever is a long time... To quote from the excellent book (which I just started reading) Artificial Intelligence: A Modern Approach []

    Even if we grant that computers have limitations on what they can prove, there is no evidence that humans are immune from those limitations. It is all too easy to show rigorously that a formal system cannot do X, and then claim that humans can do X using their own informal method, without giving evidence for this claim.
  • Re:Science by AI (Score:3, Interesting)

    by Sara Chan (138144) on Wednesday April 06, 2005 @08:32AM (#12152767)
    In rough orders of magnitude, a human brain has 1e11 neurons, with 1e3 synapses each, doing 1e2 operations per second. Considering that a neuron can be emulated by a multiply-add operation, we would need 1e16 such operations per second to emulate a human brain.

    A 3 GHz Pentium can do 1e10 floating point multiply-add operations per second, so a human brain is roughly equivalent to one million desktop computers. Therefore, Moore's law tells us that we still need 30 years of progress before we have a human-equivalent computer, but in 60 years a desktop computer will have the data processing power of a million human brains.

    If this is right, then the 1e5 networked computers that are currently used by Google are a tenth of the way there. And 1e4 networked computers (available at some other institutions) are 1% of the way there.

    So, if you're right (and I'm not saying you are or aren't), and we just need to work on software to take advantage of the hardware, then it is really pretty scary.

  • Re:Science by AI (Score:4, Interesting)

    by Slime-dogg (120473) on Wednesday April 06, 2005 @08:51AM (#12152898) Journal

    Who's to say that neurons operate in the same way as a computer's multiply-add operations? Another little problem is that you'll need additional programming to tell the computer how to emulate the communication and interaction between neurons. I imagine that this would take far more processing power than we could ever achieve.

    We may be able to emulate the parts, but you can't just throw the parts together in a heap and expect it to work. The sum of the parts is far more complicated than the parts themselves.

  • by skubeedooo (826094) on Wednesday April 06, 2005 @08:54AM (#12152910)
    I'm a dirty mathematician that doesn't deal in proofs (aka theoretical physicist), although I went to a seminar given by an eminent prof. in maths on the nature of proof.

    The talk was actually about why it doesn't have to be the case that really large numbers must satisfy the same rules as smaller, computable numbers. This is based on Gödel's proof, which says something about it being impossible to know whether an axiomatic system is free of contradictions.

    Anyway, he also touched on computer proofs. He was basically saying that (most) mathematicians do what they do because they enjoy creating and understanding logical structures. To some extent you neither create nor understand a computer proof and so it is intrinsically less interesting to a mathematician. Since the world of mathematics is driven mainly by what they find interesting, computer proofs haven't featured very highly in it.

  • by maxwell demon (590494) on Wednesday April 06, 2005 @09:33AM (#12153165) Journal
    While you don't actually prove the human (you can't, of course), you do generally take steps to ensure that you don't rely on humans with faulty logic.

    For example, if some random student came and said "I looked at the proof, and I think it is OK", you wouldn't give it as much weight as if some mathematician who has clearly demonstrated his profound command of logic in several papers went through the same proof and said "everything is OK." And for at least some time, the computer program would be more like the random student, unless you can prove it works correctly (of course this proof may be wrong as well, but it's another line of defense; 100% certainty is unfortunately not possible, even in math).

    Of course the situation is different if the program does find an error in the proof. Then you have a very narrow point to look at, and checking if the program is right in saying that step is wrong is probably not too difficult. Unfortunately this means bugs which cause wrong proofs to be judged as right will be much harder to detect than bugs which cause correct proofs to be judged wrong.

    Note that again this parallels the random student situation: If the random student comes and says "there's an error in the proof, you cannot follow blah from blubb", then checking if the student is right should be relatively easy (just look at the proof and see if that particular step is really wrong).
  • by Doug Dante (22218) on Wednesday April 06, 2005 @10:05AM (#12153466)
    This is in regular human talk. It's not a true proof.

    Imagine that you have a map that has five colors where every color is touching every other color.

    It's on a piece of paper beneath a coffee can.

    One color must be on the edge. There may be only one color, or there may be many, but at least one will exist.

    Move the coffee can until you see that color. Let's say red.

    Now imagine taking a marker and covering the rest of the page in red. (Alternatively, stretching the red area around the rest of the page).

    Now you have four colors that are all touching one another, and that are all touching the outer red page.

    Imagine that you have a tetrahedron. This is a 3D shape which has four sides. Imagine that each side of the tetrahedron has a color. Say, green, white, blue, and yellow.

    Imagine that you poke a hole in the tetrahedron and spread it out flat - placing it on the red piece of paper.

    No matter where you poke the hole, all four sides will touch one another.

    However, you must poke a hole so that all four colors are on the border, i.e. so that they all touch the red paper.

    Look at the green side. All of the sides are equivalent.

    If you poke a hole in one of the corners, the opposite side will be in the center when you lay it down, and it won't touch the red. It won't work.

    If you poke a hole on one of the edges, it's even worse. Two of the opposite sides will be in the center, and neither will touch the red. Again, it won't work.

    Worst of all, if you poke a hole directly in the center of the green side, then all three other sides will be in the center when you lay down the tetrahedron and none of them will touch the red. That won't work either.

    Therefore, there's no way to poke a hole such that all four sides will touch the red when the tetrahedron is laid down, so it was impossible to have a map with 5 colors, all of which were touching one another, in the first place.

    Came up with this in 1990. Never formalized it mathematically. It may not be a proof, but it makes logical sense to me.
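    For what it's worth, what this argues (the impossibility of five mutually adjacent regions, which is weaker than the four-color theorem itself) is the non-planarity of the complete graph K5, and that follows from a standard consequence of Euler's formula: a simple planar graph with v >= 3 vertices has at most 3v - 6 edges. A quick arithmetic check:

```python
from math import comb

def planar_edge_bound(v):
    """Consequence of Euler's formula: a simple planar graph with
    v >= 3 vertices has at most 3v - 6 edges."""
    return 3 * v - 6

v = 5
e = comb(v, 2)   # K5: all 5 regions pairwise adjacent -> C(5,2) borders
print(e, planar_edge_bound(v))   # 10 > 9: K5 cannot be drawn in the plane
```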

  • by Anonymous Coward on Wednesday April 06, 2005 @10:49AM (#12154018)
    You _may_ have the hardware for your great AI machine in 30 years, but I wouldn't count on the software being delivered on time. Also the human brain is one highly efficient computer: roject.html

    On top of that I think we will see add on modules for the brain (ie. bionics) before AI.

    "The difference between machines and human beings is that human beings can be reproduced by unskilled labor." - Arthur C. Clarke
  • by PDAllen (709106) on Wednesday April 06, 2005 @11:01AM (#12154161)
    It is very easy to produce a validator which correctly classifies valid and invalid proofs written in formal logic. It is very easy to prove that the validator does what it is meant to do.

    We don't know whether this abstract formal logic thing actually has any resemblance to the real world - that is not Gödel's theorem, which assumes logic to be valid and discusses potentially interesting frameworks for mathematics. Whether formal logic 'works' is simply something you have to believe; you cannot argue it, for the obvious reason. Whether you can use formal logic to demonstrate that your mathematical axioms are consistent is not obvious; Gödel's theorem states that in fact whenever your mathematical axioms are strong enough to produce interesting mathematics, you cannot demonstrate consistency.

    The reasons why mathematicians do not like computer proofs (as much as normal proofs, anyway) and do not like computer validation (always) are:

    Computer proofs tend not to give any insight into why a statement is true. They simply check vast numbers of cases, when there might exist a simple argument which gives understanding. However, there is usually interesting mathematics involved in reducing the original problem to something which can be computer checked.

    It is not easy to produce a program which will verify a proof written by a normal person, in the same way that it is hard to write a program which can spot contradictions in any argument written in natural English.

    An argument written out in formal logic, although easy to verify for a computer, is generally much, much longer than the same argument written by a normal person, and even if you wanted to read the telephone-directory sized proof, the interesting ideas are so dispersed that you would gain no understanding from the proof.

    So, a computer verifier would only be useful when a proof is written in formal logic - which only a computer would do for a proof of any interesting statement. But the computer proof won't give a mathematician any understanding of why the statement is true, so the mathematician doesn't like it. Furthermore, because formal logic is so long-winded, it doesn't seem likely that a computer will ever be able to prove interesting statements by a simple 'bash out formal logic until it works' approach. IMO, a computer program which can prove interesting mathematical statements in sensible amounts of time will be about as hard to write as (and probably not too dissimilar from) a Turing-capable program.
  • by Zelatrix (18990) on Wednesday April 06, 2005 @11:24AM (#12154447)
    ...it was impossible to have a map with 5 colors, all of which were touching one another, in the first place

    That's correct, I think. But it's not the same assertion made in the four-colour theorem, which is that you can always color the map using no more than four colours.

    Now since the four-colour theorem is actually true, I can't provide a counterexample, but imagine that you are busy colouring in your map and you get to the last area, which is bordered by at least four other areas, each of which is coloured differently. You are stuffed. Like I said, the four-colour theorem is true, so this just means you made a mistake somewhere with the rest of your colouring in - but proving that is difficult, and it's not the same as proving you can't have a map with five areas that all touch each other.

  • Re:Science by AI (Score:3, Interesting)

    by FuzzyDaddy (584528) on Wednesday April 06, 2005 @01:24PM (#12156142) Journal
    incompleteness theorem basically says that those additional axioms are every bit as good as their opposites

    Axioms don't usually have opposites. That's what makes deciding on good axioms nontrivial. The parallel postulate has two possible replacements, leading to two different geometries; neither is the opposite of the other. You could try to come up with more axioms (through every point not on a line there are exactly three lines parallel to the original line), but they might not lead to a logically consistent structure.

    Axioms are not arbitrary!

  • by PDAllen (709106) on Wednesday April 06, 2005 @01:24PM (#12156151)
    Mathematics _can be_ formalised. But no-one writes mathematics that way. First, because when a mathematician finds he has a proof the first thing on his mind is to publish ASAP to stop anyone else getting there first and taking all the credit, second because in practice writing in formal logic (or even mainly in formal logic) leads to long, boring, unclear proofs which do not enhance reputation the way a relatively short clear proof does. A boring proof is a sequence of equations and formal logic; this sort of thing is really only used when proving a little lemma that's almost obvious anyway. Most proofs will look something like the following (LaTeX) discussing a Frechet algebra F; B_n are the Banach algebras associated to the seminorms of F, the pi_n are projections from F to B_n, the d_n are the chain maps between the B_n:

    \begin{lemma} We may naturally identify continuous homomorphisms to $\mathbb C$ from $F$ and from the $B_n$, and so write
    \[\Delta(F,\mathbb C)=\bigcup_{n=1}^{\infty}\Delta(B_n,\mathbb C)\]
    If $\theta:B_n\rightarrow \mathbb C$ is a continuous homomorphism then so is $\theta\circ d_n:B_{n+1}\rightarrow \mathbb C$. Since $d_n$ has dense range, different $\theta$ give different $\theta\circ d_n$ and we may identify $\theta$ with $\theta\circ d_n$. Since also $\pi_n$ has dense range, we may identify $\theta$ with $\theta\circ \pi_n$. This allows us to write
    \[\Delta(F)\supseteq\bigcup_{n=1}^{\infty}\Delta(B_n)\]

    and it remains only to show that there are no extra continuous homomorphisms. \\

    Suppose then that $T$ is a continuous homomorphism from $F$ to $\mathbb C$. In particular, $T$ is continuous at $0$, i.e. there is an open neighbourhood $U$ of $0$ such that $|T|<1$ on $U$. This set is open in the inverse limit topology, i.e. is the intersection of the inverse limit space and an open set in the cross product of the $B_n$ with the product topology. $U$ may be assumed to be a set in a basis for the topology, so that it is the intersection of the $\pi_j^{-1}(U_j)$ for some $U_j$ open in $B_j$, up to some finite n.

    Since the $d_j$ are continuous, however, this intersection must contain $\pi_n^{-1}(V)$ for $V$ an open neighbourhood of 0 in $B_n$, the intersection of the inverse images of the $U_j$. \\

    We now show that $T=\theta\circ \pi_n$ for some $\theta\in\Delta(B_n)$, i.e. that $T(a)$ depends only on $a_n$, or that if $a_n=0$ then $T(a)=0$: if $a_n=0$, then $(ka)_n=0$ for all $k$, so $ka \in \pi_n^{-1}(V)$ and $|T(ka)|<1$, for all $k$. But then $k|T(a)|=|T(ka)|<1$ for all $k$, so $|T(a)|=0$ as required.
  • by sixpaw (648825) on Wednesday April 06, 2005 @01:55PM (#12156573)
    ...but it's old hat now. Seriously, we're actually coming up on the 30th anniversary of the Haken-Appel proof of the four-color theorem, the FSG classification has been around for quite a while now, and from my perspective most of the people fretting about the nature of proof these days are philosophers, not mathematicians. Most of the serious (and many of the amateur) mathematicians I know consider the computer an essential part of their toolbox; Mathematica never accidentally flips a sign while it's going through two pages of calculations for you. There are even journals devoted to computer-aided mathematical exploration: Experimental Mathematics is more than a decade old now.
  • Re:Science by AI (Score:3, Interesting)

    by danila (69889) on Wednesday April 06, 2005 @02:57PM (#12157283) Homepage
    No, this is basically bullshit. Penrose allows his irrational belief to determine his science, which is a big no-no for a scientist. He postulated that the human mind is not a computer and grasps at every straw to argue that this might be so. Since he is a renowned and respected mathematician, his views are given a modicum of respect, just sufficient to guarantee he will not shut up for about 20-30 more years (by then we will probably have self-conscious strong AI that passes the Turing test and more).

    This particular argument is patently stupid, but again, since Penrose has a big name, he gets away with it. The argument is as follows. Take two tiles that can tile the plane only in a unique aperiodic way (coming up with such tiles was one of Penrose's achievements). Give an infinite supply of such tiles to a human and a "computer". Ask them to start tiling the plane, and then ask whether the tiles can in fact tile the whole plane. Since the mosaic is aperiodic, according to Penrose the computer will not see a pattern, but a human will (the pattern being that you could always add a tile, as far as you tried, so the plane must be tileable). Penrose explains this by saying that humans have some magical "human consciousness" that allows them to have some "insight" about the world, to "understand" it, while a computer doesn't have it. He then proceeds to explain this with some crackpot theories about quantum effects in microtubules in human neurons, which is total bullshit, but again, his big name allows him to get away with spouting such garbage.

    His argument is, of course, wrong. Humans can easily see patterns, because we were programmed to see them. This, of course, doesn't mean we are always correct when we believe there is a pattern. For example, consider the following sequence of numbers: 1, 3, 5, 7... Is there a pattern? If you think I listed odd numbers, you can't actually be sure. It could have been a year, it could have been some random digits, it could have been part of a different sequence, etc.
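    To make that concrete, here is a small sketch (my own example, not the poster's): two different rules that agree on the prefix 1, 3, 5, 7 and then diverge, so the prefix alone cannot tell you which rule generated it. The second rule simply adds a polynomial term that vanishes at n = 1..4.

```python
# Two generating rules that both produce 1, 3, 5, 7 for n = 1..4.
odd = lambda n: 2 * n - 1
# The extra product term is zero for n = 1, 2, 3, 4, then kicks in.
other = lambda n: 2 * n - 1 + (n - 1) * (n - 2) * (n - 3) * (n - 4)

print([odd(n) for n in range(1, 6)])    # [1, 3, 5, 7, 9]
print([other(n) for n in range(1, 6)])  # [1, 3, 5, 7, 33]
```

    Any finite prefix fits infinitely many such rules, which is the whole point: seeing a pattern is a guess, not a proof.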

    If monkeys could understand what a Euclidean plane is, they could reach the same conclusions as humans just as quickly. 1) You place a tile, it fits. 2) You place another, it fits. Repeat 10 times. Wow, it must be a magical pattern! This is just a heuristic that we humans have. The conclusions it helps us come to are not necessarily correct. We are not better than computers at solving the halting problem. It's just that too often we are willing to shout the answer before we can be reasonably sure. Of course, sometimes we are lucky and the answer is correct. But we can program a computer to do just the same - it will gain the magical ability to "understand" and "see" the answer, at the price of often being completely wrong.

    Penrose is just a persistent moron, don't listen to him.
