
The End of Mathematical Proofs by Humans?

Posted by timothy
from the somehow-I-doubt-it dept.
vivin writes "I recall how I did a bunch of mathematical proofs when I was in high school. In fact, proofs were an important part of math according to the CBSE curriculum in Indian schools. We were taught how to analyze complex problems and then break them down into simple (atomic) steps. It is similar to the derivation of a physics formula. Proofs form a significant part of what mathematicians do. However, according to this article from the Economist, it seems that the use of computers to generate proofs is causing mathematicians to 're-examine the foundations of their discipline.' Critics of computer-aided proofs say that the proofs are hard to verify due to the large number of steps and hence may be inherently flawed. Defenders point out that there are non-computer-aided proofs that are also rather large and unverifiable, like the Classification of Finite Simple Groups. Computer-aided proofs have been instrumental in solving some vexing problems like the Four Color Theorem."


  • by carnivore302 (708545) on Wednesday April 06, 2005 @04:41AM (#12151987) Journal
    In the past, I've used the HOL Theorem Prover. It's a nice toy to play with if you want to get started in this area.
  • by willmeister (709686) on Wednesday April 06, 2005 @04:44AM (#12151996)
    I have no doubt that in 50 years (given current progress), creativity as we know it will be commonly implemented in advanced computers. Your brain is nothing more than a biologically advanced computer, complete with binary nerve signals and the works.

    Why can't human brain functions be copied and improved upon in circuitry, other than the currently prohibitive costs involved, and our failure to accurately determine the exact neuron mapping of anyone's brain? Even the dynamic neuron connections could potentially be emulated or mimicked by a computer.
  • Nothing too new? (Score:2, Informative)

    by xiaomonkey (872442) on Wednesday April 06, 2005 @04:57AM (#12152050)
    Using computer programs to prove theorems in mathematics is as old as the field of artificial intelligence itself. In fact, some of the initial excitement around AI originated from the existence of such programs as Newell and Simon's Logic Theorist and its successor, the General Problem Solver.

    IIRC - back in the early days of AI (1960s), some people in the field thought that in relatively short order computers would be a major source of such mathematical proofs. It hasn't happened yet, but that doesn't mean that it won't eventually.
  • by g1t>>v (121036) on Wednesday April 06, 2005 @05:04AM (#12152070) Homepage
    Then you'd better check your facts ... there must be dozens (if not hundreds---erhhh, speaking about checking one's facts :-) ) of systems in which mathematical theorems are routinely proved (HOL, PVS, Coq, Mizar to name just a few)!
  • by l-ascorbic (200822) on Wednesday April 06, 2005 @05:21AM (#12152123)
    A right wing rag that backed Clinton and Kerry (albeit grudgingly), supports gay marriage, legalisation of drugs, gun control, abolishment of the death penalty..?
  • by nigham (792777) on Wednesday April 06, 2005 @05:24AM (#12152135) Homepage
    If you take a grad school AI course, they'll make you do proofs the way a computer does them... maybe using propositional logic. The idea is to break up the problem into a set of statements that look quite ridiculous (e.g. NOT engine AND train AND NOT moving), and then to take pairs of these statements and mix and match. The result is that you determine your sequence of steps by simple trial and error, or by trying to combine the propositional symbols (AND, NOT, etc.) and the variables (train, etc.). Once you generate a proof, it's just a list of such statements which evaluates to a FALSE or a TRUE value, but if you want to understand the proof, it's hopeless.

    I doubt the human proof system will go away completely - even if we can check nasty theorem proofs using computers, we still need humans to sit and explain what they mean.
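    The mix-and-match procedure described above is essentially propositional resolution. Here is a minimal, hypothetical sketch (the clause encoding and the `refutes` helper are illustrative, not taken from any real prover):

```python
# Minimal propositional resolution sketch: a clause is a frozenset of
# literals; a positive literal is "engine", its negation "~engine".
# Illustrative toy only, not a real theorem prover.

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    """Yield every resolvent of two clauses."""
    for lit in c1:
        if negate(lit) in c2:
            yield frozenset((c1 - {lit}) | (c2 - {negate(lit)}))

def refutes(clauses):
    """Saturate by resolution; True if the empty clause is derivable."""
    clauses = set(clauses)
    while True:
        new = set()
        for a in clauses:
            for b in clauses:
                if a != b:
                    for r in resolve(a, b):
                        if not r:        # empty clause: contradiction
                            return True
                        new.add(r)
        if new <= clauses:               # fixed point, no contradiction
            return False
        clauses |= new

# Prove "moving" from: engine, train, (engine AND train) -> moving.
# The goal is negated and added; a refutation proves it.
kb = [frozenset({"engine"}),
      frozenset({"train"}),
      frozenset({"~engine", "~train", "moving"}),
      frozenset({"~moving"})]           # negated goal
print(refutes(kb))  # True: "moving" follows
```

    The printed derivation of such a prover is exactly the "list of statements" the comment describes: each resolvent is sound, but the sequence carries no human-readable intuition.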
  • by rxmd (205533) on Wednesday April 06, 2005 @05:32AM (#12152156) Homepage
    Three major obstacles with this approach (which has been tried BTW):
    • Logical problems with proofs for correctness. For example, it has been proven that no program can prove itself correct.
    • Correctness proofs are hard to do and incredibly tedious. Have you ever tried it? And no, you can't have a program do them, because you'd have to prove this program correct, which sends you right back to square #1.
    • You'd have to prove all sorts of other factors correct, including the operating system and hardware your program is running on. This leads to another set of interesting problems, including that "correct hardware" is useful only as a theoretical concept. What's a "correct computer" if there's a small probability that bits will spontaneously flip in memory, for example?
    In short: while it might seem elegant to prove the prover, then have everything else proved by this prover, this approach has little value in practice.
  • Mizar (Score:2, Informative)

    by ajb2718 (842302) on Wednesday April 06, 2005 @05:33AM (#12152160) Journal
    The most popular system for computer-aided formalized mathematics is Mizar. It even has its own journal for proofs written with it.
  • by Anonymous Coward on Wednesday April 06, 2005 @05:36AM (#12152170)
    The 4-color theorem only fails if you require non-contiguous areas to be the same color, as might happen if you wanted to color a map of countries with Alaska being the same color as the rest of the USA.

    If you aren't talking about maps anymore, and are willing to discuss arbitrary shapes, adding holes to the shapes can mean they will need more colors.

    In either of these cases, you can create situations where arbitrarily many colors are needed. However, the simple case of contiguous shapes on a continuous surface requires no more than 4 colors, even though more are frequently used.

  • Re:Theorem Proving (Score:2, Informative)

    by NitsujTPU (19263) on Wednesday April 06, 2005 @05:39AM (#12152177)
    Here's a quick article on the topic. Should I have prefaced my statement with "if P != NP"? I didn't think that we needed to stand up to such scrutiny.

    I must, however, admit to a certain level of interest in the topic, so, if you want to have a more detailed conversation on the topic, I'd be glad to have a chit-chat sometime.

    Feel free to post your proof that P = NP, or a paper referencing a linear-time FOL theorem prover or SAT solver ;-)
  • by sperxios10 (848382) on Wednesday April 06, 2005 @05:50AM (#12152213) Homepage

    I've been reading a lot of Penrose and here is my two cents about his views...

    Turing's theorem, in brief (neglecting soundness of algorithms), dictates that there is no single algorithm that can "prove" the non-termination of all algorithms.

    The above (combined with Gödel's theorem) means that no single program can prove all theorems, which I think is correct.

    So you have to use different algorithms for certain problems. But in order to "devise" those new algorithms, you need to solve the (Turing) problem of termination for those algorithms (and indeed, programmer-mathematicians do manage to solve it!).

    So, in essence, Penrose argued that humans are SOMEHOW more capable than computers, since they can (in principle) solve the termination problem of all algorithms.

    Now, Penrose also believes that this capability may originate from the human mind's access to pure truth (or whatever), but those are his beliefs, not his mathematical proofs. You are not obliged to follow them.

  • relax (Score:2, Informative)

    by mat.h (25728) on Wednesday April 06, 2005 @05:52AM (#12152217)
    You still have to tell the computer what to prove. Much of what mathematicians do is finding the right concepts and formulating propositions. Then you can try to prove these propositions.

    A fairly basic example: several centuries passed between Newton and Leibniz introducing derivatives and Cauchy and Weierstraß arriving at the formulation that's taught in high schools nowadays (at least where I live): the epsilon-delta definition of the limit, and going from there. Today, this stuff and the accompanying proofs look pretty simple (and they are), but without the right formalism/mental model, playing with infinitesimal quantities is somewhat of a black art. It took a long time to get there, and computers wouldn't have helped much.
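    For reference, the Cauchy-Weierstraß formulation the comment refers to can be written out; a standard statement of the epsilon-delta definition is:

```latex
% Epsilon-delta definition of the limit of f at a:
\lim_{x \to a} f(x) = L
\quad\Longleftrightarrow\quad
\forall \varepsilon > 0 \;\; \exists \delta > 0 \;\; \forall x :\;
0 < |x - a| < \delta \implies |f(x) - L| < \varepsilon
```

    Everything here is finitary and checkable, which is exactly what made the older infinitesimal reasoning tractable.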

  • Godel? (Score:2, Informative)

    by igb (28052) on Wednesday April 06, 2005 @06:24AM (#12152312)
    Automated theorem proving was the purpose of Russell and Whitehead's work in Principia Mathematica, amplified by the Hilbert Programme. Gödel exploded that little lot by showing that any formal system powerful enough to represent anything worthwhile contains true statements it cannot prove.

    Even a simple 'proof' task like Boolean satisfiability is NP-complete.


  • by spagetti_code (773137) on Wednesday April 06, 2005 @06:24AM (#12152314)
    Many proofs have been thought to have been found, only to be proven wrong years later - and that only after years of study by the mathematicians of the time. For example, from the article: this particular puzzle was twice 'solved' only to be 'unsolved' 11 years later. Consider also Wiles' proof of Fermat's conjecture - its first version was found to be flawed and had to be redone.

    So the question is - how are we going to prove/disprove a computer program that proves a theorem? Program complexity has meant that all but the most trivial programs cannot be 'proven'.

    The solution, it seems to me, is per the article: get the s/w to output a series of steps using formal logic. Any series of formal-logic steps should be confirmable by a 'formal logic validator', and that is the only program we need to prove correct. That will be non-trivial, but it will then open the floodgates: any hacked-up piece of code can be used to generate provable logic.

  • by Maljin Jolt (746064) on Wednesday April 06, 2005 @06:28AM (#12152324) Journal
    The functional language Haskell is the tool of the day. However, there is nothing in Haskell that could not be done in Prolog with a proper problem-domain language defined by a grammar. Haskell is just fancy looking out of the box. Unlike Lisp.
    reverse :: [a] -> [a]
    reverse = foldl (flip (:)) []
    applied to a finite list of any type, returns a list of the same elements in reverse order.
  • by A beautiful mind (821714) on Wednesday April 06, 2005 @06:50AM (#12152389)
    Um. A mathematical proof is just a sequence. It doesn't have to be accepted to be true. According to the axiomatic method of proving, you just need these things:
    • An alphabet (not THE alphabet, just one)
    • formulas
    • derivation rules
    An axiom is just a derivation rule that derives from the empty set.

    Basically, a proof, according to the axiomatic method, is just a finite sequence of formulas, each of which can be created by the allowed derivation rules. The whole point is that a proof for A HUMAN and a mathematical proof are different things. The axiomatic system is not perfect, either. The whole Hilbert plan is proven to be impossible to carry out, thus it is not possible to prove from within the system that there are no contradictions in the axiomatic system.

    I think the "MIU" formal system is commonly used to demonstrate how it works, basically. It is too lengthy, and I'm lazy.
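    The MIU demonstration the comment alludes to (Hofstadter's toy formal system: alphabet {M, I, U}, one axiom "MI", four rewrite rules) is small enough to enumerate mechanically. A sketch, with function names of my own choosing:

```python
# The MIU formal system: axiom "MI" plus four rewrite rules.
# Breadth-first enumeration of its theorems up to a length bound.
# Illustrative sketch; helper names are hypothetical.

def successors(s):
    """All strings reachable from s by one rule application."""
    out = set()
    if s.endswith("I"):                    # rule 1: xI -> xIU
        out.add(s + "U")
    if s.startswith("M"):                  # rule 2: Mx -> Mxx
        out.add(s + s[1:])
    for i in range(len(s)):
        if s[i:i + 3] == "III":            # rule 3: replace III with U
            out.add(s[:i] + "U" + s[i + 3:])
        if s[i:i + 2] == "UU":             # rule 4: drop UU
            out.add(s[:i] + s[i + 2:])
    return out

def theorems_up_to(limit):
    """All MIU theorems no longer than `limit` characters."""
    seen, frontier = {"MI"}, {"MI"}
    while frontier:
        frontier = {t for s in frontier for t in successors(s)
                    if len(t) <= limit} - seen
        seen |= frontier
    return seen

thms = theorems_up_to(8)
print("MIU" in thms)   # True: MI -> MIU by rule 1
print("MU" in thms)    # False: the count of I's stays nonzero mod 3
```

    The famous point of the exercise is that "MU" is never derivable, which is easy to see from outside the system (an invariant on the number of I's) but cannot be stated inside it.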
  • by nikitad (693684) on Wednesday April 06, 2005 @06:57AM (#12152406) Homepage
    If mathematical proofs were "nothing but a manner of convincing someone", how could they be verifiable by a computer?

    Fully formal mathematical proofs depend on nothing but the ability to distinguish characters, to compare text strings, and to perform substring substitution.

    To your example (2 + 2 = 4). In formal arithmetic, based on the Peano axioms, there is one primary operator, let's call it s: N -> N, where s(n) is interpreted as (n + 1). "2" is _defined_ as s(1), 3 is defined as s(2), and 4 is defined as s(3). So one has to prove that s(1) + s(1) = s(s(s(1))).

    By the definition of addition (remember, addition is not a fundamental notion in formal arithmetic; it's defined in terms of the s-operator), a + s(b) = s(a + b) and a + 1 = s(a), so we have

    s(1) + s(1) = s(s(1) + 1),
    s(s(1) + 1) = s(s(s(1))),
    s(s(s(1))) = s(s(s(1)))

    So, where does the proof above depend on anything but mechanically verifiable string manipulations?

    P.S. Of course mathematical formulae are not strings but rather trees, but this doesn't change a thing.
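    The comment's derivation can indeed be carried out mechanically. A minimal sketch under the comment's own definitions (the term representation and helper names are mine):

```python
# Peano-style sketch: numerals are nested applications of the successor
# s, and addition is defined by exactly the two equations above:
#   a + 1    = s(a)
#   a + s(b) = s(a + b)
# Terms are plain tuples, so "proof checking" is tuple comparison.
# Illustrative only.

ONE = "1"

def s(n):
    return ("s", n)

def add(a, b):
    if b == ONE:                # a + 1 = s(a)
        return s(a)
    _, pred = b                 # b = s(pred)
    return s(add(a, pred))      # a + s(b') = s(a + b')

two = s(ONE)                    # 2 := s(1)
three = s(two)                  # 3 := s(2)
four = s(three)                 # 4 := s(3)

# 2 + 2 = 4 reduces to a purely mechanical term comparison:
print(add(two, two) == four)    # True
```

    Each recursive step of `add` corresponds to one line of the derivation in the comment, which is the point: nothing beyond symbol manipulation is involved.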
  • Penrose (Score:1, Informative)

    by Anonymous Coward on Wednesday April 06, 2005 @06:59AM (#12152412)
    A dekko at Penrose's "The Emperor's New Mind" and "Shadows of the Mind" is worthwhile. He presents what are (IMHO) fairly convincing arguments that what mathematicians do (actually, what humans do, but he uses mathematicians as the basis for his argument) is not computable (in the Turing sense).
  • by Anonymous Coward on Wednesday April 06, 2005 @07:06AM (#12152428)
    Yeah, it's generally believed that Fermat was full of shit. For one thing, he later published a proof that x^4 + y^4 = z^4 has no solutions. If he had a proof of the generalized form that he claimed to have, why would he bother publishing this more specific, far less interesting, proof?
  • Re:Science by AI (Score:5, Informative)

    by Cyberax (705495) on Wednesday April 06, 2005 @07:16AM (#12152457)
    Well, the Economist should learn some REAL math. The first thing they should learn is mathematical logic.

    It has been PROVEN (and it's a well-known fact) that it's impossible to create a Turing machine which will determine whether an arbitrary given expression is true or false (see the incompleteness theorem for details).

    For example, it's impossible to find an answer to CH (the continuum hypothesis) in ZFC (Zermelo-Fraenkel set theory plus the axiom of choice).

    In short: some problems can't be solved in existing theories; they require creating new theories with new axioms. That is a non-formalizable process (this is also proven), so no computer can do it.
  • by Anonymous Coward on Wednesday April 06, 2005 @07:24AM (#12152488)
    "Full of shit" is a bit of an exaggeration.

    The works we have from Fermat are ones that he never intended to be published.

    IMO the most likely explanation is that he found a false proof and later realised it was false, but for some reason did not destroy that one note.

  • by g1t>>v (121036) on Wednesday April 06, 2005 @07:31AM (#12152521) Homepage
    Yes, but a "correct mathematical proof" only establishes truth relative to the axiom system used. Because how will you ever prove that the axioms are true? (That's why they're called axioms---you cannot prove them and just have to assume they're "true".) In other words, there's no such thing as "absolute truth". (This is what Russell meant when he described mathematics as "the subject in which we never know what we are talking about, nor whether what we are saying is true.")
  • by Bazzalisk (869812) on Wednesday April 06, 2005 @07:33AM (#12152524) Homepage
    In fact Ramanujan thought he had a proof, which would have been doable using only things known in Fermat's day, and was short, elegant and pretty.

    Unfortunately, as Hardy pointed out, that proof assumed that all quadratic extensions of the rational numbers are unique factorisation domains - which isn't true ;-/

    It seems very likely that Fermat's proof was of a very similar sort.
  • by kisak (524062) on Wednesday April 06, 2005 @08:11AM (#12152667) Homepage Journal
    What was Fermat's proof (if it existed)? It would surely have been far more elegant than the modern version. That doesn't make the modern version wrong, just less pure, I feel.

    Mathematicians think they know what Fermat thought was the "proof" that he could not fit in the margin, since Fermat used a similar strategy for another problem. Euler was the one who applied Fermat's strategy to Fermat's last theorem explicitly, and showed that it did not give a full proof as Fermat had hoped. It might be that Fermat himself tried and then gave up, or that he was happy to have "solved it" and looked for other things to prove.

    I think you (and most people) misunderstand the reason Fermat's last theorem has such a central place in math history. But first let's discuss the reason the problem became so well known: it is such an easy problem to state and to understand, yet no one was able to use "simple" math to prove it. Even Fermat himself thought the problem should be fairly straightforward to solve, and it has made generations of people with a curiosity for math look for proofs, some even thinking they had found one. It is also a problem some of the greatest minds of math did not manage to solve. Fermat, Euler, Lagrange, Gauss, Abel, Riemann, etc. all had a try and did not solve it. Which math wannabe wouldn't want to solve something this group of people did not manage?

    Now, even though this has given Fermat's problem a lot of publicity, the number one reason it has been important is all the beautiful maths that has been discovered by mathematicians trying to solve it. Fermat's theorem in itself is not interesting. It is not like the Riemann hypothesis, which, if proven to be false, would make much of modern maths untrue, or at least force mathematicians to look for new proofs. This is because you can prove much interesting stuff if you assume the Riemann hypothesis is true; the problem, of course, is that this is not yet proven. If Fermat's theorem had been shown to be untrue, that would have been surprising, but it would not have made large parts of maths collapse.

    So, the modern proof of Fermat's theorem is the end of a long journey which has led to some very deep mathematical knowledge, and in a way Wiles' proof is much more interesting in its own right than the fact that it also proves Fermat was right in his guess. A "simple" proof (watch out when mathematicians use the words simple or trivial) of Fermat's problem would give undeniable bragging rights, since you could say you solved a problem Gauss couldn't solve with the maths Gauss knew. But again, it would probably be more of a huge accomplishment for one person than a huge breakthrough in maths.

    A last comment: the reason Wiles' proof is long is not because math is verbose, far from it. It is because Wiles is able to connect what would seem to be two unconnected branches of mathematics, showing that problems in one of the branches can be restated as problems in the other. This is not something you do in a few pages. And the importance of it becomes clear if you consider that what can be an unsolvable problem in one branch of maths might be reformulated as a solvable problem in the other. Math is always about trying to find ways to solve something as simply as possible. Not something computers are very good at, so no Abel prize for Big Blue for a while, I think.

  • Re:Science by AI (Score:3, Informative)

    by dynamol (867859) on Wednesday April 06, 2005 @08:19AM (#12152709) Homepage
    It is quite a large step to go from raw computing power to intelligence. Sure, computers will eventually have way more processing power than we humans do... hell, they do right now if you assign them to a certain task, but that is a far cry from being intelligent. With that said, I do side with the camp that says computers will do most mathematical proofs in the coming decades. Why? Because researchers will find a way to get computers focused on this task, and as I already mentioned, computers are way more powerful than our brains on a focused task.
  • Not happening yet (Score:2, Informative)

    by kitty tape (442039) on Wednesday April 06, 2005 @08:35AM (#12152796) Journal
    Being at least tangentially in the field of theorem proving, I can say that this is completely ridiculous. Replacing human proofs is not going to be possible until we at least have mechanical provers which can really do induction. Many cannot do it at all, none can do it well. Furthermore, quantifier heuristics are often shaky at best. This is the wall that the software verification community has been hitting for years.
  • by Kjella (173770) on Wednesday April 06, 2005 @08:52AM (#12152902) Homepage
    You would also need a bunch of biologists who can tell you exactly how the brain works, and a group of computer programmers who can translate that into code. Also remember that a brain is a living organism where the neural pathways are changing. In a computer, you must either connect all-to-all, or have a computer that can rewrite its own data paths.

  • by Len Budney (787422) on Wednesday April 06, 2005 @09:28AM (#12153139)

    If you can't independently examine and verify your "proof" then how can it be considered proof of anything?

    That's easy. Speaking as a PhD mathematician, there's nothing disturbing at all about these computer proofs. They're examples in which a computer was programmed to generate a perfectly standard proof, except that it's extremely long.

    Checking the proof is not hard: it suffices to verify that the program emits only correct inferences. That's nothing more than a standard human-generated proof. In addition, a verifier can be coded by someone other than the original author, to check the validity of the inferences generated by the first program. The checker's algorithm can also be verified using a standard human proof, and would be used to confirm that a bug didn't result in an incorrect proof.

    Note that Gödel's incompleteness theorem has nothing to do with these programs: they don't generate all possible proofs. They only generate one specific type of proof per program. Each program is easy to verify.

    You could call the software correctness proofs "meta-proofs", but that's just being coy. They're perfectly legitimate proofs, and they are sufficient to prove the correctness of proofs generated by the program.

  • Belief in a proof (Score:2, Informative)

    by vagabond_gr (762469) on Wednesday April 06, 2005 @09:34AM (#12153173)
    As contradictory as it may sound, there is always a notion of "belief" in a proof. "Proving" essentially means "persuading everyone that something is true", and the only difference between mathematics and other sciences is that the former constructs everything from very elementary axioms using basic deduction rules, so that it is (usually) clear that a proof is correct.

    However, it is common to use deduction rules incorrectly, to apply theorems without meeting all their preconditions, etc. When a proof becomes complicated, doubts arise.

    But that's exactly why we can have much more faith in computer proofs. A computer can only use very specific proof techniques, written formally in some kind of logic. All theorems must also be stated very precisely. And the proof can also be checked mechanically, possibly using different software to avoid bugs etc. Size is irrelevant, as long as the basic proof technique is rock solid.

    On the other hand machines are stupid, you need humans to program them and prove the correctness of every proof technique they use.

    PS. Reading this article was very interesting, given that I'm currently attending a computer science conference (ETAPS) one of the main topics of which is logic and computer proofs :)
  • by Anonymous Coward on Wednesday April 06, 2005 @09:34AM (#12153174)
    The "defenders" argument in this case strikes me as a bit odd. While I'm not completely conversant with the theory behind this (better mathematicians please correct me), the Classification is a huge work for the simple reason that it attempts to cover a huge amount of territory. Simple groups are one of the fundamental "building blocks" of groups, somewhat akin to prime numbers for the integers. While not all groups are built up from simple groups, they are fundamental to group theory, and there's a boatload of them (including one very strange one called the Monster group, a simple group of order 808017424794512875886459904961710757005754368000000000).

    Were such a classification attempted by computers, it would swell in size geometrically (the "proof" as it stands is a body of work numbering roughly 10,000 pages of advanced, dense mathematics; people say that roughly four people in the world understand the proof fully). If it were done by computer, probably no one would understand it. Plus, as was demonstrated with the Monster group, computations to verify some of the assumptions would be prohibitively expensive, possibly on the order of breaking some strongish crypto.
  • Re:Science by AI (Score:3, Informative)

    by Jagasian (129329) on Wednesday April 06, 2005 @10:03AM (#12153446)
    Yes, you are grossly incorrect. Most neural networks do not scale, and in fact perform best when they are smaller in size. Furthermore, they are such a gross approximation of a human's neurons that they don't really model them at all. Finally, neural networks do a very simple and basic operation: typically they associate a numerical value with another numerical value. There is a lot more to intelligence than that!
  • by call -151 (230520) on Wednesday April 06, 2005 @10:09AM (#12153509) Homepage
    Actually, the proofs under discussion are not generated by a computer, nor do they use "computer reasoning systems" like Prolog. These are computer-aided proofs, where the outline of the general argument is constructed by a (human) mathematician, but some of the details are reduced to computations, which are done by computer. I don't know in what language these particular computations were done, but I know that the computations for the computer-aided proof of the double-bubble conjecture were done in C++, and the source code is available if you want to have a look. I am familiar with other such computer-aided proofs that use a diverse set of languages, including C, assembly, Perl, GAP, Magnus, Mathematica, AXIOM, and many others. Mathematicians, like other researchers, tend to have diverse opinions about what suitable computational tools are, and also tend to use the tools with which they are most familiar, even if that tool is not really optimal for the computational task.
  • by Anonymous Coward on Wednesday April 06, 2005 @10:49AM (#12154026)
    Actually there are at least two ways.

    The HOL group treats proofs as a data type. New instances of this type can only be constructed using a small set of inference rules. Therefore, if the system creates a proof, then you know that it has been created only by using these inference rules. Hence, trusting the inference rules is sufficient to trust whatever proofs are offered.

    A second way is the use of "reflection". That is, suppose that you had a "tautology checker" program which, given some proposition, returns true only if that statement is a mathematical tautology. And suppose that you prove this program works correctly. Then you can always trust the results of this program, and so any time it says that something is a tautology, you can believe it because of the proof you have done.

    There is an excellent paper by John Harrison that discusses these issues.
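    The "tautology checker" in the reflection approach can be sketched as a brute-force truth-table procedure. This is a toy under my own encoding of formulas as nested tuples, not HOL's actual checker:

```python
# Toy tautology checker: a formula is either a variable name (str) or
# a tuple ("not"/"and"/"or"/"implies", subformulas...). It is decided
# by enumerating every truth assignment. Illustrative sketch only.
from itertools import product

def eval_formula(f, env):
    if isinstance(f, str):                 # propositional variable
        return env[f]
    op, *args = f
    if op == "not":
        return not eval_formula(args[0], env)
    if op == "and":
        return eval_formula(args[0], env) and eval_formula(args[1], env)
    if op == "or":
        return eval_formula(args[0], env) or eval_formula(args[1], env)
    if op == "implies":
        return (not eval_formula(args[0], env)) or eval_formula(args[1], env)
    raise ValueError("unknown operator: " + op)

def variables(f):
    if isinstance(f, str):
        return {f}
    return set().union(*(variables(a) for a in f[1:]))

def is_tautology(f):
    """True iff f holds under every truth assignment."""
    vs = sorted(variables(f))
    return all(eval_formula(f, dict(zip(vs, bits)))
               for bits in product([False, True], repeat=len(vs)))

print(is_tautology(("or", "p", ("not", "p"))))  # True: excluded middle
print(is_tautology("p"))                        # False: satisfiable only
```

    The reflection idea is that once `is_tautology` itself is proven correct, each of its "True" answers can be admitted as a theorem without replaying a full formal proof.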
  • by Anonymous Coward on Wednesday April 06, 2005 @11:44AM (#12154687)
    Fancy shmancy.

    (defun rev (a &optional res)
      (if (null a)
          res ;; haskell line 1
          (rev (rest a) (cons (first a) res)))) ;; haskell line 2

    This reminds me of when someone argued that the Haskell example of Quicksort couldn't be done as elegantly in Lisp. Turns out it's trivial to write it *exactly* the same way in Lisp as in Haskell, and it's shorter to boot.
  • by TedCheshireAcad (311748) <ted&fc,rit,edu> on Wednesday April 06, 2005 @12:02PM (#12154957) Homepage
    Do you have a proof for 1=1?

    = is an equivalence relation which is, by definition, reflexive.
  • Re:Science by AI (Score:3, Informative)

    by astar (203020) <> on Wednesday April 06, 2005 @12:35PM (#12155447) Homepage
    This thread tends to confuse provability with truth, a distinction that Gödel clarified. Gödel's result applies to any interesting formal system, of which logic is one. The result is that in any interesting formal system, either some things are unprovable but we believe them to be true, or some things are provable and their negation is provable also. So it is hard to make the further claim that any formal system is true.

    The continuum hypothesis, due to Cantor, has been shown, partly by Gödel, to be consistent with the usual axioms, but not provable. The continuum hypothesis is that the cardinality of the power set of the integers is equal to the number of points on a line. Gödel spent his last years searching for an axiom to add to the standard axioms such that the continuum hypothesis would be provable.

    Gödel was a Platonist and considered intuition to be necessary to mathematics, and his theorem is generally considered to prove this. One should be cautious in that intuition here is a technical term, a generalization of Kantian intuition.

    Here is my idiosyncratic take. The Timaeus by Plato lays down the claim generally called the hypothesis of the higher hypothesis: in case of contradiction, there is an axiom that can be added to resolve the contradiction. Gödel says, in effect, that it is good there is such an axiom, since you have to add it. And the process of adding it involves creativity. Take this with a grain of salt, but the Timaeus was a favorite of Gödel.

    Regarding the limits of Gödel's proof, it does not necessarily apply to infinitely large axiom sets. So perhaps God has an infinite axiom set, in contradiction to the usual claim that God does not use axioms.
  • Re:Science by AI (Score:4, Informative)

    by danila (69889) on Wednesday April 06, 2005 @01:06PM (#12155912) Homepage

    The theorem says that there are either true but unprovable statements, or statements that are both provable and refutable. An interesting formal system is either incomplete or contradictory (it can be both, but it doesn't have to be).
  • Re:Science by AI (Score:2, Informative)

    by sixpaw (648825) on Wednesday April 06, 2005 @01:35PM (#12156316)
    However, the fact that there are some truths that are literally inaccessible from the postulates certainly suggests that there may be others that are accessible only by a very large number of steps, effectively requiring computers. I wonder if anybody has ever attempted to prove this?
    Yes; in fact, this is true for even very simple logical systems. Presburger arithmetic (the theory of the integers with addition but no multiplication) has theorems that require double-exponential time to decide -- i.e., O(2^(2^(kn))), where n is the length of the theorem in question. Wikipedia has a page on it.
  • by Anonymous Coward on Wednesday April 06, 2005 @01:45PM (#12156437)
    You might also want to take a look at Twelf.
  • by sixpaw (648825) on Wednesday April 06, 2005 @01:46PM (#12156459)
    While what you've proven isn't the four-color theorem (as another poster noted), you have shown a piece of another mathematical result, Kuratowski's theorem: the complete graph on five nodes (i.e., a graph of five nodes where each one is linked to every other) can't be drawn in the plane. The rest of the theorem says essentially that the six-node graph where each node in one group of three is connected to each of the three nodes in the other group (known as 'K(3,3)') also can't be drawn in the plane (this is sometimes called the 'Utility Problem' -- try to connect 3 utilities to 3 houses without crossing lines), and furthermore any graph that can't be drawn in the plane has a piece that 'looks like' one of these two graphs, in a mathematically formalizable sense.
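    The edge-counting argument behind both cases can be checked directly: Euler's formula implies a simple planar graph has at most 3v - 6 edges, and a triangle-free (e.g. bipartite) planar graph at most 2v - 4. A quick sketch (helper names are mine):

```python
# Edge counts for the complete graph K_n and complete bipartite K_{m,n},
# checked against the planarity bounds that follow from Euler's formula:
#   planar simple graph (v >= 3):        e <= 3v - 6
#   planar triangle-free graph (v >= 3): e <= 2v - 4
# This illustrates the non-planarity argument; it is not a general
# planarity test.

def complete_edges(n):
    return n * (n - 1) // 2

def bipartite_edges(m, n):
    return m * n

# K5: v = 5, e = 10, but 3v - 6 = 9, so K5 cannot be planar.
v, e = 5, complete_edges(5)
print(e, 3 * v - 6, e <= 3 * v - 6)   # 10 9 False

# K(3,3): v = 6, e = 9; bipartite graphs are triangle-free,
# but 2v - 4 = 8, so K(3,3) cannot be planar either.
v, e = 6, bipartite_edges(3, 3)
print(e, 2 * v - 4, e <= 2 * v - 4)   # 9 8 False
```

    Kuratowski's theorem is the much deeper converse direction: every non-planar graph contains a subdivision of one of these two graphs.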
  • Re:Science by AI (Score:2, Informative)

    by jerald_hams (725369) on Wednesday April 06, 2005 @01:47PM (#12156471) Journal
    I see these brain-computer processing power comparisons on slashdot often, and they make me cringe.

    The brain is *not* a serial computer, and cannot be compared to one in any meaningful way.

    1) Parallel computations cannot necessarily be simulated on a serial computer.

    2) Why do you assume that a neuron can be simulated with a multiply-add operation? Even if the perceptron model of the neuron, where a scalar output is the linear combination of scalar inputs, had anything to do with the brain (which it almost certainly doesn't), you'd need a variable-length addition (between 1 and 100,000 inputs). Furthermore, to make the perceptron useful, you need some form of non-linear transformation on the output.

    3) Many of the 1e11 cells you're including in your calculation are used for specific, non-conscious tasks.

    4) What about the 1e13 neuroglia, whose function is still poorly understood, but have been shown to have an active role in the brain's activity? (reacting to and initiating neural activity)

    5) 1e2 "operations per seconds" (firing rate?). Action potentials are not operations performed on data, they are the data.

    Ugh, I could go on for a while...

    But please, please... the brain != serial computer. The brain is not comparable to a serial computer. And even if the two could be compared, no one today knows enough to do it. So stop.
