
Achieving Mathematical Proofs Via Computers

Posted by timothy
from the send-picture-of-abacus dept.
eldavojohn writes "A special issue of Notices of the American Mathematical Society (AMS) provides four beautiful articles illustrating formal proof by computation. PhysOrg has a simpler article on these assistant mathematical computer programs and states 'One long-term dream is to have formal proofs of all of the central theorems in mathematics. Thomas Hales, one of the authors writing in the Notices, says that such a collection of proofs would be akin to the sequencing of the mathematical genome.' You may recall a similar quest we discussed."
  • godelstheorem? (Score:4, Insightful)

    by retchdog (1319261) on Thursday November 06, 2008 @05:52PM (#25667165) Journal

    Why is this tagged "godelstheorem"? It's not like incompleteness magically applies only to electronic computers, as opposed to meatbags...

    • Re:godelstheorem? (Score:5, Insightful)

      by QuantumG (50515) * <qg@biodome.org> on Thursday November 06, 2008 @06:02PM (#25667293) Homepage Journal

      I honestly think they need to stop teaching the halting problem to freshmen CS majors. They're just too inexperienced to understand that theory and practice are two different things.. so this whole "limits of computation" thing stifles their enthusiasm.

      • Re:godelstheorem? (Score:5, Insightful)

        by AWhistler (597388) on Thursday November 06, 2008 @06:21PM (#25667603)
        Not only that, but people should stop using this as a crutch in general. The journey is worth the effort, even knowing that you can never reach the end. This is why I agreed with Gödel, Escher, Bach by Hofstadter and disagreed with The Emperor's New Mind by Penrose.

        One of Penrose's conclusions was that any attempt at artificial intelligence is necessarily incomplete, so it won't be possible, while Hofstadter said that it is possible to successively approximate something intelligent, and we can learn a LOT about ourselves in the attempts, and that in itself is worth it.

        At least that is one of the many things I got from the two books.
        • Re:godelstheorem? (Score:5, Insightful)

          by BitterOak (537666) on Thursday November 06, 2008 @06:56PM (#25668121)

          One of Penrose's conclusions was that any attempt at artificial intelligence is necessarily incomplete, so it won't be possible, while Hofstadter said that it is possible to successively approximate something intelligent, and we can learn a LOT about ourselves in the attempts, and that in itself is worth it.

          Not only that, but Penrose doesn't offer any actual proof that AI is impossible, merely speculative reasoning. Therefore it seems doubly important that we continue our attempts to advance AI. Firstly, that the journey itself is worth the effort, and secondly we really need to find out for ourselves if it is possible.

          • Re:godelstheorem? (Score:5, Informative)

            by kohaku (797652) on Thursday November 06, 2008 @07:47PM (#25668879)
            Why should AI be impossible? Surely it must be possible, with the human brain as evidence? One might argue that the brain is merely a biological computer not fully understood. Unless there is a "higher intelligence" giving human beings thoughts, then AI must be possible. I suppose this is analogous to a Human "telling" a machine what to think, though, and gives rise to the "who created the creator" argument, but that's getting a little offtopic...
            This comment is somewhat similar to one below [slashdot.org] for which I apologise, but I wanted to expand on the point slightly :)
            • Re:godelstheorem? (Score:4, Interesting)

              by Eli Gottlieb (917758) <<moc.liamg> <ta> <beilttogile>> on Friday November 07, 2008 @12:07AM (#25671503) Homepage Journal

              One might argue that the brain is merely a biological computer not fully understood.

              Or, perhaps, humans could be nondeterministic and therefore not computers at all.

              • Re: (Score:2, Interesting)

                While computers are nothing more than finite state machines (with an insane amount of states), their theoretical base can be found in Turing Machines and.... those come in a non deterministic flavour [wikipedia.org]. Take note of the "Equivalence with DTMs" section.

                So, if the human brain works non-deterministically, it can be simulated by a deterministic "brain". If this doesn't require infinite amount of states (considering what we know of the universe, this most likely won't be the case), it can thus be simulated agai
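                The equivalence argument can be made concrete. Here's a hedged sketch (a toy, not a real Turing machine simulation): a nondeterministic machine "guesses" a subset of numbers summing to a target, and a deterministic program simulates it by exploring the tree of choices breadth-first.

```python
from collections import deque

# Deterministic breadth-first exploration of a nondeterministic
# machine's choice tree.  Toy example: "guess" a subset of nums
# that sums to target.
def nd_subset_sum(nums, target):
    # Each state: (index into nums, running total, choices so far)
    queue = deque([(0, 0, [])])
    while queue:
        i, total, chosen = queue.popleft()
        if total == target:
            return chosen            # an accepting computation path
        if i == len(nums):
            continue                 # this branch rejects
        # Nondeterministic branch: take nums[i] or skip it.
        queue.append((i + 1, total + nums[i], chosen + [nums[i]]))
        queue.append((i + 1, total, chosen))
    return None                      # no accepting path exists

print(nd_subset_sum([3, 7, 1, 8], 11))  # → [3, 7, 1]
```

The deterministic simulation pays an exponential blowup in states, which is exactly the trade-off the "Equivalence with DTMs" section describes.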

            • by Xest (935314)

              Realistically I think what we'll see is convergence. We'll start to produce biological computers and we'll understand how to develop our own biological systems as such. Alternatively quantum computing may indeed provide computers powerful enough to develop systems capable of acting just like real, complex biological systems.

              Then it just comes down to the usual argument of semantics. Have you created artificial intelligence, or has artificial intelligence failed because you instead just created real intelligence?

            • by umghhh (965931)

              I would actually welcome it if somebody invented intelligence. I do not care whether it is called artificial or anything else, but at least we would have something. What we have now instead -- well, look around for the answer.

        • Re:godelstheorem? (Score:5, Insightful)

          by Draek (916851) on Thursday November 06, 2008 @08:05PM (#25669113)

          One of Penrose's conclusions was that any attempt at artificial intelligence is necessarily incomplete, so it won't be possible

          But wouldn't Godel's theorem imply that it'd be impossible to build a flawless, all-encompassing intelligence, not necessarily an imperfect one? fsck, even I as a human (allegedly the "superior" intelligence) sometimes feel that my decisions are based solely on the output of a Random() call in my brain rather than logical thought, no reason why a machine has to be different.

          AI isn't about trying to build God, it's just about something that can learn new stuff, or at least that's my take of it.

          • by Kagura (843695)

            AI isn't about trying to build God ...

            I hope we reach the singularity in my lifetime. But considering the state of AI at this point in time, I will be long dead before immortality arrives.

          • Re:godelstheorem? (Score:5, Insightful)

            by jonaskoelker (922170) <jonaskoelker@gn u . org> on Thursday November 06, 2008 @08:58PM (#25669757) Homepage

            But wouldn't Godel's theorem imply that it'd be impossible to build a flawless, all-encompassing intelligence, not necessarily an imperfect one?

            Gödel's theorem concludes, simplified a bit, that it's impossible to know the truth, the whole truth and nothing but the truth about sufficiently complex math. In this context, sufficiently complex means "anything that includes addition and multiplication of natural numbers".

            An AI may be able to prove that sum(range(1, n+1)) == n*(n+1)//2 for all natural numbers n; that is, it may output a string that's a valid proof. Smart ones can, dumb ones can't. Just like humans. Intelligence is what limits you.
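            (For what it's worth, the quoted identity is easy to spot-check mechanically. A brute-force check like the one below is evidence, not a proof -- the actual proof is by induction:)

```python
# Spot-check Gauss's formula sum(1..n) == n*(n+1)/2 for small n.
# Passing for n = 0..999 is evidence, not a proof; the proof is
# by induction on n.
for n in range(1000):
    assert sum(range(1, n + 1)) == n * (n + 1) // 2
print("holds for n = 0..999")
```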

            But if a set of axioms and inference rules don't allow for a proof of a given theorem T, no matter how smart you are, even if your silicon brain is smarter than all of mankind, monkeykind and birdkind added up, you can't transcend any limitation that's found wholly outside your intelligence. As in this case, where it's the nature of the mathematics you've made that prevents T from being proven, and not your inability to find the proof.

            Similarly, if we agree that no evidence or reasoning can prove or disprove the existence of god, then no AI can know whether god exists: it's not your knowledge or ability to reason that limits you, it's the nature of knowledge and reasoning itself.

            Looked upon that way, Gödel's theorem doesn't say anything about AI. It says something about the world.

            I don't know what a "perfect" intelligence is. One that can solve all NP problems in O(1) time and space? Or s/NP/Recursive/? Or just something that can solve all recursive problems in finite time? To implement that, all we lack is the ability to store unbounded information in a world of finitely many atoms. Can intelligence do something more than Turing machines? Does that mean the answer is no? Or that we need to connect nerves to the PCI bus?

            • by Draek (916851)

              Exactly. Now, my question is, is an AI required to answer such questions? is an AI even required to be able to add two numbers? Godel's theorem as I understand it essentially states that any system (in this case, our AI), when faced with some questions, will either ignore the answer or give an answer that contradicts itself, but is that a problem?

              As you say, Godel's theorem says more about the world than it does about intelligences per se, and if not being able to prove or disprove a specific statement mean

            • Nice post. Some points:

              [...] To implement that, all we lack is the ability to store unbounded information in a world of finitely many atoms.

              Well, even if we had infinitely many atoms, we would still have problems. The Schwarzschild bound [wikipedia.org] limits the amount of energy that can be stored in a volume of space before creating a black hole. So, unless you create a magical way to store information without wasting energy, there's a limit on the amount of information you can store in a given space. Still, this is only a problem if you don't want to wait forever: if you allow unbounded space, the time of computation will be unbounde

              • by russotto (537200)

                Well, that's still very much an open question, but notice that we (humans) can't seem to think of anything better than Turing machines, at least not yet.

                The Church-Turing thesis states that there isn't any computing device more powerful than a turing machine. It's unproven, however. Also, a google search for "more powerful than a turing machine" brings up a claim that a "real valued neural network" (that is, a neural network where the weights can take on any real-numbered value rather than rational-numb

          • my decisions are based solely on the output of a Random() call in my brain rather than logical thought, no reason why a machine has to be different.

            It's not random, it's just that every ten seconds you get a hard disk and feel like doing an integrity check.

            No reason why a machine has to be different. [zomfg, I just googled "mecha porn". Rule 34 works, biatches].

        • by retchdog (1319261)

          I heard that the Fields medalist Michael Freedman was trying to prove P!=NP, by reducing the problem to a corollary of Godel's result (the idea being to construct a mapping from NP-problems to "true statements", such that P-problems wind up in the subset of "provable statements").

          The notion of using the biggest "negative" result in mathematics to solve one of the most elusive problems is truly inspiring.

          I don't know if it'll work though, and I haven't heard much since a few years ago. I've long since given

        • Penrose, Hofstadter and you all share a basic assumption: that there exists a "real" property that the word "intelligence" denotes. I think that assumption is flawed.

          The alternative view is that "intelligence" is just a term in a cultural classificatory scheme. This implies several things:

          1. There isn't really a set of necessary and sufficient conditions for something to be "intelligent." What the various uses of the word share is a set of family resemblances [wikipedia.org].
          2. The classification is tied to a bunch of c
          • by Thiez (1281866)

            > The derogation of women's and minorities' cognition does continue, however, but expressed in newer terms: instead of saying that women and negroes experience "henids" instead of "thinking in ideas," people today say that women or African-Americans on average have lower IQ than white men, and that IQ is subject to significant inheritance (see Lawrence Summers or The Bell Curve)

            Let's be honest, the IQ thing is true. But the differences between men and women are rather small (3-4 points according to wikip

      • by Chris Burke (6130)

        I honestly think they need to stop teaching the halting problem to freshmen CS majors. They're just too inexperienced to understand that theory and practice are two different things.. so this whole "limits of computation" thing stifles their enthusiasm.

        I suppose. Personally I find it very useful in explaining why, generally speaking, the only way to really know how a program will respond to certain inputs is to test it. Theory and practice are not all that far apart. In theory, the halting problem only s

    • Re: (Score:3, Interesting)

      Of course this applies. No, Gödel's theorem doesn't just apply to computers, but it does apply too!

      And it is actually a very relevant point. Since Gödel proved that a formal system must be either incomplete or inconsistent -- and we do not really know which one applies to mathematics -- it may be that a "central" theorem of mathematics will contradict some other finding, later, which turns out to be equally central.

      Not very likely, but who knows?
      • Re: (Score:2, Informative)

        by ComaVN (325750)

        a formal system must be either incomplete or inconsistent -- and we do not really know which one applies to mathematics --

        Except that we do, and it's incomplete.

        Now, as to which applies to meatbags, my guess is they're both incomplete and inconsistent.

      • Re:godelstheorem? (Score:5, Informative)

        by exp(pi*sqrt(163)) (613870) on Thursday November 06, 2008 @06:27PM (#25667685) Journal

        We already know that a statement like the continuum hypothesis can be proved neither true nor false from ZFC. We don't need Godel's theorem to prove "incompleteness or inconsistency" when we have lots of practical examples of its incompleteness. So your claim of "don't know" is false.

        First order logic is consistent and complete. In fact, Godel proved this. Additionally, consider the set S of all possible propositions of set theory. Somewhere in the power set of S must lie the set of all true propositions. Just take these as your axioms. Voila, a formal system that is both complete and consistent. Your characterization of Godel's theorem is incorrect.

        • Re: (Score:3, Insightful)

          My understanding was that Gödel proved that S could not contain itself, which was the downfall of that very argument. Consider a statement about S: "S contains all possible propositions." That proposition may be true, but it is not possible to prove that proposition within S, so S is incomplete. Q.E.D.

          That is a brief summary but even if I misunderstood that, a proven incompleteness by itself still does not prove consistency. They are not mutually exclusive.
          • Propositions are very simple things. They're just strings of characters. You can easily write a computer program to generate all propositions and there's no problem working with the set of all propositions. Given any set, the set of all subsets exists. This is the Power Set Axiom [wikipedia.org]. This set contains every possible set of propositions, so it certainly contains the set of all true propositions.

            An important part of Godel's incompleteness theorems that is often overlooked is that the set of axioms has to be recu
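              To illustrate the "easily write a computer program to generate all propositions" point: here's a hedged sketch that enumerates every finite string over a fixed alphabet in length-then-lexicographic order. Any proposition, being a finite string of symbols, appears at some finite position in the enumeration.

```python
from itertools import count, product

# Enumerate every finite string over an alphabet in length-lex order.
# Any proposition, being a finite string of symbols, shows up at some
# finite position in this enumeration.
def all_strings(alphabet):
    for length in count(1):
        for chars in product(alphabet, repeat=length):
            yield "".join(chars)

gen = all_strings("01")
print([next(gen) for _ in range(6)])  # ['0', '1', '00', '01', '10', '11']
```

Of course, this enumerates the *set of all propositions*; Gödel's point is about whether the *true* ones can be picked out by a recursive set of axioms, which is a different matter.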

            • That may all be true, but it does not apply to the comment I originally made, which had little to do with the original article (please go back and look). I was replying to someone who was asking why Gödel's theorem should apply to computers, since they were not people. (That did, indeed, appear to be what he was asking.)

              My reply was that it applies to computers too. But of course it is a given that it applies in the same way: it would apply to formal systems that might be concocted by computer, as m
            • The other example I gave was first order logic. The incompleteness theorems don't talk about this either because you can't do arithmetic in first order logic. The theorems only apply to formal systems in which you can do arithmetic. Check out the wording at wikipedia [wikipedia.org].

              Correct me if I'm wrong, but doesn't Godel's second theorem explicitly address Peano arithmetic, which is all first order logic?

              True, you can't address countability (and by extension the halting problem) completely within first

              • by gtall (79522)

                Peano arithmetic does indeed use first order logic. But it adds extra axioms and an axiom scheme (all instances of induction). It's the additional stuff.

                Goedel's proof assumes a system of a certain complexity, usually one that can code arithmetic. One cannot code arithmetic in just first-order logic since the arithmetic axioms and scheme are not provable in first-order logic.

                Gerry

        • I meant "inconsistency and incompleteness are not mutually exclusive".

          On the contrary, they often go hand-in-hand.
        • We already know that a statement like the continuum hypothesis can be proved neither true nor false from ZFC. We don't need Godel's theorem to prove "incompleteness or inconsistency" when we have lots of practical examples of its incompleteness. So your claim of "don't know" is false.

          Well said. As another example, the "C" of ZFC (axiom of choice) is independent of ZF.

          First order logic is consistent and complete. In fact, Godel proved this.

          No, he didn't. This is a common misconception because of the name of his "Completeness Theorem".

          The "Completeness Theorem" simply states that whatever you can prove (syntactically) with first order logic within a system is exactly what you would accept (intuitively) as true. This is important because it guarantees that if you apply first order logic within a set of (true) axioms, you can only reach things you would normally

          • by gtall (79522)

            What you are calling Completeness is Soundness.

            Soundness: If I can prove it, then it is true.

            Completeness: If it is true, then it is provable.

            Gerry

            • What you are calling Completeness is Soundness.

              Not quite. Soundness is the converse of Completeness (in this sense, NOT in the sense of the Incompleteness Theorem):

              • Completeness (in the sense of the Completeness Theorem): if it follows by reasoning (i.e., it is "true"), it is also provable applying first order logic to the axioms.
              • Soundness: if it is provable using first order logic to the axioms, it also follows by reasoning (i.e., it is "true").

              It's important to note that First Order Logic is not an axiomatic system per se, it is only a way to derive

      • by hardburn (141468)

        Since Gödel proved that a formal system must be either incomplete or inconsistent . . .

        There's a third possibility, which is "not powerful enough to be really interesting".

      • Since Gödel proved that a formal system must be either incomplete or inconsistent -- and we do not really know which one applies to mathematics

        That's a strange characterization. Gödel's first theorem demonstrated that any system will have such premises, using logic much like how the diagonal proof demonstrates that the number of irrational numbers is greater than aleph-null infinity. And, just like computers could sit down and work forever on just rational numbers, computers could sit down
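        The diagonal trick mentioned above is short enough to sketch in code (a toy illustration, with sequences modelled as functions from index to bit): given any claimed enumeration of infinite binary sequences, build one that differs from the n-th sequence at position n, so it cannot appear anywhere in the list.

```python
# Cantor's diagonal argument, sketched.  An "infinite binary sequence"
# is modelled as a function from index to bit; an "enumeration" maps
# n to the n-th sequence.  The diagonal sequence differs from the
# n-th sequence at position n, so it is absent from the enumeration.
def diagonal(enumeration):
    return lambda n: 1 - enumeration(n)(n)

# Toy enumeration: the n-th sequence is the constant sequence n % 2.
enum = lambda n: (lambda k: n % 2)
d = diagonal(enum)
print([d(n) for n in range(6)])  # [1, 0, 1, 0, 1, 0]
```

Gödel's proof plays essentially the same self-reference game with provability in place of set membership.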

  • by Hatta (162192) on Thursday November 06, 2008 @05:54PM (#25667187) Journal

    Godel of course proved that you can never have a complete list of all true statements in mathematics. So if they're just limiting it to "central" theorems, how do they decide what's central and what's not?

    • by eldavojohn (898314) * <eldavojohnNO@SPAMgmail.com> on Thursday November 06, 2008 @05:58PM (#25667227) Journal

      Godel of course proved that you can never have a complete list of all true statements in mathematics. So if they're just limiting it to "central" theorems, how do they decide what's central and what's not?

      Just because a list is incomplete doesn't make it any less useful, does it?

      I'm certain Wikipedia, any of Google's search results & my list of liqueurs to pick up on the way home are all incomplete ... although they are all highly useful.

      To answer your question, I would start with all the theorems that are the most employed/referenced by other theorems and work from there. Who knows what might turn up?

    • by jfengel (409917) on Thursday November 06, 2008 @06:03PM (#25667307) Homepage Journal

      TFA also says:

      When it comes to central, well known results, the proofs are especially well checked and errors are eventually found. Nevertheless the history of mathematics has many stories about false results that went undetected for a long time.

      In other words... maybe, just maybe, there's a hole in one of the theorems we use all the time, and an error would have severe ramifications. Or at the very least, there might be an opportunity lurking in an assumption we didn't realize we were making, like the way non-Euclidean geometry was just sitting there waiting to be discovered.

      They're not really "limiting" it to central theorems, so they don't need a definition. But the more important (read, broadly-used) a theorem is, the more interesting it will be to have the proofs double-checked by a computer.

      That computer proof may itself be wrong if the programmers are wrong, but as with the proofs we already have, it's just one more set of "eyes" looking for problems.

    • Re: (Score:2, Interesting)

      > Godel of course proved that you can never have a complete list of all true statements in mathematics

      It doesn't need Godel to figure out that you can't make a list of all possible true statements in mathematics. Given that even a child knows that there are an infinite number of true statements I can only think that you cite Godel in an attempt to give your trivial observation an exaggerated air of authority.

      • You must be trolling. As you well know there are many instances of infinite sets that can be enumerated effectively.

    • Godel of course proved that you can never have a complete list of all true statements in mathematics.

      I didn't know that was what he really proved, but in any case, why do we have to bring up Godel every time it comes to automated proving? Godel surely doesn't forbid us to search for proofs of a given statement, or even search for new theorems, if we can (in some automated fashion) determine if they are interesting or not.

      And how do I enter non-ASCII characters? >-(

    • by melikamp (631205) on Thursday November 06, 2008 @06:08PM (#25667407) Homepage Journal

      Godel of course proved that you can never have a complete list of all true statements in mathematics. So if they're just limiting it to "central" theorems, how do they decide what's central and what's not?

      Godel proved that you can never have a complete and recursive list of axioms for arithmetic. Regardless, "central" in TFA means well-known, which, IMHO, is fair enough. Letting a computer know proofs of all major results seems to be a necessary step on the way to building a silicon mathematician.

    • Re: (Score:3, Interesting)

      by DamnStupidElf (649844)

      It's no worse than standard mathematics done by humans. There's no indication that humans can present a proof of a theorem that says it has no proof (for instance, can you prove "this statement has no proof of its truth" is true or false or neither?), so humans are no more powerful than a formal system in which one can construct Godel's incompleteness proof.

      Specifically, I'm sure that the "central" theorems they're referring to are ones nearest to set theory or some other basic set of axioms. It's much ha

      • I strongly doubt that this is the case; but it would be philosophically fascinating if computerized mathematical proving were worse than the math done by humans. If it could be demonstrated that building a system that replicates what humans can do is impossible, rather than merely difficult, that would strongly suggest that human consciousness is Something Other.

        I have no reason to believe that that is the case, and I am very strongly inclined to doubt it; but were it so, this might be one of the few plac
      • Consider the computerised proof of the 4-Colour theorem.

        It may be that there is a nice, clean analytical proof out there waiting to be discovered, but every attempt to find it so far has failed.

        What was possible was to reduce the problem to one that was solvable by enumerating a finite set of possible graphs.

        TFA (well, one of them) explains the process quite well.

        I still spend the odd insomniac night trying to figure out a purely analytical proof, though - for no other reason than pure stubbornness and Ludd

    • by l2718 (514756) on Thursday November 06, 2008 @06:16PM (#25667527)
      In fact, Godel proved the exact opposite: that you can make a list of all true statements of mathematics. Godel's completeness theorem states that every statement that follows from the axioms is in fact deducible from the axioms in finitely many logical steps. It is thus possible to enumerate all true statements by enumerating all deductions. Godel also has proved an "incompleteness" theorem. That more famous (and less important) result is that there are statements that are true in specific models yet not provable from the axioms. It implies that there is no algorithm to decide whether a given statement is true -- but this has nothing to do with enumerating all true statements. (Yes, I am a mathematician)
      • Um, no, no, NO! (Score:5, Informative)

        by Estanislao Martínez (203477) on Thursday November 06, 2008 @06:34PM (#25667787) Homepage

        Godel's completeness theorem states that every statement that follows from the axioms is in fact deducible from the axioms in finitely many logical steps.

        That is Gödel's Completeness Theorem, which applies to first-order logic--predicates, individuals, quantifiers ("for all," "exists"), and truth connectives (not, or, and).

        Godel also has proved an "incompleteness" theorem. That more famous (and less important) result is that there are statements that are true in specific models yet not provable from the axioms.

        The incompleteness theorem is about arithmetic: natural numbers defined in terms of 0 and the successor relation, addition and multiplication. For any set of axioms you pick for arithmetic, there are true statements of arithmetic that cannot be proven from those axioms.

        It implies that there is no algorithm to decide whether a given statement is true -- but this has nothing to do with enumerating all true statements.

        I'm not completely sure of how relevant the incompleteness of arithmetic is for what you're saying, but I am sure of this: first-order logic is complete, but the validity of a statement in first-order logic is undecidable. Therefore, you don't need to bring in Gödel's incompleteness theorem for arithmetic to conclude undecidability in many important cases.

        • For any set of axioms you pick for arithmetic, there are true statements of arithmetic that cannot be proven from those axioms.

          Or false statements that can be proven. Consider the set of all well-formed formulas.

          The power of Gödel's work is that he also provides a method (Gödel numbering) to convert theorems in other domains to integer arithmetic. This is how he screwed with Russell and Whitehead.

          Not that I think an automatic theorem prover is useless, just saying.
        • by khallow (566160)

          The incompleteness theorem is about arithmetic: natural numbers defined in terms of 0 and the successor relation, addition and multiplication. For any set of axioms you pick for arithmetic, there are true statements of arithmetic that cannot be proven from those axioms.

          No, it's not. It's providing a counterexample to a particular claim, that happens to be in the part of math that you term "arithmetic". In particular, there's no reason to expect that this incompleteness property is restricted to axiom systems derived from arithmetic.

      • by melikamp (631205)

        Yes and yes, except that your first sentence is a bit unclear. "That you can make a list of all true statements of mathematics" is just saying that the set is countable.

      • It is thus possible to enumerate all true statements by enumerating all deductions.

        It implies that there is no algorithm to decide whether a given statement is true -- but this has nothing to do with enumerating all true statements.

        I'm confused. Please clarify.

        If it's possible to enumerate all valid proofs I propose the following proving algorithm: Run through all valid proofs; once you get to a proof whose conclusion is the theorem you want to prove, return that proof.

        [if you don't know whether your theorem is true or not, run the above algorithm on its negation as well].

        What's wrong with my algorithm?

        (Yes, I am a mathematician)

        FWIW, at my university, computer scientists are presented with Gödel's theorem before the mathematicians are. Yes, I am a compu

        • Re: (Score:2, Insightful)

          by lexDysic (542023)

          If it's possible to enumerate all valid proofs I propose the following proving algorithm: Run through all valid proofs; once you get to a proof whose conclusion is the theorem you want to prove, return that proof.

          [if you don't know whether your theorem is true or not, run the above algorithm on its negation as well].

          What's wrong with my algorithm?

          The problem is that some propositions P have the following two properties:

          1: P has no proof
          2: (not P) has no proof.

          So your algorithm searches forever and you don't know if it just hasn't found anything yet, or if there is nothing to be found.
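          The situation is easy to demonstrate with a toy formal system. Below is a hedged sketch (a simple string-rewriting system, nothing like real arithmetic): the searcher halts with a derivation whenever one exists, but on an underivable string it just keeps searching, and no output ever tells you which case you're in. The `fuel` cutoff exists only so the demo terminates.

```python
from collections import deque

# A toy formal system: axiom "1"; rules: append "0", or double the
# string.  search() enumerates all derivations breadth-first.  If
# `target` is derivable it returns a derivation; if not, it would
# search forever -- the `fuel` cutoff is only so the demo terminates.
def search(target, fuel=100000):
    queue = deque([("1", ["1"])])
    seen = {"1"}
    while queue and fuel > 0:
        fuel -= 1
        s, deriv = queue.popleft()
        if s == target:
            return deriv
        if len(s) > len(target):
            continue  # prune: both rules only grow strings
        for nxt in (s + "0", s + s):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, deriv + [nxt]))
    return None  # gave up -- proves nothing either way

print(search("1010"))  # → ['1', '10', '1010']
print(search("0110"))  # → None (every derivable string starts with '1')
```

In a system with Gödel-style independent statements there is no such length-based pruning to fall back on, which is exactly why the blind search never settles the question.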

        • If it's possible to enumerate all valid proofs I propose the following proving algorithm: Run through all valid proofs; once you get to a proof whose conclusion is the theorem you want to prove, return that proof. [if you don't know whether your theorem is true or not, run the above algorithm on its negation as well].

          That needs a couple of small amendments:

          1. You shouldn't run the algorithm on the statement and its negation in sequence, because the basic algorithm will never terminate for an unprovable stat
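          The usual fix is to dovetail: alternate one step of the search for a proof of P with one step of the search for a proof of not-P, so neither search can starve the other. A hedged generator-based sketch (the `finds_proof_at` / `never_finds` stand-ins are illustrative, not real proof enumerators):

```python
# Dovetail two possibly-infinite searches: alternate one step of each.
# If either search ever succeeds, we halt; if neither would (the Gödel
# case: P independent of the axioms), we loop forever.
def dovetail(search_p, search_not_p):
    pending = [("P", iter(search_p)), ("not P", iter(search_not_p))]
    while pending:
        label, it = pending.pop(0)
        try:
            result = next(it)
        except StopIteration:
            continue  # that search gave up (toy searches only)
        if result is not None:
            return (label, result)
        pending.append((label, it))  # take its next step later
    return None

# Toy stand-ins: one search succeeds at step 5, the other never would.
def finds_proof_at(n):
    for _ in range(n):
        yield None  # "still searching"
    yield f"proof found at step {n}"

def never_finds():  # an infinite, fruitless search
    while True:
        yield None

print(dovetail(never_finds(), finds_proof_at(5)))
# → ('not P', 'proof found at step 5')
```

With sequential search, the fruitless search for a proof of P would have blocked the successful search for not-P forever; interleaving sidesteps that, though not the independent-statement case.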
    • by sustik (90111) on Thursday November 06, 2008 @07:38PM (#25668773)

      > Godel of course proved that you can never have a complete list of all true statements in mathematics.

      Bzzzt. Wrong.

      What Godel proved was that all mathematical truths cannot be described by a finite axiom system. The statement that there is a single empty set cannot be proved. Similarly continuum = aleph_1 cannot be proved; in both cases one may argue that these are so "obvious" that they need to be accepted as axioms.

      Note that continuum = aleph_1 was as strongly accepted a conjecture as the Riemann hypothesis is today.

      I studied set theory and models. It is fascinating that you can have a countable model of set theory! This model is basically built from a countable set, with countably many subsets accepted as sets, etc. The rest of the subsets an "outsider" can see are not known inside the model. We denote this model (which is a countable set for the outsider, aka the meta theory) by M. The natural numbers are in M; denote this set by A. A is known in the model, and A is an infinite (countable) set.

      The set of all subsets of A, denoted P(A), is different depending on whether you look inside or outside the model! Outside it is an uncountable set, of course. In the model world it is Pm(A), and it appears uncountable again, but for the outside observer it is actually the intersection of P(A) and M. Bear with me another minute!

      In the model world of M, Pm(A) appears not countable. What does that mean? It means that there is no 1-1 mapping between A and Pm(A). Outside, there is such a mapping of course, since both sets are countably infinite. This mapping (a function) is actually a set (everything is a set: functions, relations, etc.), so a mapping denoted by F between A and Pm(A) exists outside. However, F is not an element of M, so it does not exist inside the model.

      There is a way to extend the model M into something called M(F) by 'forcing' F to be in M. Basically we add F and everything else that needs to be added to still have a valid theory inside M. It is not trivial to do this; it was discovered and proved by P. Cohen, an analyst (not a set theorist). Set theorists have of course run with it ever since. Careful choice of F allows different M(F)s to be created, and that leads to results showing that continuum = aleph_1 is not the only possibility. One can 'force' a bunch of other alephs under the continuum. (Exactly which aleph the continuum can be is also interesting, to set theorists at least.)

      Now there is one model called L, which consists of the 'constructible' sets and nothing else. In L, continuum = aleph_1. There are in fact theorems based on the assumption that V = L, that is, that V, the universe, consists of only constructible sets.

      You may wonder how this L can be defined. I cannot go into the details, but one eye-opening fact is that you can define Lm inside the above model M!!! This Lm is a countable model of L and, in some sense, a minimal model of set theory, if I recall correctly.

      So one may take the position that we want mathematical truths in L, the minimal system. (This resonates with Occam's razor.) Of course, note that there is no axiomatic system describing L.

      I want to emphasize that if we fix a model then each statement has a truth value; but there exist truths which cannot be proved working inside the model only.

      For the usual applied math we could say we assume V = L. (Or any other well defined model, for that matter.) Note again that aleph_1 == continuum in L, because there are no other alephs that could fit "between". The mathematicians inside L see it like this: we cannot prove that aleph_1 == continuum; in fact there could be something in between, but it is not constructible from what we have.

      I hope this helped.

      • by mesterha (110796)

        What Goedel proved is that a sufficiently complex formal system cannot be both consistent and complete when using a recursive set of axioms.

        In other words, for any useful set of mathematical axioms, there will always be legal statements in the language that are undecided by the axioms. The statements can be assumed either true or false and the system will still be consistent.

        That's the message to take from Goedel's Incompleteness Theorem. Mathematics can never be complete because there is always a new statement left undecided by the axioms.

  • by lazynomer (1375283)
    Do you really need supercomputers in all practical cases?
    • by fractic (1178341) on Thursday November 06, 2008 @06:15PM (#25667509)

      No, in general formalizing a proof does not require a lot of computer power. Usually it takes a lot of manpower instead. Most current proof assistants are actually proof verifiers. The user has to break down the proof into elementary steps that the program can verify.

      Now sometimes a computer is used to verify a lot of cases by brute force computation. Often this is not done in an actual proof assistant but within a computer algebra package. But this is also met with a sceptical eye.

      The four colour theorem is perhaps one of the most famous formalized theorems. It was originally proven by a large computation, but no one could really verify this. Recently (2004) it was formalized completely in the proof assistant Coq (actually a custom extension). This formalized proof did require a lot of computer time, although just on a normal computer, not a supercomputer.
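      The kind of finite, brute-force case check described above can be sketched in a few lines. This is an illustrative stand-in, not the four colour computation: exhaustively verifying Lagrange's four-square theorem (every natural number is a sum of four squares) for all cases below a small bound.

      ```python
      import math
      from itertools import product

      def is_sum_of_four_squares(n):
          # Exhaustively try every combination of four squares up to sqrt(n).
          limit = math.isqrt(n)
          return any(a*a + b*b + c*c + d*d == n
                     for a, b, c, d in product(range(limit + 1), repeat=4))

      # Verify the finitely many cases below the bound by brute force.
      # (The theorem itself covers all n; the machine only checks this range.)
      assert all(is_sum_of_four_squares(n) for n in range(200))
      print("all cases below 200 verified")
      ```

      The check settles only the finitely many cases it enumerates, which is why such computations are paired with a human (or formalized) argument reducing the full theorem to a finite case list.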

  • Which central theorems aren't already proven? Principia Mathematica did the basics and I always assumed more advanced theorems became proven as required.

    Maybe someone just wants to do Principia Mathematica volumes II through L but doesn't that already in effect exist?

    • by bcrowell (177657) on Thursday November 06, 2008 @07:17PM (#25668427) Homepage

      Which central theorems aren't already proven? Principia Mathematica did the basics and I always assumed more advanced theorems became proven as required. Maybe someone just wants to do Principia Mathematica volumes II through L but doesn't that already in effect exist?

      Well, one issue is that the Principia Mathematica undoubtedly has errors in it. No human being could ever write a book of that length without making a single mistake. So there could be some value in producing the same results on a computer, which doesn't make the same kind of mistakes a human does. Another issue is that the Principia Mathematica works within a certain axiomatic framework, and I believe that framework is different from the most popular axiomatic framework used today, which is Zermelo-Fraenkel set theory, with the axiom of choice (ZFC). (They were both produced around 1910, but the WP article says that PM used a complicated system of types, which is probably different from anything in ZFC.)

      More broadly, for people who are interested in the foundations of mathematics, there's no clear way to decide that one set of axioms is superior to another. Some people feel that ZFC is too strong, because it asserts the existence of things that can't be explicitly constructed. Those people may be interested in seeing how much of mathematics can be proved using purely constructive methods. With a weaker set of axioms, some results may be impossible to prove, while others may be possible to prove, but the proofs may be extremely long compared to the proofs in ZFC -- so long that using a computer may be a real help.

      Still other people are interested in seeing what can be done in an axiomatic system that's stronger than ZFC. For instance, such a system may have sizes of infinities that are larger than the sizes of the infinities in ZFC, and they may even have creatures like the set of all sets. One thing that tends to happen when you try to make stronger axiomatic systems is that it becomes much more difficult to avoid internal inconsistencies. I can imagine that a computer could be helpful in finding things like that. If you're fiddling around with the computer proof system and it comes up with a result that 1+1=3, then you've learned something interesting: your set of axioms is bogus.

      One thing you really have to be careful about if you're working in this kind of field is that you don't want to inadvertently assume some result that someone else has proved in some other axiomatic system, and that seems obvious to you, but that actually isn't provable (or hasn't yet been formally proved) in the system you're working in. That's the kind of thing where I'd imagine a computer system would be really helpful. It wouldn't allow you to make those assumptions. In general, if you look at almost all published mathematical work, it never states which axiomatic system it's assuming, and that's because in most fields of mathematics we expect that the results are unlikely to depend on which particular foundation you're working in. E.g., I doubt that Wiles' proof of Fermat's last theorem even bothers to state that it starts from ZFC, and most mathematicians probably have a strong expectation that it doesn't matter whether you pick some other foundation; Wiles' proof will still be valid. Nevertheless, it's possible that that's not true. Abraham Robinson, for example, claimed that ZFC had been carefully engineered to make the real number system work correctly, and therefore claimed that if you wanted to use a different system of numbers (such as the hyperreals, a system that includes Newton- and Leibniz-style infinities), you might be better off using different axioms.

      • Metamath [metamath.org] is doing pretty much this, for ZFC. It's an interesting project. All 193 propositional calculus theorems [metamath.org] from the Principia Mathematica have been proven, and more than 8000 theorems have been proven in total.
  • by Manchot (847225) on Thursday November 06, 2008 @06:34PM (#25667783)
    There's a website called Metamath [metamath.org] which does a lot of what the article is talking about. It starts from the ZFC axioms and works its way up to thousands of elementary theorems, all proved completely formally. Pretty cool, if you're in to that sort of thing.
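    For concreteness, a fully formal propositional theorem of the sort these projects derive looks like this in a modern proof assistant (a Lean 4 sketch, not Metamath's own notation; the theorem name is made up):

    ```lean
    -- A Principia-style propositional theorem, checked step by step
    -- by the proof assistant's kernel: from a contradiction, anything follows.
    theorem ex_falso_example (p q : Prop) : ¬p → p → q :=
      fun hnp hp => absurd hp hnp
    ```

    Every inference here is reduced to applications of primitive rules that the kernel verifies mechanically, which is what "proved completely formally" means in practice.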
  • by starseeker (141897) on Thursday November 06, 2008 @07:03PM (#25668227) Homepage

    CAS, or Computer Algebra Systems, represent one of the most useful tools for handling practical application of complex mathematics. The package Feyncalc, built on Mathematica and used for High Energy Physics, is one such example. The problem with using these systems is that they can't be trusted by the people using them. There is always that risk of "did the programmer do something that buggered this case" or "did they get that formula wrong when they translated it to code?" The traditional answer is a combination of by-hand verification and learning to "trust" certain abilities of some systems over time - lots and lots of use creates a certain confidence that the bugs have been shaken out of well-used functionality. But that lingering doubt remains - "is it REALLY right?"

    There are philosophical questions at the root of this issue. On the most abstract level is the question of whether knowing something is true is important compared to knowing WHY it is true - purists might argue if we don't know the latter the problem isn't answered. I'll state up front that for the problem I'm interested in solving - KNOWING my answer to a problem is correct - I'm willing to trust a machine that is proven by humans to be able to verify proofs as correct to verify MY solution to a particular problem. Or, put another way, that once the correctness of the checker is proven all statements of correctness from that checker are also proven. This is not a universal assumption, but if one is willing to make it things get interesting.

    Most proofs mathematicians attempt are not the types of problems posed by high energy physics - proving the solution to a specific integral is the correct solution would be of minimal value as a mathematical revelation. For the practical focus of scientific use of CAS however, the question of whether the system just gave the correct solution to that integral is all important. Moreover, the mathematical axioms behind the statement of correctness are not of immediate concern - they interest mathematicians, but the physicists want to USE that result. There's a contradiction here, because to be trustworthy in a mathematical sense the foundational system within which that integral was evaluated and what assumptions were made in the process MUST be considered. What to do?

    Trustworthy computer verification of proofs offers an answer to this dilemma. A CAS designed to incorporate proof logic awareness into its core system and algorithms could be asked to produce a "proof" of its answer based on the steps it used to create that answer. This proof can then be subjected, just like any human written proof, to the rigors of verification. A human COULD (in theory) do the checking if they wanted to. In practice, an incorporated proof verifier could examine the CAS's proof for flaws. If none were found, for the specific problem in question, the user could then know that the answer they have IS correct, regardless of any potential flaws in the CAS (which will hopefully be reduced dramatically by the design rigors needed to implement routines that can support supplying the proof logic in the first place). The trust tree has been reduced to the proof verification routine and the software and hardware needed to run it, which is a MUCH smaller trust tree than the whole of the CAS.
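    The "small trust tree" idea in the previous paragraph can be sketched in miniature (hypothetical names, not a real CAS): a large untrusted routine returns an answer together with a certificate, and only a tiny independent verifier needs to be trusted.

    ```python
    def untrusted_factor(n):
        # Stand-in for a big, possibly buggy CAS routine: trial division.
        # We do NOT need to trust this code, only the checker below.
        factors, d = [], 2
        while d * d <= n:
            while n % d == 0:
                factors.append(d)
                n //= d
            d += 1
        if n > 1:
            factors.append(n)
        return factors

    def verify(n, factors):
        # The trusted checker: these few lines are the entire trust tree.
        # It accepts only a list of primes whose product is exactly n.
        prod = 1
        for p in factors:
            if p < 2 or any(p % d == 0 for d in range(2, int(p**0.5) + 1)):
                return False  # certificate contains a non-prime
            prod *= p
        return prod == n

    cert = untrusted_factor(1746)  # 1746 = 2 * 3 * 3 * 97
    print(cert, verify(1746, cert))
    ```

    Even if the "CAS" routine were rewritten or buggy, a passing certificate still guarantees the answer, which is exactly the reduction of the trust tree described above.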

    The practical realization of such a system is probably decades in the future. It could be done only as an open source system, where the entire mathematical and scientific community contributed to an ever expanding body of trusted knowledge which could be built upon. Many extremely difficult problems would need to be solved - an immediate problem is how to organize mathematics into a coherent framework in a scalable way via computer. Category theory is probably the answer, but what does that mean in terms of system implementation? What about the programming language that must be put behind the mathematical definitions, even if the conceptual framework of category theory in the computer is addressed?

    • Hence why theorem-proving programs should themselves be proved correct.

      Also, the program should catch OS inconsistencies and directly alert the user of the CAS to them, as those could create errors as well.

  • It is nothing but BS that the computer is better than a human. Not to mention it is profound stupidity to think that the computer is infallible; after all, it is programmed by humans.

    I read the first bit of Freek Wiedijk's article and it's political nonsense. All they've done is use the word "formalization" to describe the process of encoding mathematics "in the computer". What this does is hijack the common notion of formalization for their own purposes. Because, formalization is good, right? Well, not if you read and really understand what it does, and infinitely more important, does NOT mean in this context.

    • by Raenex (947668)

      It is nothing but BS that the computer is better than human.

      It's certainly better at performing repetitive tasks reliably.

      Not to mention profound stupidity to think that the computer is infallible i.e. it is programmed by humans.

      Where was that claimed in the article?

      Because, formalization is good, right? Well, not if you read and really understand what it does, and infinitely more important, does NOT mean in this context.

      What do you think formalization means? How is computer formalization different?

      Point of fact, using computers does NOT take human minds out of the equation. It just hides them, giving people the illusion of a more effective system.

      Computer formalization is just a tool, like a calculator. It isn't meant to replace humans, but instead be used by them.

    • by Xest (935314)

      Ask a computer to do something complex correctly once and it'll do it right every time.

      Ask a human to do something complex correctly once and they will regularly screw it up.

      You're right that computers aren't infallible, and yes they are programmed by humans, but that ignores the thing computers are better at than humans - repetitive tasks. These are often key to proving something. The point is that to program something you often only have to get it right once; then the computer will get it right as many times as you need.

  • Thomas Hales starts his essay by discussing the impact of software bugs on society, claiming:

    On average, a programmer introduces 1.5 bugs per line while typing.

    That's quite wrong; a better estimate is that, on average, a programmer introduces 1.5 to 5 bugs per 100 lines while typing.
    A bug in a paper about bugs... quite fitting.
