
Achieving Mathematical Proofs Via Computers

eldavojohn writes "A special issue of Notices of the American Mathematical Society (AMS) provides four beautiful articles illustrating formal proof by computation. PhysOrg has a simpler article on these assistant mathematical computer programs and states 'One long-term dream is to have formal proofs of all of the central theorems in mathematics. Thomas Hales, one of the authors writing in the Notices, says that such a collection of proofs would be akin to the sequencing of the mathematical genome.' You may recall a similar quest we discussed."
  • by Hatta ( 162192 ) on Thursday November 06, 2008 @05:54PM (#25667187) Journal

    Gödel of course proved that you can never have a complete list of all true statements in mathematics. So if they're just limiting it to "central" theorems, how do they decide what's central and what's not?

  • by eldavojohn ( 898314 ) * <eldavojohn@gma[ ]com ['il.' in gap]> on Thursday November 06, 2008 @05:58PM (#25667227) Journal

    Gödel of course proved that you can never have a complete list of all true statements in mathematics. So if they're just limiting it to "central" theorems, how do they decide what's central and what's not?

    Just because a list is incomplete doesn't make it any less useful, does it?

    I'm certain Wikipedia, any of Google's search results & my list of liqueurs to pick up on the way home are all incomplete ... although they are all highly useful.

    To answer your question, I would start with all the theorems that are the most employed/referenced by other theorems and work from there. Who knows what might turn up?

  • by jfengel ( 409917 ) on Thursday November 06, 2008 @06:03PM (#25667307) Homepage Journal

    TFA also says:

    When it comes to central, well known results, the proofs are especially well checked and errors are eventually found. Nevertheless the history of mathematics has many stories about false results that went undetected for a long time.

    In other words... maybe, just maybe, there's a hole in one of the theorems we use all the time, and an error would have severe ramifications. Or at the very least, there might be an opportunity lurking in an assumption we didn't realize we were making, like the way non-Euclidean geometry was just sitting there waiting to be discovered.

    They're not really "limiting" it to central theorems, so they don't need a definition. But the more important (read, broadly-used) a theorem is, the more interesting it will be to have the proofs double-checked by a computer.

    That computer proof may itself be wrong if the programmers are wrong, but as with the proofs we already have, it's just one more set of "eyes" looking for problems.

  • by lazynomer ( 1375283 ) on Thursday November 06, 2008 @06:03PM (#25667309)
    Do you really need supercomputers in all practical cases?
  • by exp(pi*sqrt(163)) ( 613870 ) on Thursday November 06, 2008 @06:05PM (#25667347) Journal
    > Gödel of course proved that you can never have a complete list of all true statements in mathematics

    You don't need Gödel to figure out that you can't make a list of all possible true statements in mathematics. Given that even a child knows there are infinitely many true statements, I can only think that you cite Gödel in an attempt to give your trivial observation an exaggerated air of authority.

  • Re:godelstheorem? (Score:3, Interesting)

    by Jane Q. Public ( 1010737 ) on Thursday November 06, 2008 @06:08PM (#25667399)
    Of course this applies. No, Gödel's theorem doesn't just apply to computers, but it does apply too!

    And it is actually a very relevant point. Since Gödel proved that a sufficiently powerful formal system must be either incomplete or inconsistent -- and we do not really know which one applies to mathematics -- it may be that a "central" theorem of mathematics will contradict some other finding, later, which turns out to be equally central.

    Not very likely, but who knows?
  • by DamnStupidElf ( 649844 ) <Fingolfin@linuxmail.org> on Thursday November 06, 2008 @06:11PM (#25667451)

    It's no worse than standard mathematics done by humans. There's no indication that humans can present a proof of a theorem that says it has no proof (for instance, can you prove "this statement has no proof of its truth" is true or false or neither?), so humans are no more powerful than a formal system in which one can construct Gödel's incompleteness proof.

    Specifically, I'm sure that the "central" theorems they're referring to are ones nearest to set theory or some other basic set of axioms. It's much harder to prove Fermat's last theorem or the Poincaré Conjecture with complete formalism than it is to prove simple theorems of arithmetic. Once the central theorems are proven, it is simpler to tackle more complex theorems using the already proven theorems as lemmas.
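    The "use proven theorems as lemmas" workflow is, loosely, how modern proof assistants operate in practice. A minimal Lean 4 sketch (the theorem name `swap_sum` is made up for illustration; `Nat.add_comm` is a real library lemma):

```lean
-- Once a central fact (here Nat.add_comm, commutativity of addition on
-- the naturals, already proven in Lean's library) is formalized, later
-- proofs invoke it as a lemma instead of re-deriving it from the axioms.
theorem swap_sum (a b c : Nat) : (a + b) + c = (b + a) + c := by
  rw [Nat.add_comm a b]
```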

  • by starseeker ( 141897 ) on Thursday November 06, 2008 @07:03PM (#25668227) Homepage

    CAS, or Computer Algebra Systems, represent one of the most useful tools for handling practical applications of complex mathematics. The package Feyncalc, built on Mathematica and used for High Energy Physics, is one such example. The problem with using these systems is that they can't be fully trusted by the people using them. There is always that risk of "did the programmer do something that buggered this case" or "did they get that formula wrong when they translated it to code?" The traditional answer is a combination of by-hand verification and learning to "trust" certain abilities of some systems over time - lots and lots of use creates a certain confidence that the bugs have been shaken out of well-used functionality. But that lingering doubt remains - "is it REALLY right?"

    There are philosophical questions at the root of this issue. On the most abstract level is the question of whether knowing something is true is important compared to knowing WHY it is true - purists might argue if we don't know the latter the problem isn't answered. I'll state up front that for the problem I'm interested in solving - KNOWING my answer to a problem is correct - I'm willing to trust a machine that is proven by humans to be able to verify proofs as correct to verify MY solution to a particular problem. Or, put another way, that once the correctness of the checker is proven all statements of correctness from that checker are also proven. This is not a universal assumption, but if one is willing to make it things get interesting.

    Most proofs mathematicians attempt are not the types of problems posed by high energy physics - proving the solution to a specific integral is the correct solution would be of minimal value as a mathematical revelation. For the practical focus of scientific use of CAS however, the question of whether the system just gave the correct solution to that integral is all important. Moreover, the mathematical axioms behind the statement of correctness are not of immediate concern - they interest mathematicians, but the physicists want to USE that result. There's a contradiction here, because to be trustworthy in a mathematical sense the foundational system within which that integral was evaluated and what assumptions were made in the process MUST be considered. What to do?

    Trustworthy computer verification of proofs offers an answer to this dilemma. A CAS designed to incorporate proof logic awareness into its core system and algorithms could be asked to produce a "proof" of its answer based on the steps it used to create that answer. This proof can then be subjected, just like any human written proof, to the rigors of verification. A human COULD (in theory) do the checking if they wanted to. In practice, an incorporated proof verifier could examine the CAS's proof for flaws. If none were found, for the specific problem in question, the user could then know that the answer they have IS correct, regardless of any potential flaws in the CAS (which will hopefully be reduced dramatically by the design rigors needed to implement routines that can support supplying the proof logic in the first place). The trust tree has been reduced to the proof verification routine and the software and hardware needed to run it, which is a MUCH smaller trust tree than the whole of the CAS.
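    The "small trust tree" argument can be sketched concretely. In the toy Python below (the function names are my own invention, not anything from TFA), an untrusted solver factors an integer and a tiny, hand-checkable verifier confirms the certificate; a bug in the solver can yield a wrong answer, but never a wrongly *accepted* one:

```python
# Sketch of reducing the trust tree: only the checker must be correct.
# The solver plays the role of the huge, untrusted CAS.

def untrusted_factor(n):
    """Untrusted 'CAS': trial division. Bugs here don't matter
    as long as check_factorization below is correct."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

def is_prime(p):
    """Part of the trusted checker: a few lines verifiable by hand."""
    if p < 2:
        return False
    d = 2
    while d * d <= p:
        if p % d == 0:
            return False
        d += 1
    return True

def check_factorization(n, factors):
    """Trusted checker: accepts only a genuine prime factorization of n."""
    prod = 1
    for f in factors:
        prod *= f
    return prod == n and all(is_prime(f) for f in factors)

print(check_factorization(360, untrusted_factor(360)))  # True
```

    Whether the answer came from a buggy CAS or a correct one, the user's confidence rests only on the few lines of the checker.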

    The practical realization of such a system is probably decades in the future. It could be done only as an open source system, where the entire mathematical and scientific community contributed to an ever expanding body of trusted knowledge which could be built upon. Many extremely difficult problems would need to be solved - an immediate problem is how to organize mathematics into a coherent framework in a scalable way via computer. Category theory is probably the answer, but what does that mean in terms of system implementation? And what about the programming language that must be put behind the mathematical definitions, even if the conceptual framework of expressing category theory in a computer is addressed?

  • Re:godelstheorem? (Score:2, Interesting)

    by z-j-y ( 1056250 ) on Thursday November 06, 2008 @07:33PM (#25668681)

    Can the human brain be simulated by a Turing machine? It's plausible that the human brain obeys physics - but is physics Turing computable?

    Of course we can create physical machines that are the same as human brains - millions of them every day :)

  • Re:godelstheorem? (Score:1, Interesting)

    by Anonymous Coward on Friday November 07, 2008 @12:00AM (#25671435)

    Let's take a step back and not worry about abstract reasoning for a moment. Here's a concrete example with a very simple function. You really should try to solve this before talking about using "clever checking", etc.

    # Python: integers are arbitrary precision, so the "no overflow"
    # assumption of the original pseudocode actually holds here
    def collatz_steps(n):
        steps = 0
        while n > 1:
            if n % 2 == 1:     # odd: 3n+1 (always even), then halve below
                n = 3 * n + 1
            n //= 2            # shortcut step: halve on every iteration
            steps += 1
        return steps           # e.g. collatz_steps(6) == 6

    (Hint: Nobody knows if it always halts or not. My apologies if you get hooked on this problem too...)

  • Re:godelstheorem? (Score:4, Interesting)

    by Eli Gottlieb ( 917758 ) <[moc.liamg] [ta] [beilttogile]> on Friday November 07, 2008 @12:07AM (#25671503) Homepage Journal

    One might argue that the brain is merely a biological computer not fully understood.

    Or, perhaps, humans could be nondeterministic and therefore not computers at all.

  • Re:godelstheorem? (Score:2, Interesting)

    by InfiniteLoopCounter ( 1355173 ) on Friday November 07, 2008 @12:13AM (#25671555)

    Again, restrict the problem space to what is feasible. This pathological example is one of infinite complexity that is impossible in practice (a machine int cannot be substituted for an infinite-precision integer). Either there will be overflow, or after a finite number of steps n will repeat an earlier state (there are only a finite number of possible values for n). So this program will conditionally halt for different values of n (of which there is a finite number).

    I was talking about applying theory to something that is possible. Obviously you can set up a simple system which is impossible to execute and generate an infinite space of inputs and outputs. This is plain cheating in reality, often the mistaken realm of application of the halting problem.
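    The parent's pigeonhole argument can be made concrete: with fixed-width integers the loop has finitely many states, so every run must either reach 1 or revisit an earlier state. A sketch assuming 16-bit wraparound arithmetic (my choice of width, purely for illustration):

```python
# With a finite state space, the Collatz loop below is decidable:
# track visited values and detect when a state repeats.
MASK = 0xFFFF  # pretend 16-bit unsigned wraparound arithmetic

def step(n):
    if n % 2 == 1:
        n = (3 * n + 1) & MASK  # overflow wraps, as in a real fixed-width int
    return (n // 2) & MASK

def halts_or_cycles(n):
    seen = set()
    while n > 1:
        if n in seen:
            return "cycle"   # revisited a state: guaranteed infinite loop
        seen.add(n)
        n = step(n)
    return "halts"
```

    By the pigeonhole principle the `seen` set can grow at most 2^16 entries before a repeat, so this checker always terminates; it is only the *unbounded* version that nobody can decide.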

  • Re:godelstheorem? (Score:1, Interesting)

    by Anonymous Coward on Friday November 07, 2008 @01:00AM (#25671947)

    FYI: The function above can be easily expressed as a 3-symbol, 6-state Turing machine where only 3 of those states do the main algorithm and one of the states is just to catch the halting condition.

    Turing machines that aren't allowed to have infinite tapes are called finite state machines. FSMs are worthlessly boring. Turing machines are interesting because of the possibility of infinite state.

    As for your point about infinite-precision integers: yeah sure, they can be implemented as strings, and we have TB hard drives now! But have you bothered to consider how large 2^(1,000,000,000,000) is? (Hint: there are only ~2^265 atoms in the known universe.)
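    A quick back-of-envelope check on that magnitude (the arithmetic below is just logarithms, nothing from the thread):

```python
import math

# How big is 2**(1,000,000,000,000)?
bits = 10**12                        # 2**bits needs this many binary digits
digits = bits * math.log10(2)        # decimal digits: ~3.01e11
print(int(digits))                   # about 301 billion decimal digits
print(bits // 8 // 10**9)            # ~125 GB just to store it in raw binary
```

    So a single value near the top of that range already eats a good chunk of a terabyte drive, before the algorithm touches a second number.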

  • Re:godelstheorem? (Score:3, Interesting)

    by HungryHobo ( 1314109 ) on Friday November 07, 2008 @05:28AM (#25673107)

    Well this is a decent point. It's impossible to create a truly random number generator in software, but you can create one in hardware which, for example, measures the decay of radioactive particles or some such.

    But in that case it still doesn't mean AI is impossible; something is still artificial even if some of its capabilities have to be implemented in hardware rather than software.

  • Re:godelstheorem? (Score:2, Interesting)

    by Corporate Troll ( 537873 ) on Friday November 07, 2008 @07:35AM (#25673835) Homepage Journal

    While computers are nothing more than finite state machines (with an insane number of states), their theoretical base can be found in Turing Machines and.... those come in a non-deterministic flavour [wikipedia.org]. Note the "Equivalence with DTMs" section.

    So, if the human brain works non-deterministically, it can be simulated by a deterministic "brain". If this doesn't require an infinite number of states (considering what we know of the universe, infinity most likely won't be the case), it can in turn be simulated by a finite-state machine with an insane number of states. That is exactly what a computer is, as I said before.

    Throwing around "non-deterministic" won't save you. It might be something else that sets apart the human brain, but it won't be its non-deterministic nature.
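    The finite analogue of "a DTM can simulate an NTM" is the textbook subset construction: a deterministic simulator just tracks the *set* of states the nondeterministic machine could currently be in. A minimal Python sketch (the machine itself is invented for illustration; it accepts strings ending in "ab"):

```python
# Deterministic simulation of a nondeterministic finite automaton:
# instead of guessing, carry the whole set of reachable states forward.
delta = {                       # transitions: (state, symbol) -> set of states
    ("q0", "a"): {"q0", "q1"},  # nondeterministic choice on 'a'
    ("q0", "b"): {"q0"},
    ("q1", "b"): {"q2"},
}

def nfa_accepts(s, start="q0", accept="q2"):
    current = {start}
    for ch in s:                # one deterministic step per input symbol
        current = set().union(*(delta.get((q, ch), set()) for q in current))
    return accept in current

print(nfa_accepts("aab"))   # True  (ends in "ab")
print(nfa_accepts("aba"))   # False
```

    The cost is exponential in the worst case (up to 2^n subsets for n states), but nothing non-deterministic survives in the simulator, which is the parent's point.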
