## Achieving Mathematical Proofs Via Computers

eldavojohn writes

*"A special issue of Notices of the American Mathematical Society (AMS) provides four beautiful articles illustrating formal proof by computation. PhysOrg has a simpler article on these assistant mathematical computer programs and states 'One long-term dream is to have formal proofs of all of the central theorems in mathematics. Thomas Hales, one of the authors writing in the Notices, says that such a collection of proofs would be akin to the sequencing of the mathematical genome.' You may recall a similar quest we discussed."*
## godelstheorem? (Score:4, Insightful)

Why is this tagged "godelstheorem"? It's not like incompleteness magically applies only to electronic computers, as opposed to meatbags...

## Re:godelstheorem? (Score:5, Insightful)

I honestly think they need to stop teaching the halting problem to freshmen CS majors. They're just too inexperienced to understand that theory and practice are two different things... so this whole "limits of computation" thing stifles their enthusiasm.

## Re:godelstheorem? (Score:5, Insightful)

I got a lot out of *Gödel, Escher, Bach* by Hofstadter and disagreed with *The Emperor's New Mind* by Penrose. One of Penrose's conclusions was that any attempt at artificial intelligence is necessarily incomplete, so it won't be possible, while Hofstadter said that it is possible to successively approximate something intelligent, and we can learn a LOT about ourselves in the attempts, and that in itself is worth it.

At least that is one of the many things I got from the two books.

## Re:godelstheorem? (Score:5, Insightful)

One of Penrose's conclusions was that any attempt at artificial intelligence is necessarily incomplete, so it won't be possible, while Hofstadter said that it is possible to successively approximate something intelligent, and we can learn a LOT about ourselves in the attempts, and that in itself is worth it.

Not only that, but Penrose doesn't offer any actual *proof* that AI is impossible, merely speculative reasoning. Therefore it seems doubly important that we continue our attempts to advance AI. Firstly, the journey itself is worth the effort, and secondly, we really need to find out for ourselves if it is possible.

## Re:godelstheorem? (Score:5, Informative)

Surely AI *must* be possible, with the human brain as evidence? One might argue that the brain is merely a biological computer not fully understood. Unless there is a "higher intelligence" giving human beings thoughts, then AI must be possible. I suppose this is analogous to a Human "telling" a machine what to think, though, and gives rise to the "who created the creator" argument, but that's getting a little offtopic...

This comment is somewhat similar to one below [slashdot.org], for which I apologise, but I wanted to expand on the point slightly.

## Re:godelstheorem? (Score:4, Interesting)

One might argue that the brain is merely a biological computer not fully understood.

Or, perhaps, humans could be nondeterministic and therefore not computers at all.

## Re: (Score:2, Interesting)

While computers are nothing more than finite state machines (with an insane number of states), their theoretical base can be found in Turing Machines and... those come in a non-deterministic flavour [wikipedia.org]. Take note of the "Equivalence with DTMs" section.

So, if the human brain works non-deterministically, it can be simulated by a deterministic "brain". If this doesn't require infinite amount of states (considering what we know of the universe, this most likely won't be the case), it can thus be simulated agai
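The determinization argument can be made concrete: a deterministic program can explore every branch of a nondeterministic step relation, for example by breadth-first search over configurations. This is only a toy sketch of the idea, not a full Turing-machine simulator, and the `step` relation below is an invented example.

```python
from collections import deque

def reachable(start, step):
    """Breadth-first search over a nondeterministic step relation.

    step(config) returns the *set* of possible successors; exploring
    all of them deterministically visits every configuration the
    nondeterministic machine could reach.
    """
    seen = {start}
    queue = deque([start])
    while queue:
        config = queue.popleft()
        for nxt in step(config):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Toy step relation: from n the machine may move to n+1 or 2*n, capped at 10.
step = lambda n: {m for m in (n + 1, 2 * n) if m <= 10}
print(sorted(reachable(1, step)))  # every state 1..10 turns out to be reachable
```

The cost is the usual one: the deterministic search may visit exponentially more configurations than any single nondeterministic run.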

## Re: (Score:2)

Realistically I think what we'll see is convergence. We'll start to produce biological computers and we'll understand how to develop our own biological systems as such. Alternatively, quantum computing may indeed provide computers powerful enough to develop systems capable of acting just like real, complex biological systems.

Then it just comes down to the usual argument of semantics. Have you created artificial intelligence, or has artificial intelligence failed because you instead just created real intelligence?

## Re: (Score:2)

I would actually welcome it if somebody invented intelligence. I do not care whether it is called artificial or anything else, but at least we would have something. As for what we have now instead - well, look around for the answer.

## Re: (Score:2, Insightful)

Bullshit. The only thing Gödel's incompleteness theorems prove is that no knowledge, not even human knowledge or even a deity's knowledge, can be both consistent and complete. Human beings are so ridiculously inconsistent

## Perhaps you misunderstand what a soul is. (Score:2, Insightful)

It seems to me that there's this revolutionary new religion out there, called Judaism, that has a creation myth that better describes a soul. Then there's this offshoot cult of Judaism, that takes it a step farther... so let me try to explain.

When (in the creation myth) Adam eats the fruit, having been told "the day you eat the apple, you die", he begins to die. That is, his body starts to fall apart. But as his body falls apart, his soul -- tied to his body -- starts to fall apart, too.

So when Adam com

## Re:godelstheorem? (Score:5, Insightful)

Of course you can prove a negative, usually by contradiction. Assume the opposite and see if that produces a contradiction. If it does, then the original negative is true.
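Since TFA is about proof assistants, it's worth noting that this argument shape is a primitive in them. A minimal Lean 4 sketch (double-negation elimination, the classical core of proof by contradiction; this is an illustrative example, not from the articles):

```lean
-- To prove P, assume ¬P and derive a contradiction (False).
example (P : Prop) (h : ¬¬P) : P :=
  Classical.byContradiction (fun hn => h hn)
```

Here `hn : ¬P` is the assumed opposite, and applying `h` to it produces the contradiction that closes the proof.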

## Re:godelstheorem? (Score:5, Insightful)

One of Penrose's conclusions was that any attempt at artificial intelligence is necessarily incomplete, so it won't be possible

But wouldn't Godel's theorem imply that it'd be impossible to build a flawless, all-encompassing intelligence, not necessarily an imperfect one? fsck, even I as a human (allegedly the "superior" intelligence) sometimes feel that my decisions are based solely on the output of a Random() call in my brain rather than on logical thought; no reason why a machine has to be different.

AI isn't about trying to build God, it's just about something that can learn new stuff, or at least that's my take on it.

## Re: (Score:2)

AI isn't about trying to build God ...

I hope we reach the singularity in my lifetime. But considering the state of AI at this point in time, I will be long dead before immortality arrives.

## Re:godelstheorem? (Score:5, Insightful)

But wouldn't Godel's theorem imply that it'd be impossible to build a flawless, all-encompasing intelligence, not necessarily an imperfect one?

Gödel's theorem concludes, simplified a bit, that it's impossible to know the truth, the whole truth and nothing but the truth about sufficiently complex math. In this context, sufficiently complex means "anything that includes addition and multiplication of natural numbers".

An AI may be able to prove that sum(range(1, n+1)) == n*(n+1)//2 for all natural numbers n; that is, it may output a string that's a valid proof. Smart ones can, dumb ones can't. Just like humans. Intelligence is what limits you.
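The closed form quoted above can at least be sanity-checked mechanically. This is a brute-force check over small n, not a proof; a proof assistant would instead verify the induction step once, covering all n:

```python
def gauss_sum_holds(n):
    # Compare the literal sum 1 + 2 + ... + n with the closed form.
    return sum(range(1, n + 1)) == n * (n + 1) // 2

# Exhaustive check for small n only -- evidence, not a proof.
print(all(gauss_sum_holds(n) for n in range(1000)))  # True
```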

But if a set of axioms and inference rules don't allow for a proof of a given theorem T, no matter how smart you are, even if your silicon brain is smarter than all of mankind, monkeykind and birdkind added up, you can't transcend any limitation that's found wholly outside your intelligence. As in this case, where it's the nature of the mathematics you've made that prevents T from being proven, and not your inability to find the proof.

Similarly, if we agree that no evidence or reasoning can prove or disprove the existence of god, then no AI can know whether god exists: it's not your knowledge or ability to reason that limits you, it's the nature of knowledge and reasoning itself.

Looked upon that way, Gödel's theorem doesn't say anything about AI. It says something about the world.

I don't know what a "perfect" intelligence is. One that can solve all NP problems in O(1) time and space? Or s/NP/Recursive/? Or just something that can solve all recursive problems in finite time? To implement that, all we lack is the ability to store unbounded information in a world of finitely many atoms. Can intelligence do something more than Turing machines? Does that mean the answer is no? Or that we need to connect nerves to the PCI bus?

## Re: (Score:2)

Exactly. Now, my question is: is an AI required to answer such questions? Is an AI even required to be able to add two numbers? Godel's theorem as I understand it essentially states that any system (in this case, our AI), when faced with some questions, will either fail to answer or give an answer that contradicts itself, but is that a problem?

As you say, Godel's theorem says more about the world than it does about intelligences per se, and if not being able to prove or disprove a specific statement mean

## Re: (Score:2)

Nice post. Some points:

[...] To implement that, all we lack is the ability to store unbounded information in a world of finitely many atoms.

Well, even if we had infinitely many atoms, we would still have problems. The Schwarzschild bound [wikipedia.org] limits the amount of energy that can be stored in a volume of space before creating a black hole. So, unless you create a magical way to store information without wasting energy, there's a limit on the amount of information you can store in a given space. Still, this is only a problem if you don't want to wait forever: if you allow unbounded space, the time of computation will be unbounde

## Re: (Score:2)

The Church-Turing thesis states that there isn't any computing device more powerful than a turing machine. It's unproven, however. Also, a google search for "more powerful than a turing machine" brings up a claim that a "real valued neural network" (that is, a neural network where the weights can take on any real-numbered value rather than rational-numb

## Re: (Score:2)

my decisions are based solely on the output of a Random() call in my brain rather than logical thought, no reason why a machine has to be different.

It's not random, it's just that every ten seconds you get a hard disk and feel like doing an integrity check.

No reason why a machine has to be different. [zomfg, I just googled "mecha porn". Rule 34 works, biatches].

## Re: (Score:2)

I heard that the Fields medalist Michael Freedman was trying to prove P!=NP, by reducing the problem to a corollary of Godel's result (the idea being to construct a mapping from NP-problems to "true statements", such that P-problems wind up in the subset of "provable statements").

The notion of using the biggest "negative" result in mathematics to solve one of the most elusive problems is truly inspiring.

I don't know if it'll work though, and I haven't heard much since a few years ago. I've long since given

## The whole discussion is flawed. (Score:3, Insightful)

Penrose, Hofstadter and you all share a basic assumption: that there exists a "real" property that the word "intelligence" denotes. I think that assumption is flawed.

The alternative view is that "intelligence" is just a term in a cultural classificatory scheme. This implies several things:

## Re: (Score:2)

> The derogation of women's and minorities' cognition does continue, however, but expressed in newer terms: instead of saying that women and negroes experience "henids" instead of "thinking in ideas," people today say that women or African-Americans on average have lower IQ than white men, and that IQ is subject to significant inheritance (see Lawrence Summers or The Bell Curve)

Let's be honest, the IQ thing is true. But the differences between men and women are rather small (3-4 points according to wikip

## Re: (Score:2, Interesting)

Can the human brain be simulated by a Turing machine? It's plausible that the human brain obeys physics - but is physics Turing computable?

Of course we can create physical machines that are the same as human brains - millions of them every day :)

## Re: (Score:3, Interesting)

Well, this is a decent point. It's impossible to create a truly random number generator in software, but you can create one in hardware which, for example, measures the decay of radioactive particles or some such.

But in that case it still doesn't mean AI is impossible; something is still artificial even if some of its capabilities have to be implemented in hardware rather than software.

## Re: (Score:2)

## Re: (Score:2)

Yeah, quantum mechanical fluctuations in the microtubules.

No one really takes this seriously, not to mention that we do *have* computers which are affected by quantum mechanics.

## Re: (Score:2)

I honestly think they need to stop teaching the halting problem to freshmen CS majors. They're just too inexperienced to understand that theory and practice are two different things... so this whole "limits of computation" thing stifles their enthusiasm.

I suppose. Personally I find it very useful in explaining why, generally speaking, the only way to really know how a program will respond to certain inputs is to test it. Theory and practice are not all that far apart. In theory, the halting problem only s

## Re: (Score:2, Interesting)

Again, restrict the problem space to what is feasible. This pathological example is one of infinite complexity that is impossible in practice (int cannot be substituted for an infinite-precision integer). There *will* be overflow, or there *will* be a state where n is the same as a state before, after a finite number of steps (there are only a finite number of possible values for n). So this program will conditionally halt for different values of n (of which there is a finite number).

I was talking about applying

## Re: (Score:2)

You are talking about static code analysis [wikipedia.org] and formal verification [wikipedia.org]. Basically, you can write proofs that a program works and halts, but programming languages and compilers that work with such proofs are very rare outside of research settings, mostly because proving a program is valid is a very long and tedious process. I suspect mainstream compilers will be getting better about checking if programs definitely halt and other easy proofs (like you seem to be discussing) in the future.

Also, there is the field

## Re: (Score:2)

They only use formal verification on small and isolated functional units. Using formal verification to ensure your floating point multiplier is correct is one thing, a possible and highly useful thing, using it to verify that your load/store scheduler is correct is another, extremely not-happening thing.

Formal verification is useful. But the vast, vast majority of AMD and Intel's verification is done via directed tests, directed tests, and then some more directed tests.

## Re: (Score:3, Interesting)

It doesn't *just* apply to computers, but it does apply *too*! And it is actually a very relevant point. Since Gödel proved that a formal system must be either incomplete *or* inconsistent -- and we do not really know which one applies to mathematics -- it may be that a "central" theorem of mathematics will contradict some other finding, later, which turns out to be equally central. Not very likely, but who knows?

## Re: (Score:2, Informative)

Except that we do, and it's incomplete.

Now, as to which applies to meatbags, my guess is they're both incomplete and inconsistent.

## Re: (Score:2)

## Re: (Score:2, Informative)

Sorry, I was wrong.

## Re:godelstheorem? (Score:5, Informative)

We already know that a statement like the continuum hypothesis can be proved neither true nor false from ZFC. We don't need Godel's theorem to prove "incompleteness or inconsistency" when we have lots of practical examples of its incompleteness. So your claim of "don't know" is false.

First order logic is consistent and complete. In fact, Godel proved this. Additionally, consider the set S of all possible propositions of set theory. Somewhere in the power set of S must lie the set of all true propositions. Just take these as your axioms. Voila, a formal system that is both complete and consistent. Your characterization of Godel's theorem is incorrect.

## Re: (Score:3, Insightful)

That is a brief summary but even if I misunderstood that, a proven incompleteness by itself still does not prove consistency. They are not mutually exclusive.

## Re: (Score:2)

Propositions are very simple things. They're just strings of characters. You can easily write a computer program to generate all propositions and there's no problem working with the set of all propositions. Given any set, the set of all subsets exists. This is the Power Set Axiom [wikipedia.org]. This set contains every possible set of propositions, so it certainly contains the set of all true propositions.

An important part of Godel's incompleteness theorems that is often overlooked is that the set of axioms has to be recu
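The "propositions are just strings" point is easy to make concrete: every finite string over a finite alphabet can be enumerated, shortest first. This is a toy sketch; a real proposition language would also filter for well-formedness:

```python
from itertools import count, product

def all_strings(alphabet):
    """Yield every finite string over `alphabet`, shortest first."""
    for length in count(0):  # length 0, 1, 2, ...
        for chars in product(alphabet, repeat=length):
            yield "".join(chars)

strings = all_strings("01")
print([next(strings) for _ in range(7)])  # ['', '0', '1', '00', '01', '10', '11']
```

Enumerating length-by-length matters: any fixed ordering that exhausts one infinite "column" first would never reach most strings.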

## Re: (Score:2)

My reply was that it applies to computers *too*. But of course it is a given that it applies in the same way: it would apply to formal systems that might be concocted by computer, as m

## Re: (Score:2)

## Re: (Score:2)

## Re: (Score:2)

Correct me if I'm wrong, but doesn't Godel's second theorem explicitly address Peano arithmetic, which is all first order logic?

True, you can't address countability (and by extension the halting problem) completely within first

## Re: (Score:2)

Peano arithmetic does indeed use first order logic. But it adds extra axioms and an axiom scheme (all instances of induction). It's the additional stuff.

Goedel's proof assumes a system of a certain complexity, usually one that can code arithmetic. One cannot code arithmetic in just first-order logic since the arithmetic axioms and scheme are not provable in first-order logic.

Gerry

## Pardon me (Score:2)

On the contrary, they often go hand-in-hand.

## Re: (Score:2)

We already know that a statement like the continuum hypothesis can be proved neither true nor false from ZFC. We don't need Godel's theorem to prove "incompleteness or inconsistency" when we have lots of practical examples of its incompleteness. So your claim of "don't know" is false.

Well said. As another example, the "C" of ZFC (axiom of choice) is independent of ZF.

First order logic is consistent and complete. In fact, Godel proved this.

No, he didn't. This is a common misconception because of the name of his "Completeness Theorem".

The "Completeness Theorem" simply states that whatever you can prove (syntactically) with first order logic within a system is exactly what you would accept (intuitively) as true. This is important because it guarantees that if you apply first order logic within a set of (true) axioms, you can only reach things you would normally

## Re: (Score:2)

What you are calling Completeness is Soundness.

Soundness: If I can prove it, then it is true.

Completeness: If it is true, then it is provable.

Gerry

## Re: (Score:2)

What you are calling Completeness is Soundness.

Not quite. Soundness is the converse of Completeness (in this sense, NOT in the sense of the Incompleteness Theorem):

Completeness (in the sense of the Completeness Theorem): if it follows by reasoning (i.e., it is "true"), it is also provable by applying first order logic to the axioms.

Soundness: if it is provable by applying first order logic to the axioms, it also follows by reasoning (i.e., it is "true").

It's important to note that First Order Logic is not an axiomatic system per se, it is only a way to derive

## Re: (Score:2)

Since Gödel proved that a formal system must be either incomplete or inconsistent...

There's a third possibility, which is "not powerful enough to be really interesting".

## Re: (Score:2)

That's a strange characterization. Gödel's first theorem demonstrated that any system will have such premises, using logic much like how the diagonal proof demonstrates that the number of irrational numbers is greater than aleph-null infinity. And, just like computers could sit down and work forever on just rational numbers, computers could sit down

## what is a central theorem? (Score:5, Interesting)

Godel of course proved that you can never have a complete list of all true statements in mathematics. So if they're just limiting it to "central" theorems, how do they decide what's central and what's not?

## Re:what is a central theorem? (Score:4, Interesting)

Godel of course proved that you can never have a complete list of all true statements in mathematics. So if they're just limiting it to "central" theorems, how do they decide what's central and what's not?

Just because a list is incomplete doesn't make it any less useful, does it?

I'm certain Wikipedia, any of Google's search results & my list of liqueurs to pick up on the way home are all incomplete... although they are all *highly* useful.

To answer your question, I would start with all the theorems that are the most employed/referenced by other theorems and work from there. Who knows what might turn up?

## Re: (Score:2)

I'm fairly sure this search [google.com] isn't very useful.

-nB

## Re:what is a central theorem? (Score:5, Interesting)

TFA also says:

When it comes to central, well known results, the proofs are especially well checked and errors are eventually found. Nevertheless the history of mathematics has many stories about false results that went undetected for a long time.

In other words... maybe, just maybe, there's a hole in one of the theorems we use all the time, and an error would have severe ramifications. Or at the very least, there might be an opportunity lurking in an assumption we didn't realize we were making, like the way non-Euclidean geometry was just sitting there waiting to be discovered.

They're not really "limiting" it to central theorems, so they don't need a definition. But the more important (read, broadly-used) a theorem is, the more interesting it will be to have the proofs double-checked by a computer.

That computer proof may itself be wrong if the programmers are wrong, but as with the proofs we already have, it's just one more set of "eyes" looking for problems.

## Re: (Score:2, Interesting)

It doesn't take Godel to figure out that you can't make a list of all possible true statements in mathematics. Given that even a child knows that there are an infinite number of true statements, I can only think that you cite Godel in an attempt to give your trivial observation an exaggerated air of authority.

## Re: (Score:2)

You must be trolling. As you well know there are many instances of infinite sets that can be enumerated effectively.

## Re: (Score:2)

Godel of course proved that you can never have a complete list of all true statements in mathematics.

I didn't know that was what he really proved, but in any case, why do we have to bring up Godel every time it comes to automated proving? Godel surely doesn't forbid us to search for proofs of a given statement, or even search for new theorems, if we can (in some automated fashion) determine if they are interesting or not.

And how do I enter non-ASCII characters? >-(

## Re: (Score:2)

Use the HTML entity for Gödel?

## Re:what is a central theorem? (Score:4, Informative)

Godel of course proved that you can never have a complete list of all true statements in mathematics. So if they're just limiting it to "central" theorems, how do they decide what's central and what's not?

Godel proved that you can never have a complete *and recursive* list of axioms for arithmetic. Regardless, "central" in TFA means well-known, which, IMHO, is fair enough. Letting a computer know proofs of all major results seems to be a necessary step on the way to building a silicon mathematician.

## Re:what is a central theorem? (Score:5, Funny)

on the way to building a silicon mathematician.

I think we have one at our department. He's, like, very stiff.

## Re: (Score:2)

That's why I've always preferred the *silicone* mathematician.

## Re: (Score:2)

## Re: (Score:2)

## Re: (Score:2)

Godel proved that you can never have a complete *and recursive* list of axioms for arithmetic.

The Incompleteness Theorem:

What's this about recursive?

## Re: (Score:2)

The "elementary arithmetic" (Peano arithmetic) is recursive.

## Re: (Score:2)

The set of all true statements of arithmetic does exist, but we cannot write a program that would reliably recognize true statements (i.e., always halt with a definite answer). We cannot even have a program that would recognize some set of axioms which we could use to derive all facts of arithmetic.

## Re: (Score:3, Interesting)

It's no worse than standard mathematics done by humans. There's no indication that humans can present a proof of a theorem that says it has no proof (for instance, can you prove "this statement has no proof of its truth" is true or false or neither?), so humans are no more powerful than a formal system in which one can construct Godel's incompleteness proof.

Specifically, I'm sure that the "central" theorems they're referring to are ones nearest to set theory or some other basic set of axioms. It's much ha

## Re: (Score:2)

*were* worse than the math done by humans. If it could be demonstrated that building a system that replicates what humans can do is impossible, rather than merely difficult, that would strongly suggest that human consciousness is Something Other.

I have no reason to believe that that is the case, and I am very strongly inclined to doubt it; but were it so, this might be one of the few plac

## Re: (Score:2)

Consider the computerised proof of the 4-Colour theorem.

It may be that there is a nice, clean analytical proof out there waiting to be discovered, but every attempt to find it so far has failed.

What was possible was to reduce the problem to one that was solvable by enumerating a finite set of possible graphs.

TFA (well, one of them) explains the process quite well.

I still spend the odd insomniac night trying to figure out a purely analytical proof, though - for no other reason than pure stubbornness and Ludd

## The exact opposite is true (Score:5, Informative)

## Um, no, no, NO! (Score:5, Informative)

That's Gödel's *Completeness* Theorem, for first-order logic: predicates, individuals, quantifiers ("for all," "exists"), and truth connectives (not, or, and).

The *incompleteness* theorem is about *arithmetic*: natural numbers defined in terms of 0 and the successor relation, addition and multiplication. For any set of axioms you pick for *arithmetic*, there are true statements of *arithmetic* that cannot be proven from those axioms.

I'm not completely sure how relevant the incompleteness of arithmetic is for what you're saying, but I am sure of this: first-order logic is complete, but the validity of a statement in first-order logic is undecidable. Therefore, you don't need to bring in Gödel's incompleteness theorem for arithmetic to conclude that in many important cases.

## Re: (Score:2)

For any set of axioms you pick for arithmetic, there are true statements of arithmetic that cannot be proven from those axioms.

Or false statements that can be proven. Consider the set of all well-formed formulas.

## Re: (Score:2)

Not that I think an automatic theorem prover is useless, just saying.

## Re: (Score:2)

The incompleteness theorem is about arithmetic: natural numbers defined in terms of 0 and the successor relation, addition and multiplication. For any set of axioms you pick for arithmetic, there are true statements of arithmetic that cannot be proven from those axioms.

No, it's not. It's providing a counterexample to a particular claim, that happens to be in the part of math that you term "arithmetic". In particular, there's no reason to expect that this incompleteness property is restricted to axiom systems derived from arithmetic.

## Re: (Score:2)

Yes and yes, except that your first sentence is a bit unclear. "That you can make a list of all true statements of mathematics" is just saying that the set is countable.

## Re: (Score:2)

I agree, I was just saying that the original sentence was somewhat ambiguous, but may be it's just me.

## Re: (Score:2)

It is thus possible to enumerate all true statements by enumerating all deductions.

It implies that there is no algorithm to decide whether a given statement is true -- but this has nothing to do with enumerating all true statements.

I'm confused. Please clarify.

If it's possible to enumerate all valid proofs I propose the following proving algorithm: Run through all valid proofs; once you get to a proof whose conclusion is the theorem you want to prove, return that proof.

[if you don't know whether your theorem is true or not, run the above algorithm on its negation as well].

What's wrong with my algorithm?

(Yes, I am a mathematician)

FWIW, at my university, computer scientists are presented with Gödel's theorem before the mathematicians are. Yes, I am a compu

## Re: (Score:2, Insightful)

If it's possible to enumerate all valid proofs I propose the following proving algorithm: Run through all valid proofs; once you get to a proof whose conclusion is the theorem you want to prove, return that proof.

[if you don't know whether your theorem is true or not, run the above algorithm on its negation as well].

What's wrong with my algorithm?

The problem is that some propositions P have the following two properties:

1: P has no proof

2: (not P) has no proof.

So your algorithm searches forever and you don't know if it just hasn't found anything yet, or if there is nothing to be found.
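The exchange above amounts to a semi-decision procedure: the search succeeds exactly when a proof exists, but on an independent statement it simply never returns. A toy sketch, where the stream of `(proof, conclusion)` pairs is an invented placeholder for a real proof enumeration:

```python
from itertools import islice

def prove(target, proof_stream, max_steps=None):
    """Scan an enumeration of (proof, conclusion) pairs until one
    concludes `target`.

    Without max_steps this loop may run forever -- exactly the failure
    mode for statements where neither P nor (not P) has a proof.
    """
    stream = proof_stream if max_steps is None else islice(proof_stream, max_steps)
    for proof, conclusion in stream:
        if conclusion == target:
            return proof
    return None  # only reachable when max_steps cuts the search off

# Toy "enumeration of all valid proofs" with made-up conclusions.
proofs = [("axiom", "A"), ("modus ponens on A", "B")]
print(prove("B", iter(proofs)))                 # modus ponens on A
print(prove("CH", iter(proofs), max_steps=2))   # None: search exhausted
```

The point of the parent comment is that no choice of `max_steps` rescues this: when you stop, you cannot tell "no proof exists" apart from "not found yet."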

## No, it doesn't work. (Score:2, Insightful)

That needs a couple of small amendments:

## Re:The exact opposite is true (Score:5, Informative)

> How would you even define a notion of "follows from the axioms" that didn't presuppose proof in a finite number of steps?

GP is talking about a concept known as logical implication. A set of axioms *logically implies* a statement if that statement is true in every model satisfying those axioms. A model is a particular interpretation of a set of axioms. Say, if your only axiom is (For all x, For all y (x+y = y+x)), then an abelian group of order two is a model, and so is Z, and so is any mathematical structure where the operation + is, in fact, commutative.

"Provable" (or "deducible"), on the other hand, means that there is a finite sequence of formulas which constitutes a formal deduction of a statement from the axioms.

As logicians, we work in the meta-theory, which is widely accepted to be ZFC. A model is just a certain kind of set. Giving a rigorous definition here would be excessive, but we can at least mention that models are "big" objects which include the universe of discourse. So a model for the natural numbers omega would, naturally, include the infinite set omega. "A logically implies B" is a statement about how A and B relate in various models. Compare that to "B is deducible from A", a statement about the existence of a *finite* formal deduction, which, in meta-theory, is a different kind of set. A deduction is literally a record of finitely many formulas over a finite alphabet.

## Re:what is a central theorem? (Score:5, Informative)

> Godel of course proved that you can never have a complete list of all true statements in mathematics.

Bzzzt. Wrong.

What Godel proved was that not all mathematical truths can be described by a finite axiom system. The statement that there is a single empty set cannot be proved. Similarly, continuum = aleph_1 cannot be proved; in both cases one may argue that these are so "obvious" that they need to be accepted as axioms.

Note that continuum = aleph_1 was a strongly accepted conjecture, much as the Riemann hypothesis is today.

I studied set theory and models. It is fascinating that you can have a countable model of set theory! This model is basically built from a countable set with countable many subsets accepted as sets, etc. The rest of the subsets an "outsider" can see are not known inside the model. We denote this model (which is a counatble set for the ousider aka meta theory) by M. The natural numbers are in M, denote it by A. A is known in the model and A is an infinite (countable) set.

The set containing the subsets of A, denoted by P(A), is different depending on whether you look inside or outside the model! Outside, it is an uncountable set, of course. In the model world it is Pm(A), and it appears uncountable again, but for the outside observer it is actually the intersection of P(A) and M. Bear with me another minute!

In the model world of M, Pm(A) appears not countable. What does that mean? It means that there is no 1-1 mapping between A and Pm(A). Outside, there is such a mapping of course, since both sets are countably infinite. This mapping (a function) is actually a set (everything is a set: functions, relations, etc.), so a mapping F between A and Pm(A) exists outside. However, F is not an element of M, so it does not exist inside the model.

There is a way to extend the model M into something called M(F) by 'forcing' F to be in M. Basically we add F and everything else that needs to be added to still have a valid theory inside M. It is not trivial to do this; it was discovered and proved by P. Cohen, an analyst (not a set theorist). Set theorists have of course run with it ever since. Careful picking of F allows different M(F)s to be created, and that leads to results showing that continuum = aleph_1 is not the only possibility. One can 'force' a bunch of other alephs under the continuum. (Exactly which aleph the continuum can be is also interesting, to set theorists at least.)

Now there is one model called L which consists of the 'constructible' sets and nothing else. In L continuum=aleph_1. There are in fact theorems based on the assumption that V = L, that is V the world consists of only constructible sets.

You may wonder how this L can be defined. I cannot go into the details, but one eye-opening fact is that you can define Lm inside the above model M! This Lm is a countable model of L, and in some sense a minimal model of set theory, if I recall correctly.

So one may take the position that we want mathematical truths in L, the minimal system. (This resonates with Occam's razor.) Of course, note that there is no axiomatic system describing L.

I want to emphasize that if we fix a model then each statement has a truth value; but there exist truths which cannot be proved working inside the model only.

For the usual applied math we could say we assume V=L. (Or any other well defined model, for that matter.) Note again that aleph_1 == continuum in L, because there are no other alephs that could fit "between". The mathematicians inside L see it like this: we cannot prove that aleph_1 == continuum; in fact there could be something in between, but it is not constructible from what we have.

I hope this helped.

## Re: (Score:2)

What Goedel proved is that a sufficiently complex formal system cannot be both consistent and complete when using a recursive set of axioms.

In other words, for any useful set of mathematical axioms, there will always be legal statements in the language that are undecided by the axioms. The statements can be assumed either true or false and the system will still be consistent.

That's the message to take from Goedel's Incompleteness Theorem. Mathematics can never be complete because there is always a new sta

## Tagged Supercomputing (Score:2, Interesting)

## Re:Tagged Supercomputing (Score:5, Informative)

No, in general formalizing a proof does not require a lot of computer power. Usually it takes a lot of manpower instead. Most current proof assistants are actually proof verifiers. The user has to break the proof down into elementary steps that the program can verify.
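To make "elementary steps" concrete, here is a toy example (my own sketch, in Lean rather than any system from the article), where each tactic line is one step the verifier checks mechanically:

```lean
-- Each tactic line is an elementary step checked by the kernel:
-- first apply the implication, then discharge the goal with a hypothesis.
example (P Q : Prop) (hp : P) (hpq : P → Q) : Q := by
  apply hpq
  exact hp
```

The user supplies the steps; the program's only job is to confirm that each one is a legal application of the rules.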

Now sometimes a computer is used to verify a lot of cases by brute force computation. Often this is not done in an actual proof assistant but within a computer algebra package. But this is also met with a sceptical eye.

The four colour theorem is perhaps one of the most famous formalized theorems. It was originally proven by a large computation, but no one could really verify this. Recently (2004) it was formalized completely in the proof assistant Coq (actually a custom extension). This formalized proof did require a lot of computer time, although just on a normal computer, not a supercomputer.

## Which central theorems (Score:2)

aren't already proven? Principia Mathematica did the basics and I always assumed more advanced theorems became proven as required.

Maybe someone just wants to do Principia Mathematica volumes II through L but doesn't that already in effect exist?

## Re:Which central theorems (Score:4, Informative)

Which central theorems aren't already proven? Principia Mathematica did the basics and I always assumed more advanced theorems became proven as required. Maybe someone just wants to do Principia Mathematica volumes II through L but doesn't that already in effect exist?

Well, one issue is that the Principia Mathematica undoubtedly has errors in it. No human being could ever write a book of that length without making a single mistake. So there could be some value in producing the same results on a computer, which doesn't make the same kind of mistakes a human does. Another issue is that the Principia Mathematica works within a certain axiomatic framework, and I believe that framework is different from the most popular axiomatic framework used today, which is Zermelo-Fraenkel set theory with the axiom of choice (ZFC). (They were both produced around 1910, but the WP article says that PM used a complicated system of types, which is probably different from anything in ZFC.)

More broadly, for people who are interested in the foundations of mathematics, there's no clear way to decide that one set of axioms is superior to another. Some people feel that ZFC is too strong, because it asserts the existence of things that can't be explicitly constructed. Those people may be interested in seeing how much of mathematics can be proved using purely constructive methods. With a weaker set of axioms, some results may be impossible to prove, while others may be possible to prove, but the proofs may be extremely long compared to the proofs in ZFC -- so long that using a computer may be a real help.

Still other people are interested in seeing what can be done in an axiomatic system that's stronger than ZFC. For instance, such a system may have sizes of infinities that are larger than the sizes of the infinities in ZFC, and they may even have creatures like the set of all sets. One thing that tends to happen when you try to make stronger axiomatic systems is that it becomes much more difficult to avoid internal inconsistencies. I can imagine that a computer could be helpful in finding things like that. If you're fiddling around with the computer proof system and it comes up with a result that 1+1=3, then you've learned something interesting: your set of axioms is bogus.

One thing you really have to be careful about if you're working in this kind of field is that you don't want to inadvertently assume some result that someone else has proved in some other axiomatic system, and that seems obvious to you, but that actually isn't provable (or hasn't yet been formally proved) in the system you're working in. That's the kind of thing where I'd imagine a computer system would be really helpful. It wouldn't allow you to make those assumptions. In general, if you look at almost all published mathematical work, it never states which axiomatic system it's assuming, and that's because in most fields of mathematics we expect that the results are unlikely to depend on which particular foundation you're working in. E.g., I doubt that Wiles' proof of Fermat's last theorem even bothers to state that it starts from ZFC, and most mathematicians probably have a strong expectation that it doesn't matter whether you pick some other foundation, Wiles' proof will still be valid. Nevertheless, it's possible that that's not true. Abraham Robinson, for example, claimed that ZFC had been carefully engineered to make the real number system work correctly, and therefore that if you wanted to use a different system of numbers (such as the hyperreals, a system that includes Newton- and Leibniz-style infinities), you might be better off using different axioms.

## Re: (Score:2)

## Re: (Score:2)

> Except if it has one of the first Pentium's...

Which doesn't make the same kind of mistakes humans do.

> Seriously, even if the "math prover" is right, who proves the "math prover"?

The point is that if a human and a computer agree it is more likely that they are right than when two humans (or two computers) agree because the weaknesses of the human and the computer differ.

## Metamath Proof Explorer (Score:5, Informative)

## Important for the long term evolution of CAS (Score:5, Interesting)

CAS, or Computer Algebra Systems, represent one of the most useful tools for handling practical application of complex mathematics. The package Feyncalc, built on Mathematica and used for high energy physics, is one such example. The problem with using these systems is that they can't be trusted by the people using them. There is always that risk of "did the programmer do something that buggered this case" or "did they get that formula wrong when they translated it to code?" The traditional answer is a combination of by-hand verification and learning to "trust" certain abilities of some systems over time - lots and lots of use creates a certain confidence that the bugs have been shaken out of well-used functionality. But that lingering doubt remains - "is it REALLY right?"

There are philosophical questions at the root of this issue. On the most abstract level is the question of whether knowing something is true is important compared to knowing WHY it is true - purists might argue that if we don't know the latter the problem isn't answered. I'll state up front that for the problem I'm interested in solving - KNOWING my answer to a problem is correct - I'm willing to trust a machine that is proven by humans to be able to verify proofs as correct to verify MY solution to a particular problem. Or, put another way, that once the correctness of the checker is proven, all statements of correctness from that checker are also proven. This is not a universal assumption, but if one is willing to make it, things get interesting.

Most proofs mathematicians attempt are not the types of problems posed by high energy physics - proving the solution to a specific integral is the correct solution would be of minimal value as a mathematical revelation. For the practical focus of scientific use of CAS however, the question of whether the system just gave the correct solution to that integral is all important. Moreover, the mathematical axioms behind the statement of correctness are not of immediate concern - they interest mathematicians, but the physicists want to USE that result. There's a contradiction here, because to be trustworthy in a mathematical sense the foundational system within which that integral was evaluated and what assumptions were made in the process MUST be considered. What to do?

Trustworthy computer verification of proofs offers an answer to this dilemma. A CAS designed to incorporate proof logic awareness into its core system and algorithms could be asked to produce a "proof" of its answer based on the steps it used to create that answer. This proof can then be subjected, just like any human written proof, to the rigors of verification. A human COULD (in theory) do the checking if they wanted to. In practice, an incorporated proof verifier could examine the CAS's proof for flaws. If none were found, for the specific problem in question, the user could then know that the answer they have IS correct, regardless of any potential flaws in the CAS (which will hopefully be reduced dramatically by the design rigors needed to implement routines that can support supplying the proof logic in the first place). The trust tree has been reduced to the proof verification routine and the software and hardware needed to run it, which is a MUCH smaller trust tree than the whole of the CAS.
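As a minimal illustration of this "check the certificate, not the whole CAS" idea - a hypothetical sketch of my own with polynomials as coefficient lists, not any real CAS interface - the untrusted integration step is verified by a much simpler, independently trusted differentiation step:

```python
from fractions import Fraction

def differentiate(poly):
    """d/dx of c0 + c1*x + c2*x^2 + ... given as a coefficient list."""
    return [i * c for i, c in enumerate(poly)][1:] or [0]

def integrate(poly):
    """The 'untrusted CAS' step: an antiderivative with zero constant term."""
    return [Fraction(0)] + [Fraction(c, i + 1) for i, c in enumerate(poly)]

# Untrusted answer from the "CAS": an antiderivative of 3 + 6x^2.
p = [3, 0, 6]
P = integrate(p)  # claims 3x + 2x^3

# Trusted, far simpler check: differentiating the claim must give back p.
assert differentiate(P) == p
```

The trust tree here is only the five-line `differentiate`, not the integration routine, which mirrors the point about reducing trust to the verifier.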

The practical realization of such a system is probably decades in the future. It could be done only as an open source system, where the entire mathematical and scientific community contributed to an ever expanding body of trusted knowledge which could be built upon. Many extremely difficult problems would need to be solved - an immediate problem is how to organize mathematics into a coherent framework in a scalable way via computer. Category theory is probably the answer, but what does that mean in terms of system implementation? And what about the programming language that must be put behind the mathematical definitions, even once the conceptual framework of category theory in a computer is addressed?

## Re: (Score:2)

Hence why programs proving theorems should be proved also.

Also, the program should catch OS inconsistencies and duly alert the user of the CAS to them, as those could create errors as well.

## Re: (Score:2)

And who proves the programs that prove the programs?!?!?!

## BS (Score:2)

It is nothing but BS that the computer is better than a human. Not to mention profound stupidity to think that the computer is infallible, given that it is programmed by humans.

I read the first bit of Freek Wiedijk's article and it's political nonsense. All they've done is use the word "formalization" to describe the process of encoding mathematics "in the computer". What this does is hijack the common notion of formalization for use for their own purposes. Because formalization is good, right? Well, not if yo

## Re: (Score:2)

It is nothing but BS that the computer is better than human.

It's certainly better at performing repetitive tasks reliably.

Not to mention profound stupidity to think that the computer is infallible i.e. it is programmed by humans.

Where was that claimed in the article?

Because, formalization is good, right? Well, not if you read and really understand what it does, and infinitely more important, does NOT mean in this context.

What do you think formalization means? How is computer formalization different?

Point of fact, using computers does NOT take human minds out of the equation. It just hides them giving people the illusion of a more effective system.

Computer formalization is just a tool, like a calculator. It isn't meant to replace humans, but instead be used by them.

## Re: (Score:2)

Ask a computer to do something complex correctly once and it'll do it right every time.

Ask a human to do something complex correctly once and they will regularly screw it up.

You're right that computers aren't infallible and yes they are programmed by humans but that ignores the thing computers are better at than humans - repetitive tasks. These are often key to proving something. The point is to program something you often only have to get it right once then the computer will get it right as many times as y

## bug in the papers on bugs (Score:2)

On average, a programmer introduces 1.5 bugs per line while typing.

That's quite wrong; a better estimate is that, on average, a programmer introduces from 1.5 up to 5 bugs per 100 lines while typing.

A bug in a paper about bugs... quite fitting.

## Re: (Score:2, Informative)

## Re: (Score:2)

## Proof is discrete (Score:4, Insightful)

A formal proof is not a numerical calculation. A formal proof is, basically, a set of premises, a conclusion, and a set of steps that justifies the conclusion, given those premises and a set of rules that define your proof system. The premises and conclusions are logical formulas, which are basically symbolic trees, and the proof steps relating the premises to the conclusion are all discrete too. So there is no essential numerical calculation going on at any point here.
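To make this concrete, here is a toy sketch of my own (not from the article): formulas are nested tuples (symbolic trees), and a proof is a finite list of lines, each of which must be a premise or follow from earlier lines by modus ponens - entirely discrete, with no numerics anywhere:

```python
def is_valid_proof(premises, proof, conclusion):
    """Check that every proof line is a premise or follows by modus ponens."""
    derived = []
    for formula in proof:
        # A line is legal if it is a premise, or if some earlier lines
        # give both A and ('implies', A, formula) for some A.
        if formula in premises or any(
            ('implies', a, formula) in derived for a in derived
        ):
            derived.append(formula)
        else:
            return False
    return conclusion in derived

# Formulas as symbolic trees: 'p' and ('implies', 'p', 'q') meaning p -> q.
premises = ['p', ('implies', 'p', 'q')]
assert is_valid_proof(premises, ['p', ('implies', 'p', 'q'), 'q'], 'q')
assert not is_valid_proof(premises, ['q'], 'q')
```

Checking a proof is just this kind of finite, symbolic bookkeeping, which is why the FPU never enters the picture.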

## Re: (Score:3, Insightful)

Knowing a little bit about formal proofs and a fair bit about programming, but nothing about the computerized verification or generation of formal proofs, I really doubt the FPU is used for this stuff.

## Re: (Score:2)

Unfortunately his presidency will either be incomplete or inconsistent.

And then there are leaders out there like Castro, who doesn't seem to halt.

## Re: (Score:2)

And that's despite 638 attempts [wikipedia.org] at forced termination.