Science

The "Omega Number" & Foundations of Math

speck writes "Here's a link to an article in New Scientist about mathematician Gregory Chaitin, who seems to have thrown some of the basic foundations of math into question with his work on the 'omega number.' Among the more provocative statements in the article: '[Chaitin] has found that the core of mathematics is riddled with holes. [He] has shown that there are an infinite number of mathematical facts but, for the most part, they are unrelated to each other and impossible to tie together with unifying theorems. If mathematicians find any connections between these facts, they do so by luck.' Also of interest is the transcript of a lecture Chaitin gave at CMU, which explains some of the theory in quite accessible language."
  • by Anonymous Coward
    Actually, it's weirder than that. Between any two distinct real numbers, there are countably many rationals and uncountably many irrationals.
  • by Anonymous Coward
    I'm having a hard enough time not simply dismissing this as a fluff piece.

    First question, and this is an important and nontrivial one: why does omega exist, and are you sure it's a fixed finite number? I have a degree in math and I have no idea where to even begin proving that it exists. It occurs to me that this is a ratio-of-infinities problem: any halting Turing machine can be made into a non-halting Turing machine with relative ease, so there is at least one non-halting Turing machine for every halting Turing machine. I'd assume that the relation is something like this: non-halting >> halting, in much the same way that card(R) >> card(Z)

    I propose this isomorph of what he is saying:

    • the numbers comprising Z are infinite. I'll leave the proof as an exercise to the reader; it is true.
    • The numbers comprising R are infinite and R >>> Z.
      in fact, R is so much larger than Z that the ratio between the two cannot be calculated and the limit of card(Z)/card(R) -> 0
    • Since there are infinitely more numbers than those comprising Z, anything we know about Z is known about a remarkably minute set of numbers, so it can be extracted that we know nothing about numbers as a whole. I.e.: we know a lot about Z, but that means zip in the grand scheme because Z makes up ~0% of the numbers.

    Or something like that.

    My instinct is that he is trying to make sweeping statements about what we know of math as a whole. That's an easy one: we know next to nothing, but what we do know can be proven. In my school we started with 5 axioms and proved our way through the rest; there are no holes. Does that mean we know much about math? Hell no! But it also doesn't mean that math is full of holes or bullshit. Now if he is trying to prove that there is a lot more to know than we can prove, then I'll buy it. Are there infinitely more things to know than can be proven? Certainly.

    I'll do the rest for you: what can be proven is countably infinite while what cannot be proven is uncountably infinite, thus card(provable)/card(unprovable) -> 0, so the most we can possibly know and prove is ~0% of what there is to know. => We know nothing about math . . . QED

    Only that's a remarkably simplistic way of seeing things.

  • by Anonymous Coward
    The only thing which could really throw the foundations of mathematics into an uproar would be to show that some hypothesis can be proven to be both true and false

    Not at all! If you found a system of mathematics in which some theorem could be proven both true and false, mathematicians would just say that your system was inconsistent (italicised 'cos I'm using that word in its maths jargon sense, not its everyday sense) and be uninterested in it. They'd be uninterested 'cos in a formal system, if any false theorem can be shown to be true, then any theorem can be shown to be true (if 2=1, it logically follows that I'm the Pope).

    The basic foundation of maths that Chaitin's work questions is the belief that mathematics is a solid body of knowledge waiting to be discovered, and that we can "get to" proofs in any part of it from where we are now. His work shows that maths is more like the distribution of matter in space: lots of lumps all over the place, with huge gaps between them and no way to travel from one to the other.

  • Posted by ahbbuddha:

    Are you sure they're not compressible? I'm just thinking of a trivial example like gzip. Every algorithm can be expressed as an implementation of gzip along with a string of 0s and 1s that, when fed into it, unpacks into the original algorithm. If the two together are smaller than the original algorithm, then it is compressible. This obviously isn't always the case, but is certainly true some of the time.
  • Posted by _LFTL_:

    You could also talk about rings such as the integers mod 3, where 2+2=1. (The integers mod x can be thought of as the set of all possible remainders when you divide a positive integer by x. For example, 7 mod 3 = 1.) So maybe freedom should be the freedom to say 2+2=1 :).
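
    For anyone who wants to poke at this, Python's % operator implements exactly that remainder arithmetic, so the integers mod 3 are easy to play with (a throwaway sketch of mine, not from the poster):

        # Addition in the ring of integers mod 3: every result is
        # reduced to its remainder on division by 3.
        def add_mod(a, b, m=3):
            return (a + b) % m

        print(add_mod(2, 2))  # 1, since 4 mod 3 = 1
        print(7 % 3)          # 1, the example above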
  • Posted by ahbbuddha:

    Hmmm... I agree that stuff over the average isn't always compressible, but once in a while it is. Once is enough to invalidate the author's point that it's not compressible. Now you have me wondering why we are usually interested in the (usually) compressible stuff.
  • by Paul Crowley ( 837 ) on Sunday March 18, 2001 @01:03PM (#355164) Homepage Journal
    If you don't think this is surprising, maybe you haven't fully understood it. He defines a number, W_UTM, which must have some perfectly ordinary value - it's not one of these weird, undetermined things, like the continuum hypothesis or something. It's just the probability that a random Turing machine will halt. Every Turing machine either halts or doesn't halt, so if only you could solve the halting problem you could get a good approximation to W_UTM in a moment. Since you can't, W_UTM is unknowable. But it's *much more unknowable than you might expect*. To quote the lecture:
    So this becomes a specific real number, and let's say I write it out in binary, so I get a sequence of 0's and 1's, it's a very simple-minded definition. Well, it turns out these 0's and 1's have no mathematical structure. They cannot be compressed. To calculate the first N bits of this number in binary requires an N-bit program. To be able to prove what the first N bits of this number are requires N bits of axioms. This is irreducible mathematical information, that's the key idea.
    Dwell on that a little. That is serious weirdness.
    --
  • This number depends on your choice of M, but that's no big deal.

    If they can't even prove that Omega is independent of M (and the TM encoding scheme), then Omega surely doesn't deserve the title "probability that a random TM halts". It's just some non-canonically defined uncomputable but definable real number. There are lots of those.

    Give me a canonically defined "inherently meaningful natural constant" that's uncomputable, and I might be impressed. The criterion for "canonical" is that all sufficiently advanced foreign intelligences will know about the number, just like they know Pi, E etc. They won't know the number's digits, of course, but they will know its definition.

    --

  • Showing that 1.999... == 2 is not a "parlour trick." Indeed, the foundation on which such proofs lie is one of the bases of the set of real numbers. The real numbers allow for irrational numbers such as the square root of 2.
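
    For reference, the usual elementary version of the argument (the rigorous route, via least upper bounds, is the Analysis-class material suggested below) runs:

        x = 1.999...
        10x = 19.999...
        10x - x = 19.999... - 1.999... = 18
        9x = 18, so x = 2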

    If you want to study this idea, I would suggest a good undergraduate-level Analysis class. The ideas you would want to pay attention to are least upper bounds, greatest lower bounds and the least upper bound theorem.

    As far as the original article - there are always going to be basic, unprovable ideas behind mathematics.

    Actually, if you want to bring up a point with one assumption that math relies on, ask why math must assume that 1 does not equal 0.

    -Hank, mathematics major
  • ...that New Scientist uses coin flips to generate the programs which run their site.

  • A Turing machine can require an arbitrary amount of data to encode. If you encode them as integers, then the numbers W_1000 (probability that a random Turing machine from 1 to 1000 will halt), W_10000, W_100000, etc. are well-defined for that encoding. But how do you know that these probabilities converge?
  • You can compress your number a bit! Call it H (for halting). Consider the following compression scheme: if TM(n) halts after at most 10000n steps then omit digit n. This is computable in compression and expansion, and will reduce the amount of data needed to transmit the first N digits of H quite a lot, even if you have to send the decompressor over first.
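
    Here is a sketch of that scheme in Python, assuming a hypothetical helper halts_within(n, steps) that simulates Turing machine number n for a bounded number of steps (the simulator is elided; this is my illustration, not the poster's code):

        # Compress the first N digits of H by omitting every digit the
        # decompressor can recompute itself: if machine n halts within
        # 10000*n steps, digit n must be 1, so it need not be sent.
        def compress(digits):  # digits[n] is 1 iff TM(n) halts
            return [d for n, d in enumerate(digits)
                    if not halts_within(n, 10000 * n)]

        def decompress(compressed, N):
            out, rest = [], iter(compressed)
            for n in range(N):
                if halts_within(n, 10000 * n):
                    out.append(1)  # omitted digit, recomputed here
                else:
                    out.append(next(rest))
            return out

    Both sides run the same bounded simulation, which is what makes the scheme computable in both directions.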

    For W_UTM, no such compression exists. If you want to send N bits of W_UTM to someone, then, including sending the decompressor program, you HAVE to send them at least N bits of data.

  • Mathematics has things that cannot be explained and may never be explained. Even some of the "fundamental truths" in math have no proofs. We are just scratching the surface of what we know in math, and maybe someday we will be able to prove it all, but that is a long way off.
  • Science

    Any branch or department of systematized knowledge considered as a distinct field of investigation or object of study; as, the science of astronomy, of chemistry, or of mind.

    From that, I believe you can call mathematics a science.

  • The author's work deals with the vast number of statements that are true or un-true, but for which no 'proof' can ever be discovered. They are simply true by random chance.

    This seems to be, in the end, a fairly straight line conclusion from Godel (and actually fits how I try to explain Godel to non-math people).

    Think of it this way: Godel tells me a single proof cannot encompass the system because there are undefined statements. Even if I add them one by one to the system as axioms, I never finish. Why? Because my 'new' system also suffers from this limitation.

    If you want 'vast', then simply take the system of your choice and keep adding axioms... eventually you will have a vast list of unknowable (relative to the first system) statements and meet your explanation.


    Herb

  • Oh, for sure. I quite admire it.

    But it sure does provide great proof of what happens when you let the ignorant moderate the moronic, like Slashdot does: you get outright lies marked up as "insightful."

    Thank god we don't allow true democracy in our nations. The outcome would be horrendous!

    --
  • by FFFish ( 7567 ) on Sunday March 18, 2001 @01:16PM (#355174) Homepage
    This isn't a "Score:3 Interesting"!

    It's a "Score 3: Good Hoax." Ludlum didn't write any books titled "The Omega Number." Geeesh. You've been *had*, fool.

    [Here] is what Ludlum has really written. (Road to Gandalfo is, btw, quite a good hoot!) [ludlumbooks.com]

    --
  • I was fortunate to learn bits of this theory from Chaitin about five years ago, when he was visiting Santa Fe. As folks here have noted, the work has its roots in Godel's Incompleteness Theorem, but there's a huge amount more detail in it. In particular, his work is highly specific and constructive, boiling down some very abstract concepts into specific machinery that is graspable. Definitely worth time if you are interested. I hope that logic courses will use this material as a basis for instruction.
  • Penrose's "proofs" are hotly disputed, to say the least. I personally think they're bunk (and I did study some of this stuff to at least honours level), but his work is so dense I may well be misinterpreting it.
  • 1) a = b
    2) a^2 = ab
    3) a^2 - b^2 = ab - b^2
    4) (a - b)(a + b) = (a - b)b
    5) a + b = b
    6) 2b = b
    7) 2 = 1

    The problem with this is that to go from step 4 to step 5 you need to divide by zero. If you had a step 4.5 in there, it would look like this:
    (a-b)(a+b)   (a-b)b
    ---------- = ------
      (a-b)       (a-b)

    ...but a - b = 0, so this step divides by zero.

  • Penrose wrote a series of books (3 last I counted) which basically made the same claim: because humans have a special insight into Mathematics which computers provably do not, computers cannot be intelligent and computation is not an appropriate model for a theory of intelligence.

    Well, if human mathematicians are basically wandering around the landscape digging up theorems at random, that sort of blows Penrose out of the water, doesn't it? It would mean that the special "human insight" into Mathematics was essentially a large sequence of random discoveries.

  • that God has retired to a casino and is often heard to exclaim, "Wow! I didn't know that would happen!!"
  • Yes, that is true. A better interpretation would be 'This statement has no proof in Principia Mathematica and related formal systems'
  • by PureFiction ( 10256 ) on Sunday March 18, 2001 @01:38PM (#355181)
    There are a lot of posts here about how this is simply a rehash of Godel's theorem.

    This is partly true, but not the point. Godel showed that incompleteness exists in any type of formal system capable of self-reference, a la the infamous "This sentence is false" translated into an equivalent in a formal system. The original is rather obscure and reads:

    On Formally Undecidable Propositions in Principia Mathematica and Related Systems

    • "To every ω-consistent recursive class κ of formulae there correspond recursive class-signs r, such that neither v Gen r nor Neg(v Gen r) belongs to Flg(κ) (where v is the free variable of r)."


    This is all well understood and old news at this point. What the author has done is taken Godel's theorem, and the halting problem, and turned them around a different way.

    The essence of what he is trying to say is summarized nicely in this paragraph of the conference log:

    • "So I had a crazy idea. I thought that maybe the problem is larger and Gödel and Turing were just the tip of the iceberg. Maybe things are much worse and what we really have here in pure mathematics is randomness. In other words, maybe sometimes the reason you can't prove something is not because you're stupid or you haven't worked on it long enough, the reason you can't prove something is because there's nothing there! Sometimes the reason you can't solve a mathematical problem isn't because you're not smart enough, or you're not determined enough, it's because there is no solution because maybe the mathematical question has no structure, maybe the answer has no pattern, maybe there is no order or structure that you can try to understand in the world of pure mathematics. Maybe sometimes the reason that you don't see a pattern or structure is because there is no pattern or structure!"


    He then describes how randomness would indicate an irreducible statement of truth that could not be compressed by finding a 'proof' that proves this truth. The idea being that a 'proof' is a program or function that generates truths, or verifies the truth of a statement.

    Again, this is not groundbreaking; Godel proved essentially the same thing with his proof. The Turing halting problem is another variation, but this is where it gets interesting.

    The author takes the halting problem and instead of determining whether the program halts or not, determines the probability of the program halting given a random program produced by flipping coins.

    The equation to solve this is straightforward: the 'proof' which is used to determine whether the program halts or not is the computer itself, and the statement is a program produced by random bits from a coin toss, each bit determined by an individual coin toss.

    What you then end up with is a statement that is well defined in number-theory terms, but maximally unknowable. Every sample program produced from the random coin toss is a straightforward sequence of 1s and 0s, but the statement as a whole is irreducible.

    Again, this seems rather unrelated, until you consider proofs as the computers which calculate the truth or non truth of a given statement.

    It then becomes obvious that the truth or non-truth of the statement requires a proof that can reduce the statement into a true or non-true result. And there are a huge number of situations where such a proof cannot exist.

    So, Godel's theorem deals with incompleteness in a formal system, where a single proof cannot encompass the entire set of true and un-true statements.

    The author's work deals with the vast number of statements that are true or un-true, but for which no 'proof' can ever be discovered. They are simply true by random chance.

    Which holds a lot of interest for physicists because they have been dealing with truths that are random and true for no provable reason for decades...

  • But in _any_ mod, 2+2 = 4 is a true statement. It's just that in mod 3, 2+2=1 is also a true statement, since of course 1=4 mod 3.
  • Goedel's proof just states that it is impossible for a (sufficiently powerful) symbolic logical system to be both complete and consistent. But what about 99.99999%? These other guys try to quantify how good or bad a system can be.

  • (don't have the book it's in here in florida right now)

    Can any Floridians confirm this?


    Rich

    ------
    "Could you, would you, with a goat?"

  • However, as mathematics advances, do we know if there are theorems which can never be proven?

    It depends what you mean by what's a proof and what isn't a proof. If you fix a particular proof system like ZFC, then yes, there are theorems which can never be proven. The Continuum Hypothesis (that there is no cardinality between the size of the natural numbers and the size of the real numbers) cannot be proven or disproven within ZFC.

    Other proof systems do enjoy completeness; any theorem that can be expressed within that system can be proven or disproven. But these systems are generally too simple to be of any use.

    If you don't fix a particular proof system, then things are fuzzier. Anything is provable, if you pick the right proof system. (To prove formula F, create a proof system with F as its only axiom.) It all comes down to what you choose as your axioms and your underlying logic. Mathematics will always have this issue at its foundations--mathematicians simply decide what they think the axioms should be. Whenever a mathematician proves a theorem, what they're really doing is saying, "If you believe this, this, and this, then you should also believe my theorem because..."

    For example, when you learn the Mean Value Theorem, what you're really learning is that if you assume certain axioms, then the Mean Value Theorem is true. (In particular, if the axioms of Zermelo-Fraenkel set theory are true, then the Mean Value Theorem is also true. There are much simpler systems that can also prove the Mean Value Theorem.) Most mathematicians couldn't even tell you what the axioms of ZFC are, though, and this isn't as bad as it sounds. Foundational issues, like what axioms sit at the very bottom, don't have as much effect on the bulk of mathematical practice as people might think. When Russell produced his famous paradox, a few mathematicians scrambled to come up with a new foundation, and a few new systems were proposed, but 2+2=4 was not at much risk, and neither was the Mean Value Theorem. This is because the foundations are not chosen arbitrarily--they are chosen to capture, as simply as possible, what mathematicians see as mathematical reality. This has never changed overnight, but it has slowly evolved over time (and there's no consensus). Perhaps mathematicians of the future will take it as intuitively obvious that the continuum hypothesis should hold, and it will become common to assume that it's true.

  • On a related note, can quantum computers solve NP complete problems in P time?

    I believe this is still an open question. If it's been resolved, then whoever solved it did so recently or didn't do a good job of getting the word out. I think the general belief, though, is that it's unlikely that quantum computers can do NP-complete problems in polynomial time. (Although, as you probably know, Peter Shor demonstrated an algorithm to factor numbers in polynomial time on a quantum computer.)

    As for the nearby comment that quantum computers challenge the Church-Turing hypothesis: don't count on it. Turing machines can simulate quantum computers, albeit with an exponential slowdown. In other words, when it comes to computability, quantum computers can't do anything that Turing machines can't--they just do some things faster.

  • According to Kolmogorov complexity, random means that it can't be described in a shorter way. But it is also 'described' by the Turing machine that is analysed and the Turing machine computing the number, which need not be of infinite length.

    The difference here is in what is allowed as a description. The Omega number for a particular universal self-delimiting Turing machine M has a short description: it's the sum of (1/2)^length(x) over all inputs x on which M halts. But this description is not particularly explicit (specifically, the formula that tells you whether a particular digit is 1 has an existential quantifier on it, so truth values can't be computed effectively). Kolmogorov complexity involves effective descriptions: codes for programs that tell you what some initial segment of Omega is. So there's no inconsistency here.

    Omega's big property is that as N grows, the length of the shortest program needed to output the first N digits of Omega is roughly N. In this sense, Omega is incompressible. This is (very loosely) accomplished by hashing together 2^n bits of the halting problem to get the nth digit of Omega--so even if the halting problem (encoded as a sequence of 0's and 1's) is compressible, any statistical bias or weak pattern gets wiped out when you mash the halting problem down like this.

    I have to say, though, that I don't agree with the article's gloomy assessment of mathematics. "He shattered mathematics with a single number"? It makes a dramatic magazine article, but it doesn't ring true. (You should be warned that I'm an evil logic student, though!)

  • You're right, but the Church-Turing hypothesis doesn't make any claim about runtime, and the definition of Turing machine does not include the restriction that the machine must stop in polynomial time--if it did include that restriction, the Church-Turing hypothesis would definitely fail. When it comes to computability, you can't do anything with a nondeterministic Turing machine that you can't do with an ordinary Turing machine. Perhaps you can do it faster, but what's computable doesn't change.

    P is the class of sets that are computable by a Turing machine that only runs for a polynomial length of time, and NP is the class of sets that are computable by a nondeterministic TM running in polynomial time. A set X is NP-complete if it's in NP and any other set in NP can be reduced to it in polynomial time (by a deterministic Turing machine). Both P and NP, though, are proper subsets of the class of computable sets--those sets for which there's a Turing machine (that can use as much space and time as it wants) that will, given any string, eventually tell you if that string is in the set or not.

    The Church-Turing hypothesis just says that Turing machines capture the intuitive notion of "computability". (Of course, the rigorous definition of computability is in terms of Turing machines, so the Church-Turing hypothesis really says that Turing machines are the "right" definition of computability. Fundamentally, it's subjective, although not that subjective: any reasonable notion of algorithm that has ever been produced has been reduced to Turing machines.) It doesn't say that Turing machines are necessarily efficient.

  • by Lamesword ( 14857 ) on Sunday March 18, 2001 @05:32PM (#355189)
    The definition is oversimplified in the lecture transcript. (I couldn't read the article.)

    What you do is you fix a self-delimiting universal Turing machine M. This is a machine that takes its input, interprets it as another Turing machine, and simulates that other machine. Self-delimiting here essentially means that if it interprets "100011101" (or some other string) as a program, then it won't interpret any extension of that string as a program. In particular, if M halts on input "100011101", it won't halt on any extension of that string.

    Define Omega_M (the halting probability of M) to be the sum of (1/2)^(length(x)) over all inputs x on which M converges. Because M was self-delimiting, this series will converge to some number between 0 and 1. (You can prove by induction on n that the sum restricted to x of length <=n is bounded by 1.)

    This number depends on your choice of M, but that's no big deal.

    So, to address your question a little more directly, we're calculating this probability by averaging over infinitely many Turing machines (as inputs to our universal Turing machine), and we're doing this by weighting the Turing machines with short codes more heavily--Turing machines of length n get weight (1/2)^n, and the self-delimiting nature of our universal TM makes the sum of these weights converge.
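
    To make that concrete, here is a toy sketch of the approximation-from-below (my own illustration; simulate(x, t) is an assumed bounded simulator for the chosen universal machine M):

        # Lower bound on Omega_M: run each program x for t steps and
        # add 2^-len(x) for every one that has halted so far.
        def omega_lower_bound(programs, simulate, t):
            total = 0.0
            for x in programs:  # assumed prefix-free set of bit strings
                if simulate(x, t) == "halted":
                    total += 2.0 ** -len(x)
            return total  # never exceeds the true Omega_M

    Raising t only ever adds terms, so the estimates creep upward toward Omega_M -- but, as noted elsewhere in the thread, there is no computable way to tell how close you are.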

  • To moderators who moderated as insightful:

    This is a joke,
    it refers to the (very funny, imho) "Hitchhiker's guide to the galaxy" books,
    and the earlier story about the gunzipped DeCSS prime number.



    ---
  • I think that the randomness in this proof can be seen as a sampling device, allowing one to avoid having to examine countably many "programs". In that case any sufficiently good pseudo-random sequence would work as well, with perhaps a much larger sample size.

    So the non-existence of true randomness would have no effect on the result.

    OTOH, if you wanted to argue that the concepts of the infinite, including the countable, were invalid, then I'd say you might have a point. But then I've always been in favor of limiting the integers to, say, the cardinality of the power set of possible energy states of the universe. Some other maximal integer may be more appropriate. The question then would be, "Do you get an overflow error, or does it wrap around?"


    Caution: Now approaching the (technological) singularity.
  • 2 + 2 != 5 ?

    Ever count clouds?
    2 + 2 == 4 by definition. If it isn't true, we don't consider it integer arithmetic.

    So there's no chance of it being thrown out.


    Caution: Now approaching the (technological) singularity.
  • So this seems to me like an inconsistency in his theory. Since I'm not at all a genius mathematician, that must point to something that I misunderstood. The omega number for a non-halting Turing machine (for which it can't be proved that it halts) is a pure random sequence of bits, and of infinite length (right?). According to Kolmogorov complexity, random means that it can't be described in a shorter way. But it is also 'described' by the Turing machine that is analysed and the Turing machine computing the number, which need not be of infinite length. My question: how do these theories combine? Or where is my logic error?
  • So this becomes a specific real number, and let's say I write it out in binary, ...

    And the truly amazing thing is that this binary number is the gzipped DeCSS code.

  • That list left out some of my favorites! What about:
    The Montessori Method
    The Fibonacci Sequence
    The Zimmerman Telegram
    The Ostend Manifesto

    (with an assist from an old LA Times Sunday crossword)
    --

  • It was well done, though.

    --

  • But it was oddly prophetic

    Of what?

    --

  • You need to add headers to this, which makes all strings somewhat larger.

  • The implications of the Goedel theorem have been well known for a long time, yet there is an intriguing thing about the work around the Omega number: to me it is the first time that someone has studied the properties of the origin of uncertainty in such a practical manner. Actually, Goedel's and Turing's work remained a way to say what could not be done, used marginally to state that something is impossible, but I never saw a formula that tells about non-knowledge.
    Perhaps you will be able to write formulas that resemble the formulas of certainty (anything we currently write down, folks!) but the presence of the omega number will turn them into the exact opposite. Odd, isn't it?
  • Disclaimers:
    - New Scientist site is *still* /.'ed, so I haven't been able to read the article
    - Not really a "math person"

    What do you mean by mathematical truths? If you mean theorems, aren't the theorems provable as consequences of particular assumed axioms? If you mean the axioms themselves, well, then mathematics isn't an especially special case is it? Isn't any system going to have to have certain fundamental axioms you take as true, that aren't proved?

    I'm not sure I follow how you're tying your first philosophical point about the seeming discrepancy between mathematics and the physical sciences (or what Quine calls the double standard in ontology) with Chaitin's findings.
  • Just make sure your Turing Machine is running Windows 95.

    Prob(halting) = 1.0

    It's very easy to encode!
  • I understand that Chaitin's work builds on Gödel's work in interesting ways -- so it's not just a "restatement" of Gödel's work -- but you're right that it hasn't "thrown some of the basic foundations of math into question."

    (For those who don't know, Gödel proved that there are some mathematical hypotheses that can't be proven true or false. That was very surprising, but it didn't call into question any theorem which has been proven true. The only thing which could really throw the foundations of mathematics into an uproar would be to show that some hypothesis can be proven to be both true and false.)

  • Obviously anyone could invent a formal system which generates inconsistencies, and if you did, no one would care. But if one of the formal systems in common use -- say, algebra or analysis -- were found to generate inconsistencies, I think "uproar" would not be an inappropriate term.
  • Rules of thumb, stereotypes, crude approximations. Simplifying to cope with reality. The air temperature versus the exact momentum of each air molecule.
  • The weird part is that between any two irrational numbers, there is a rational number. And there are strictly more irrational numbers than rational numbers.
  • If I recall correctly, Aleph-Zero is the same as the omega number...

    Different omega. And in Cantor's transfinite cardinality theory aleph-null is not the same as omega; aleph-null is a cardinal, omega is an ordinal.

    Cheers,
    Greg

  • Goedel proved back in the 30's that there were many things (an infinite number?) which were true but for which proofs cannot be provided. OTOH, Chaitin is a well-known mathematician (in some circles, anyway). Presumably he has something interesting to say, but I doubt it's as revolutionary as the post makes it sound.

    Oh, I dunno. I'd say that it's the equivalent of Quantum Mechanics, but for math.

    Think about it... the Omega number puts a limit on the accuracy to which we can know mathematical theorems... Maybe it's the equivalent of the Heisenberg Uncertainty Principle?

    ... well, maybe. :)

    That might actually be a good way to use the Omega number... build it in, turn everything to probabilities. *sigh*

    Simon
  • I wrote:
    "Think about it... the Omega number puts a limit on the accuracy to which we can know mathematical theorems... Maybe it's the equivalent of the Heisenberg Uncertainty Principle?"


    The Heisenberg Uncertainty Principle is a mathematical theorem.

    Uhh... which is why I said maybe it's the equivalent of the Heisenberg Uncertainty Principle. Certainly, the HUP is a mathematical theorem, but it basically states that for a given system, with orthogonal states in Hilbert space, you can't get absolute information about both of those states. In terms of quantum mechanics (which is the only segment of physics where the uncertainty principle is known to hold), the Hilbert space is made up of momentum and position.

    It's a physical theorem. What I'm talking about is a wider application of this to cover general maths.

    Simon
  • Actually, there are more irrational numbers than rational ones. And, moreover, between any two rational numbers there exists at least one irrational number.

    The irrational numbers that you mention are drops in the bucket of irrational numbers.
  • >Does the lack of compressibility derive from its unknowability?

    IANAM, but I think they may be two aspects of the same thing -- the thing can be known as itself alone and not in any kind of shorthand form. Each bit needs its own algorithm to derive.

    It isn't about feeding a random file into gzip and sometimes (usually) getting a larger file out.

    It's about feeding a specific number into ANY compression algorithm you could create and ALWAYS getting a larger file out.

  • I think this is more of a thinking-through-the-implications kind of thing. We know in principle that systems are incomplete and that adding new facts either makes them inconsistent or fails to complete them, but that's only because we eventually must swallow the poison pill of Godel's theorem. The question is, just how incomplete are they?

    Maybe for any number N we could devise a system that generates the majority of true statements. Perhaps a mathematician could correct me, but it seems like omega is somehow linked with knowing what fraction of statements can be shown to be true in a finite number of steps. If this is the case, then we can't really know the bounds of our own ignorance.

  • Exactly so!

    Math is a bunch of islands (number theory, topology, ...) of theory that aren't necessarily related.

    In fact, usually when someone such as Andrew Wiles DOES build a bridge between a couple of these islands, we herald that as a rare triumph!

    Also, as others have stated, it's a bit late to be worried about Godel's incompleteness theorem... kind of old news!
  • Real computers don't just perform finite computations, doing one or a few things, and then halt. ... "Many computer applications are designed to produce an infinite amount of output," Becher says. Examples include ... operating systems such as Windows 2000.

    Oh, come now. I think we all know the answer to the halting problem for Windows.

  • "You [have] to specify a set by listing its members, not just by stating a property that all its members had, so "the set of all sets which do not contain themselves" became an invalid definition of a set). Small inconsistency, none dead."

    So, the integers aren't a set? I guess the integers are countable, but it is still impossible to "list" all of them. The reals aren't even countable...

    While yes, you can get around Russell's paradox that way, you've weakened set theory to the point where it can't do everything that the old set theory could do (in fact, it can hardly do anything).

    In *real* mathematics, you *can't* get rid of this inconsistency. That's what Godel proved in his Incompleteness Theorem.
  • "I am sure there is a way to calculate the probability that a Turing machine will halt on random input, if that is what this problem is really about."

    This is an uninformed statement, and an irrelevant one. It is uninformed because the Halting Problem says that it's impossible to know whether a TM will halt for a given input. Since probability is the number of "successes" over the total number of trials, it can't be computed unless successes can be found. It's also irrelevant because Chaitin was talking about random TMs, not random inputs to a given TM.

    "The solution might be extremely complex, but I doubt it is impossible. "

    I believe Chaitin *proved* it was impossible. You can't just doubt that - you would have to show that his proof is bad. But given that you can't even be bothered to read the article, I don't know how you would manage that.
  • Every Turing machine either halts or doesn't halt, so if only you could solve the halting problem you could get a good approximation to W_UTM in a moment. Since you can't, W_UTM is unknowable.
    In other words, W_UTM is non-computable? So what? There are lots of non-computable numbers. For example, take the irrational number in which each binary digit n is equal to 1 if TM(n) halts, 0 if it doesn't. An algorithm that "compressed" that number would also solve the halting problem; therefore, it does not exist.

    So, explain to me again why this particular non-computable number is special?
  • If the two together are smaller than the original algorithm, then it is compressible. This obviously isn't always the case, but is certainly true some of the time.
    A funny thing occurred to me about compression, just last night... If you fed random files to a compression program like gzip, the vast majority of the resulting archive files would be larger than the originals. See, assuming that gzip operates deterministically, a particular archive file will always generate the same output when decompressed. That means that there must be at least one unique compressed file for every unique uncompressed file. That means that over a sufficiently large set of files, the average compression ratio achieved by any lossless compression algorithm can never be better than 1:1. (Hmmm... I actually thought I'd proved that it must be worse than 1:1, but I can't remember how I did that.)

    Anyway, fortunately for us, the files that gzip is able to shrink successfully tend to be the files that we're interested in. However, when you're talking about exotic pseudo-random numbers, that is not the case.
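
    That pigeonhole argument is the proof, but it's also easy to check empirically with Python's gzip module (a quick experiment of mine, not from the poster):

        import gzip, os

        # Compress 100 random 10 KB blobs and count how many shrink.
        smaller = 0
        for _ in range(100):
            data = os.urandom(10000)  # random bytes: incompressible on average
            if len(gzip.compress(data)) < len(data):
                smaller += 1
        print(smaller, "of 100 random files got smaller")  # almost always 0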
  • > THERE ARE ALTERNATIVE MATHEMATICAL PHILOSOPHIES!

    Conveniently priced at $146.50 [barnesandnoble.com] ... for suckers. "Logic not necessary! Give me $150.00 and I'll show you why!"

    - - - - -
  • You could always use HTML character entities: for example, omega would be entered as &omega;, which appears as ω.

    This works in any browser compliant with (I think) HTML since version 3, and XHTML 1.0. It at least works in Mozilla 0.8; I can't speak as to other browsers.
  • Not quite: quantum theory in its simplest form is built on the idea that no matter how good our instruments, the more accurately we read the exact circumstances of an event, or small system, the more we alter it.

    It's not just our measurements. The Heisenberg Uncertainty Principle (which is what you're talking about) is a statement about the universe, not our ability to measure it. More and more, it is apparent that this uncertainty is not because our instruments are too crude, but that this uncertainty is inherent in nature. You cannot measure well defined velocities and positions of particles on the quantum scale because they do not have well defined positions and velocities.

    Einstein had a hard time with that notion--that we live in a non-deterministic world, that chance is built into the universe--but that is the case.

    The real power of quantum physics is that it gives us the means of treating these systems statistically without having to look at them on an individual basis.

    Systems of particles are really thermodynamics. In quantum mechanics, you can (and do) pay attention to a single particle.

  • Other than the fact that the intelligent content around here approaches zero at times.
  • Is used to denote the cardinality of the set of real numbers. One of my favorite theorems in maths is 2^N = omega, where N is the cardinality of the natural numbers.
  • I believe the poster's point was that the members of the set of _significant_ irrational numbers (i.e. those that occur in "fundamental" mathematical proofs) are mostly on the order of magnitude of 1. But this itself might just be one of those random, proof-averse facts that this theorem theorizes about. Enough to give me a headache in any case.
  • by Fnkmaster ( 89084 ) on Sunday March 18, 2001 @04:17PM (#355245)
    I believe that's a big part of the point of the theorem:

    You can get it in the limit from below, but it converges very, very slowly --- you can never know how close you are --- there is no computable regulator of convergence, there is no way to decide how far out to go to get the first N bits of W right.

    So it converges, but you can never really know how close you are. :) Or something along those lines.
  • Chaitin has quite a lot of stuff on his homepage: http://www.cs.umaine.edu/~chaitin/ [umaine.edu]

    Some entire book texts there, etc.

    Quite difficult stuff, even for a CS major. Having at least a familiarity with automata and formal languages is recommended, although that is still far too little.

    There's some quite weird stuff in some of his books. I can't say I would recommend reading his stuff without a healthy sceptical attitude...

  • He put a random number generator in the definition of the thing, so is it really surprising that the output is random?


    I'd say yes. Take the flip of a coin, a very basic random number generator. You know what the probability is of it being heads if it is a fair coin. It's 0.5.

    He's saying that he can't find the actual probability that any given program will halt or not. He's saying that probability is 'maximally unknowable'.
  • by molo ( 94384 ) on Sunday March 18, 2001 @01:29PM (#355253) Journal
    I don't have a big formal math background, but I think i was able to pick up what he says in the lecture transcript.

    The interesting point of the matter deals with Turing machines and the halting problem. If you have a well defined Turing machine, it will either halt or not depending upon its input (the program). Turing's idea was that you can't determine beforehand whether a given program will halt (for all possible programs). That is, the only guaranteed way to see if a program halts or not is to run it. If it halts in the time you observe it, good. If not, then will it halt in n+1 time? Unknown.

    Chaitin defines W as "the probability that a program generated by tossing a coin halts." And he says that this W will be a real number between 0 and 1 that is well-defined. He says once you define the language of the Turing machine, W becomes well defined. He then claims that W is 'maximally unknowable' - that is, it is irrational like PI and e, having no mathematical structure. But it is not just irrational; he says that you can't generate W from a formula the way you can e or PI.

    You can get it in the limit from below, but it converges very, very slowly --- you can never know how close you are --- there is no computable regulator of convergence, there is no way to decide how far out to go to get the first N bits of W right.

    He also claims that W is 'irreducible information' - it cannot be compressed because it is truly random.

    From here it gets pretty complex, but my understanding of it is that this introduces true randomness into pure mathematics, which people think shakes things up quite a bit. He compares it to the introduction of quantum mechanics into Physics.
  • I don't see why you can't use Monte Carlo methods to estimate Omega. Generate a sufficient number of random inputs, assume that a program halts if and only if it halts by some time T, and you can get an increasingly good approximation to Omega by increasing T and increasing the number of random inputs.
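
    A minimal sketch of that estimator (my illustration; run(program, T) is an assumed T-step bounded simulator for the chosen universal machine):

        import random

        def estimate_omega(samples, max_len, T):
            halted = 0
            for _ in range(samples):
                bits = "".join(random.choice("01") for _ in range(max_len))
                if run(bits, T) == "halted":
                    halted += 1
            return halted / samples

        # Bias: any program that halts only after T steps is miscounted,
        # and no computable rule says how large T must be -- exactly the
        # "no computable regulator of convergence" problem quoted above.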
  • I happened to sit in on the lecture at CMU. Certainly Chaitin's results do owe a lot to Godel and Turing, but it's not just a rehash.

    Here's what's interesting to me:

    First, it's mysterious that mathematical truths are applicable to the "real world". This is a philosophical question that people have struggled over for a long time. Why is it that abstract mathematical structures discovered without any reference to physics often later turn out to be useful in physical theories?

    Now consider what Chaitin is saying. Very few mathematical truths have any structure at all. That means that we can't prove the vast majority of true theorems, and if you were to pick a mathematical truth out of the air it is unlikely in the extreme that you could find a proof for it.

    Put these two facts together. Isn't it awfully surprising that mathematics is so successful at describing the real world? Math is full of unprovable truths, and yet we seem to be able to prove a bunch of really useful things.

    Now why should that be? I don't know.

    If you're an optimist, you might say: how lucky! It's a good thing that the universe is structured in a way that's mostly congruent with the provable sections of mathematics.

    If you're a pessimist, you might wonder how long our luck is going to last.

  • Sounds like another good reason to be a constructivist. In constructive mathematics, numbers are defined in terms of a total function which computes them (or, for a "real" number, a function which can get you arbitrarily close to them). None of this "let n = 1 if the continuum hypothesis is true, 0 otherwise" stuff! Constructive mathematics is pretty nice, though some "obvious" stuff is not provable.

    Here's some links:

    http://plato.stanford.edu/entries/mathematics-constructive/ [stanford.edu]

    http://www.cs.cmu.edu/~fp/courses/logic/ [cmu.edu]

    Of course, some classicists find delight in how insanely undecidable their mathematics is, and that's fine, too. =)

    \Omega_{UTM} is a pretty cool idea, though, much worse than the standard trick of defining the number whose digit n is 1 if Turing machine n halts, 0 otherwise (also undecidable, but not as hopeless as his!). I wish I hadn't missed the lecture.

  • Reading about eternal mathematical problems reminds me of a joke I once heard.

    Two balloonists (the balloon with a basket) are lost and decide to land and ask for directions. They find an elderly gentleman strolling through the countryside and land the balloon next to him.
    "Excuse me, sir, can you tell us where we are?"
    The gentleman crossed his arms, scratched his head, rested his chin in his palm and then gently said - "Why, you're in a balloon." - and walked off.
    The two balloonists just shrugged their shoulders and took off again. Once high up in the air, one of the balloonists turned to his friend and said:
    "You know, I think that the elderly gentleman was a mathematician."
    "Really, how can you tell?"
    "First, he was intelligent, because he thought long before answering. Second, his answer was correct. Lastly, his answer hasn't helped us at all."
  • I used to do work with constructive mathematics, using the Boyer-Moore Theorem Prover [utexas.edu]. This is one of the better tools from the early days of AI. Everything is done using recursive functions which must provably terminate. The way you prove that something terminates is to define some integer function which is always positive but becomes smaller on each recursion. If you can do that, the recursion has to terminate. (This works for loops, too, and is part of the basis of program proof.) Discrete mathematics is built up by machine-proving several hundred theorems in order, given a very small set of axioms akin to the Peano postulates.

    This is a very appealing approach to a programmer. There are no universal quantifiers; anything you can quantify over, you can iterate over. This cuts through some of the philosophical problems with more classic approaches. The price you pay is rather clunky proofs by traditional standards; everything has a few cases in it.

    On a related note, the halting problem is formally undecidable only for machines with infinite memory. With finite memory, eventually you either halt or repeat a previous state.
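
    That observation turns directly into an algorithm for finite-state systems -- a minimal sketch, with step as an assumed transition function on a finite state space (returning None once the machine halts):

        # Decide halting for a machine with finitely many states: run it,
        # remembering every state seen; a repeat means it loops forever.
        def halts(state, step):
            seen = set()
            while state is not None:
                if state in seen:
                    return False  # revisited a state: infinite loop
                seen.add(state)
                state = step(state)
            return True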

  • by ozbird ( 127571 ) on Sunday March 18, 2001 @12:58PM (#355296)
    'If mathematicians find any connections between these facts, they do so by luck.'

    And if Slashdot posts a connection to these facts, the mathematician's website is out of luck.
  • What you have just touched on is a philosophical issue. You, for some reason, believe in Platonic idealism: that mathematical concepts exist independently of the mind. Without the existence of humans, you believe that mathematics still exists.

    However, this belief has never been proven. It is nothing more than a belief, just as many people agree that there exists a God. Therefore, just as the belief in the existence of a God turns something into a religion, the belief in Platonic idealism turns mathematics into a religion - rife with all of the problems associated with religions!

    Great mathematicians such as L.E.J. Brouwer argued that such dogmatic beliefs should not be used within mathematics, because they cause horrible foundational problems of paradox, undecidability, and incompleteness. Brouwer went on to establish the mathematical philosophy of intuitionism, and then built an entire mathematical system on top of that. In effect, he created mathematical intuitionism, just as each mathematician creates (or recreates, depending on how you look at it) mathematical concepts in their mind.

    Platonic idealism has been a cancer on the foundation of mathematics for thousands of years. Please stop and realize that Platonic idealism is nothing more than a belief system, and witness how it has partially destroyed mathematics.
    THERE ARE ALTERNATIVE MATHEMATICAL PHILOSOPHIES! [barnesandnoble.com]
  • This could be used to argue against the principle of Occam's Razor (which says that the simplest theory that fits the facts of a problem is the one that should be selected), because science is based on the belief that mathematical concepts can be usefully projected onto the perception of nature. If it turns out that the mathematics used is horribly complex and disconnected, then Occam's Razor could cause a scientist to turn from the truth more often than he/she is turned towards the truth by the principle.

    Note that I use the term "truth" with regards to scientific "truth", realizing that science can never in fact portray any absolute truth, as is the normal definition of truth (i.e. undeniable truth). This is why science has evolutionary mechanisms built in like peer review and disproving old theories.
  • Actually, you couldn't be more wrong. If integer arithmetic cannot be proven to be consistent (free from contradiction), then there is the possibility that you could wake up one day and have 2+2=5. I am not claiming that any of this will happen, because I have no mathematical proof, but the whole problem with the lack of a consistency proof is the problem you have mentioned... waking up only to realize that your math was nothing more than a "Matrix" (in the movie sense), so to speak.

    Time and time again, the philosophy chosen as a foundation of mathematics has been shown to cause huge differences in the actual mathematical system: Bertrand Russell's paradox (a set which contains all sets that do not contain themselves both contains itself and doesn't contain itself), Brouwer's intuitionism (mathematics is complete and undeniably consistent within this philosophy), Platonic idealism (you have foundations of the quality of the foundations of Christianity), formalism (Godel's Incompleteness Theorem says that we can never know if this system is flawed or not), etc...

    Saying that philosophy does not influence mathematics down to the consistency level ignores hundreds of years of mathematical history! Philosophical foundations can lead to actual contradictions. This is why philosophy has an extremely important role in mathematics.
  • Dude, every library has interlibrary loans. You wouldn't believe how cool libraries really are. Working the system is worth it when it comes to libraries. Many libraries take interlibrary loan requests online. So pop a couple requests off, you get an email a week later, and then you pick up the books from your local library. Dope stuff, eh?
  • In the voice of Homer Simpson, "I'm just a man"!
  • Well, most mathematicians would like to have an absolutely, undeniably correct system of knowledge, and not lasting forever would be a problem. Intuitionism solves these problems, but causes problems of difficulty in complex high-level proof, and it causes problems when communicating between two mathematicians. If you ever study Brouwer's intuitionism, you will see why it works, and why it has the mentioned flaws.
  • You can peer review anything, but it doesn't make it a science just because you peer review it. Peer review is not an intrinsic part of most mathematical systems.
  • They cannot be compressed

    Well, we could agree to call it W_UTM. Then all we would have to do to compress it is send W_UTM and people would know what it was.

    Of course the codec would eventually become infinitely large, but as long as our pace of discovering stuff like this doesn't outpace Moore's law, we are fine.

    I'm only semi-serious.

  • Yeah, exactly.

    I don't get that either. What if I claim the number 'turns out' to be exactly 0.5? Can anybody disprove me?

    Salsaman

  • To turn the old saw on its head:

    Two does equal one for sufficiently small values of 2.

    Actually, using the old mathematical parlour trick of showing that 1.99999. . . = 2, one could at least show that 2=1 when rounded to the lowest integer.

    KFG
  • Freedom is the freedom to say 2 and 2 is 4, everything else follows from there.

  • From "The Omega Number", by Robert Ludlum:

    "Do you realize what this means?" Johnson looked at the mathematician worriedly. "I have to report this to the CIA. I'm sorry."
    "But why? What does this have to do with national security?" asked Thomas.
    "I can't tell you. In fact, it's--" Suddenly a shot rang out, and Thomas watched in horror as the Dean of Mathematics slumped forward, a surprised look on his face. He caught Johnson in his arms as half a dozen more shots were fired into the office, and dragged him frantically behind a desk.
    He looked down and saw that the shirt was red. That was bad. Then he saw that the redness was spreading. That was very bad. The shots stopped, but Thomas' ears kept ringing.
    "The...Omega number..." gasped Johnson.
    "Don't talk! Save your strength!"
    "I'm dead...anyway...you have to...tell the CIA...can't let...the Soviets...know...about the hole...the weaknesses...in our mathematical...model..."
    The dean stopped, gave a pitiful little gasp, and went limp in Thomas' arms.

    It's not his best work by any means. But it was oddly prophetic.

  • Just gunzip the hex representation of the Omega number.

    --
  • "Disagreeing with the assumptions...."

    Mathematics doesn't really have the kind of assumptions you can disagree with. Two, Four and plus are not the same things in mathematics that we may call two and four in the outside world. Two is by definition the successor of the successor of 0, and four has a similar definition. Therefore the conclusion that two and two is four follows inevitably from the definitions of two, four and plus, because by definition these are the things obeying the appropriate axioms. (This may be confusing because we actually use the same words to refer to real-world concepts and to concepts in various axiomatic systems, which aren't always the same thing.)
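
    As a toy illustration of "two is by definition the successor of the successor of 0" (my own sketch): encode numerals by the Peano rules and let the definitions grind out two plus two.

        # Peano naturals: zero is Z, n+1 is S(n). Addition is defined by
        # recursion: a + 0 = a, and a + S(b) = S(a + b).
        Z = ("Z",)
        def S(n): return ("S", n)

        def add(a, b):
            return a if b == Z else S(add(a, b[1]))

        two  = S(S(Z))
        four = S(S(S(S(Z))))
        print(add(two, two) == four)  # True, straight from the definitions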

  • He does NOT mean that every theorem out there may be riddled with logical gaps. He is not questioning the validity of the proof or the theorem. He is rather pointing out that there may be many true relations which are unprovable.
  • Yes, it's been known for some time (I studied it in class two years ago, so it must have been around for a good deal of time before then).

    His stuff is certainly interesting, and his results about the omega number are bizarre but you are right it isn't THAT revolutionary. Once you accept the results of Godel's theorem the fact that you can somehow concentrate all that unprovability in one place drives the strangeness home but isn't fundamentally upsetting.
  • Err... I wish that link hadn't been broken. Evidently this is what he was saying, even though the other link made it seem a bit different.
  • by Syllepsis ( 196919 ) on Sunday March 18, 2001 @01:50PM (#355355) Homepage
    I think that based on the lecture notes, the New Scientist is just trying to make a sensational article out of a nice lecture on a few of the more surprising highlights of 20th century math.

    First of all, the statement that mathematical theory is riddled with holes is questionable. Finding an unprovable statement is a rarity that happens once every so often. Most things can either be proven or separated out as an axiom (such as the Continuum Hypothesis). Granted, every formal algebraic system is going to have at least one unprovable theorem, but no one has attempted to count out the percentage of theorems that are provable.

    The New Scientist is claiming that the percentage of provable theorems is small compared to the number of theorems in any given system. This is akin to the idea that the rational numbers are "sparse" on the real number line. Such a statement about a formal system, such as the ZFC axioms of set theory, needs to be proven.

    It would be an interesting study to Godelize ZFC in an intuitive way and use automated theorem proving to see what percentage of statements of a given length are theorems, and what percentage of those could be proven with proofs of less than a certain length. Then by asymptotic analysis one might be able to make a statement to see if "most" theorems could be proven.

    Such a study would be similar in method to Graphical Evolution, but would require quite a bit of supercomputer time. Even then, some really difficult proofs would have to be made. However, one does not know if the statement is provable :)

  • by m51 ( 255152 ) on Sunday March 18, 2001 @01:06PM (#355380) Homepage
    As things get circulated like this in spells every now and then, it becomes time to recirculate an important theme: philosophical problems do not equate to mathematical inconsistencies. By standards of purely mathematical order, there aren't holes such that you might wake up tomorrow to discover that 2+2 suddenly equals 5. ^_^ A parallel to this type of discussion can be given from quantum mechanics: the Schrödinger's Cat paradox. While it does present serious philosophical and logical problems, what it does not do is poke any actual holes into quantum mechanics. Anyone particularly interested in this topic should check out the work of Godel, who did some very intriguing work earlier in the 20th century.
  • I followed your link. Here's what I found:

    bn.com customers who bought this book also bought:
    Programming in Visual Basic: Version 6.0

    I rest my case!
    --
    #include "stdio.h"
  • Right. And we can compute W_UTM with a SAFEARRAY of VARIANTs.
    --
    #include "stdio.h"
  • Actually, the complete set of real numbers can be mapped between any two rational numbers, or any two real numbers for that matter.
  • This reminds me of that cartoon where all these equations are on the left and on the right, but in the middle is a box with "then some magic happens" written in it.

    I understand your argument that W_UTM is "unknowable" (non-computable, anyway--surely GOD knows it). And I understand how to write a number in binary. Then we hit this: "Well, it turns out these 0's and 1's have no mathematical structure. They cannot be compressed."

    Yeah the old "it turns out" argument. I guess he's leaving that part up to the students? Or maybe the proof is "trivial"? I can understand leaving out details, but for crying out loud this is the whole POINT of the research. WHY don't they have structure?

    In any case, I still say it's unsurprising. I would have been skeptical of a claim (Godel's) that there are an infinite number of theorems without there ALSO being an infinite source for those theorems to theorize about.
    --
  • by BillyGoatThree ( 324006 ) on Sunday March 18, 2001 @12:30PM (#355398)
    As someone else mentioned, this sounds like a pretty simple application of an (admittedly difficult) earlier result by Godel: Given a formal system of "sufficient power" there will always be theorems expressible but unprovable in that system. (And if you add the "missing theorem" to the system, the resulting system has the same "problem", ad infinitum)

    Considering that Godel's stuff came out in 1931, and if this work really IS just a restatement of that fact, then I doubt the "foundations of mathematics are in an uproar".
    --