Math AI Science

Number Theorist Fears All Published Math Is Wrong (vice.com)

An anonymous reader quotes a report from Motherboard: Kevin Buzzard, a number theorist and professor of pure mathematics at Imperial College London, believes that it is time to create a new area of mathematics dedicated to the computerization of proofs. The greatest proofs have become so complex that practically no human on earth can understand all of their details, let alone verify them. He fears that many proofs widely considered to be true are wrong. Help is needed. What is a proof? A proof is a demonstration of the truth of a mathematical statement. By proving things and learning new techniques of proof, people gain an understanding of math, which then filters out into other fields.

To create a proof, begin with some definitions. For example, define a set of numbers such as the integers, all the whole numbers from minus infinity to positive infinity. Write this set as: ... , -2, -1, 0, 1, 2, ... Next, state a theorem, for example, that there is no largest integer. The proof then consists in the logical reasoning that shows the theorem to be true or false, in this case, true. The logical steps in the proof rely on other, prior truths, which have already been accepted and proven. For example, that the number 1 is less than 2. New proofs by professional mathematicians tend to rely on a whole host of prior results that have already been published and understood. But Buzzard says there are many cases where the prior proofs used to build new proofs are clearly not understood. For example, there are notable papers that openly cite unpublished work. This worries Buzzard.
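The kind of formalization Buzzard advocates makes such a proof machine-checkable. As a minimal illustrative sketch (mine, not Buzzard's, and assuming a recent Lean 4 toolchain where the omega tactic for linear integer arithmetic is available), the summary's example theorem looks like this in Lean:

    -- "There is no largest integer": every integer n has a strictly larger one.
    theorem no_largest_int : ∀ n : Int, ∃ m : Int, n < m :=
      fun n => ⟨n + 1, by omega⟩  -- witness n + 1; omega proves n < n + 1

A proof checker accepts this only when every step follows from the axioms, which is exactly the guarantee Buzzard wants for proofs too large for any human to verify.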
"I'm suddenly concerned that all of published math is wrong because mathematicians are not checking the details, and I've seen them wrong before," Buzzard told Motherboard while he was attending the 10th Interactive Theorem Proving conference in Portland, Oregon, where he gave the opening talk.

"I think there is a non-zero chance that some of our great castles are built on sand," Buzzard wrote in a slide presentation. "But I think it's small."
    • Wonderful (Score:5, Funny)

      by nsuccorso ( 41169 ) on Thursday September 26, 2019 @10:48PM (#59241544)
      Slashdot Poster Still Hasn't Learned Not to Trust Slashdot's Mobile Interface
      • by skovnymfe ( 1671822 ) on Friday September 27, 2019 @01:43AM (#59241804)

        You're right, though. G is entirely based on a speculative unit of measurement called "time", or more specifically, "the observable interval between two things happening".

        There's no one who dares to ask the question "are we basing all of our physics understanding on something we can only observe and measure using the observation we just made?" And the answer is yes, yes we are. Every standard unit is based on time and time is based on "however long it takes a Cesium atom to do a thing". So if Cesium ever decides to act up, we're all fucked. The universe will literally stop behaving according to the laws of physics as we're used to it.

      • He is talking about Graham's Number [wikipedia.org]

  • It doesn't even matter
    • Comment removed (Score:5, Interesting)

      by account_deleted ( 4530225 ) on Friday September 27, 2019 @03:09AM (#59241946)
      Comment removed based on user account deletion
      • " without ANY knowledge of the original pattern used to come up with what letter swaps with what."

        Not quite, you must know the language of the original message otherwise your frequency analysis and cribs are worthless.

        • " without ANY knowledge of the original pattern used to come up with what letter swaps with what."

          Not quite, you must know the language of the original message otherwise your frequency analysis and cribs are worthless.

          Actually, no. Given a sufficiently-large corpus of enciphered material, nearly all wrong substitutions will produce an output that is approximately uniformly-distributed. A small subset of these, including the correct one, will generate non-uniform decipherments. It's then possible to examine the resulting candidate decipherments to identify the correct one even without a priori knowledge of the language, and without any cribs.
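          For readers following along, the frequency analysis both posters refer to starts by building a letter histogram of the ciphertext. A minimal sketch in Python (a toy illustration, not a full cryptanalysis tool):

              from collections import Counter

              def letter_frequencies(text):
                  # Relative frequency of each letter, ignoring non-letters.
                  letters = [c for c in text.lower() if c.isalpha()]
                  counts = Counter(letters)
                  return {c: n / len(letters) for c, n in counts.items()}

              # A spiky histogram (like English's e, t, a peaks) marks a candidate
              # decipherment worth a closer look; a flat one suggests a wrong guess.
              print(sorted(letter_frequencies("WKLV LV D VHFUHW PHVVDJH").items(),
                           key=lambda kv: -kv[1]))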

      • by swillden ( 191260 ) <shawn-ds@willden.org> on Friday September 27, 2019 @11:50AM (#59243242) Journal

        Probably a good bit of cryptography depends upon certain things being true, for example.

        Not really. The asymmetric primitives (public/private key primitives) we currently use are based on number theoretic constructs, but they are based on very old unsolved problems, none of which are provably unsolvable, as yet -- though we have proved that some of them are equivalent to one another in the sense that if we can solve one we can solve them all. The hash functions and symmetric primitives we use have little or no theoretical basis, and keep in mind that these are the actual workhorses of cryptography, the ones we apply to actual data, unlike the asymmetric primitives which are generally applied to symmetric keys or hash outputs. We have some ideas about what seems to make them strong (mixing, diffusion, substitution) and we know various ways of breaking them (e.g. linear cryptanalysis, differential cryptanalysis) so we make sure that they aren't vulnerable to those techniques. We test their vulnerability mostly by simplifying the primitives (e.g. reducing the number of rounds of repetition) until our techniques do break them. Then we view the differences between the simplified versions of our primitives that are broken and the full versions as the "margin of security".

        There are a lot of security proofs in modern cryptography, but they don't depend much on the sophisticated mathematics Buzzard is talking about. And, actually, they aren't really "proofs". Or, rather, they're proofs but they don't prove what we want them to. Cryptographic proofs always begin with information-theoretic assumptions that are analogous to "assume a spherical cow...", in which complex and poorly-understood components are assumed to have idealized properties. The most common example is the Random Oracle model, in which a hash function is assumed to be a truly random function. This is obviously not true -- and note that I'm not saying cryptographers believe it to be true, they understand perfectly well the limitations of their proofs. It's useful because it's the closest we can get to an actual proof, and it does provide some level of confidence that we wouldn't otherwise have. But at the end of the day they're proofs about idealized things which aren't the ones we actually use.

        It's not impossible that there's something in all of the uncertain work Buzzard is pointing at that could affect the security of our cryptosystems, but I think it's very unlikely. I think it's somewhat more likely that they could affect the security of the quantum-resistant primitives that are now being created, but that may just be because I'm looking at two piles of things and assuming they may be related because they share the property that I don't understand either of them.
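        The reduced-round testing described above is easy to picture with a toy: a Feistel cipher whose round count is a parameter. This is a hypothetical sketch for illustration only, not a real primitive:

            def toy_round(half, key):
                # A deliberately weak round function (illustration only).
                return ((half * 2654435761) ^ key) & 0xFFFFFFFF

            def feistel_encrypt(block, keys, rounds):
                # Encrypt a 64-bit block with a Feistel network of `rounds` rounds.
                # Cryptanalysts attack small `rounds` first; the gap between what
                # they can break and the full round count is the margin of security.
                left, right = block >> 32, block & 0xFFFFFFFF
                for i in range(rounds):
                    left, right = right, left ^ toy_round(right, keys[i % len(keys)])
                return (left << 32) | right

            # Ship with, say, 32 rounds; if 10-round versions fall to analysis,
            # the remaining 22 are the margin.
            print(hex(feistel_encrypt(0x0123456789ABCDEF, [0xA5A5A5A5, 0x5A5A5A5A], 32)))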

  • ... much of how we approach math is at too high a level of abstraction. AKA all of the 'math' we are doing is literally cognitive devices to help with nature's book-keeping when trading and trying to measure things in the real world. Abstract "proofs" are great. But I think the whole idea of "proof" is a sham in a sense, in the sense that we aren't "proving" anything; we've invented mathematical worlds, the base units of which are 'embedded' in the universe.

    Science has disco

    • You are on to something for sure. I agree, and I have often puzzled at how people learn or understand new things by saying they are like something else they think they understand. Using metaphor to grasp new ideas or concepts. It's easy to become comfortable with metaphoric understanding that is all completely unprovable or just plain wrong. In many areas at many levels of intellect. When I'm puzzled by something new and the light bulb comes on because someone says: "it's like such and such" I have to remi
    • by Dorianny ( 1847922 ) on Thursday September 26, 2019 @11:06PM (#59241568) Journal
      The problem is that people often confuse "mathematical proof" with being very sure. In every other field, if you are 99.9999% sure you have "proof", but in math that counts for nothing at all.
    • by rtb61 ( 674572 )

      I go with simulations as being the best way to use math: simulating the real world under specific variables, rather than quick and dirty math formulas.

      Thought as a simulation. With regard to the application of maths, the concept is treating all problems as simulations rather than as individual mathematical formulae: a series of mathematical formulae working in conjunction to accurately represent the problem as continuous calculation, altering the variables of those calculations to produce results.

    • by sg_oneill ( 159032 ) on Friday September 27, 2019 @12:17AM (#59241676)

      What the hell has some wingnut conservative video of a random guy complaining about "reason" got to do with science?

      Science has made no such claim that 'we are already at too high a level of abstraction in our "normal" thought'. That's absurdist nonsense that somehow manages to take an is-ought fallacy and mangle it into a tautology. It's the sort of talk educationally delayed antivax moms make, not scientists.

      Nobody working with math thinks that math is reality. That idea died with Pythagoras. But abstraction is both a valid and historically useful tool for getting shit done.

      And anyway, this has nothing to do with the actual issue raised, which is that proofs have gotten too numerous and complicated. And that's not "abstraction", it's just plain old math.

    • by Kjella ( 173770 ) on Friday September 27, 2019 @12:31AM (#59241688) Homepage

      AKA all of the 'math' we are doing is literally cognitive devices to help with nature's book-keeping when trading and trying to measure things in the real world.

      It's more like the other way around, math is purely abstract models that may or may not be applicable in reality. Like 2 apples + 2 apples = 4 apples, but when people have sex they often find that 1+1=3. The flaw is that the physical act of reproduction doesn't map to the mathematical operation of addition. We can model having four apples as +4 and owing four apples as -4 and I can put my 4 apples in a 2*2 square so the square root of four is two but what's the square root of apples owed? Not meaningful, because I don't actually have any apples so I can't put them in a square. But if I wrote four notes saying "IOU: one apple" I could. Mathematicians simply define models, it's everyone else trying to map reality to a model.

      We know there are multiple models that depend on the ground assumptions you make; for example, in geometry you have hyperbolic, Euclidean and elliptic [wikimedia.org] geometry depending on whether two lines connected at a right angle converge, diverge or stay parallel. All three are mathematically valid, consistent but mutually exclusive models. But in mathematics that doesn't mean one model is good and the other two are junk. It means that one model of geometry is appropriate if you have a flat piece of paper and a different model is appropriate if you approximate the surface of the earth as a 2D skin on a 3D orb. It's a toolbox; you still have to pick the right tool for the job.

    • When we need to sit down and try to observe correctly what our brain is actually doing first.

      First, it's inverting the image...

      • by Dunbal ( 464142 ) *
        Reverse the polarity?
    • by Bengie ( 1121981 ) on Friday September 27, 2019 @07:53AM (#59242352)
      Let me start by saying I overall agree with the direction your thinking is going, but I will try to raise some concerns about your dismissive conclusions. First off, math is currently the best we've got. It may never be perfect, but don't let the lack of perfection of anything prevent you from making actual forward progress.

      "proofs" aren't really "proofs" but our misunderstanding of the truth

      "Truth" doesn't actually exist. It is purely a philosophical debate. On this note, even facts are context sensitive and open to interpretation, meaning facts are subjective. In short, there is no true perfect notion of "correct".

      You're even falling into your own trap. The word "proof" is an abstract concept in your mind with a seemingly absolute, perfect definition of "this is true". Here's some food for thought. If the concept represented by the word "proof" requires absolute perfection like you're implying, and from practice and experience we know beyond a shadow of a doubt that humans are anything but perfect, then you might say that a "proof" is impossible. But wait... Did I just "prove" that a "proof" is impossible? I hope you will realize, at some point down this philosophical thought experiment of a rabbit hole, that the notion of an "axiom" is a requirement for any form of reasoning. At some point, you just need to blindly accept something as "true" and build everything else on top of that.

      In the end, you are limited by your senses. How do you know the world around you is even real? It really is just your brain trying to make sense of inputs. Even our senses are technically not "real". Color, sound, touch, are all abstract perceptions of raw information that we don't even have access to. I decided for myself that I will trust my perception and reasoning because it's the best that I have access to. In the end, the proof is in the pudding. Using math as an example, look at calculus. Essentially fancy algebra with a notion of a "limit": some abstract concept of approaching a number by getting arbitrarily close to it, but never quite reaching it, because the act of actually reaching it would defeat the purpose. Calculus is undeniably a powerful tool, yet definitely falls into "purely abstract math".

      Here's a fun thought experiment. How do you know the sky is blue? Prove it. You might want to start with the definition of blue, for example. Turns out the way the brain interprets "blue" is by defining it as what it is not. Most humans have 3 primary colors that they can sense: RGB. B = !R&!G; G = !R&!B..... So even the "concrete" idea of blue is just an abstract relation to two other abstract concepts. Not to mention that it's circular reasoning. But it works, I can "see" blue.

      For real reasoning exercises, look into set theory and the definition of numbers. Also fun topics are infinite sets of different sizes, and unbounded finite sets of different sizes. In general, set theory is awesome for these kinds of thought experiments and working with the abstract in a controlled manner.

      • As a minor issue, your "blue" argument is wrong. The typical human eye has red, green, and blue cones; "blue" is the sensation of a blue cone being stimulated. The sky can be shown to be bluish under normal cloudless sunny conditions by comparing a spectrogram of the sky against a spectrogram of the light bouncing off a white surface, and noting the relative magnitude of the blue portion of the spectrum in the sky spectrogram. Blue is not "the absence of green" combined with "the absence of red", because th

        • by Bengie ( 1121981 )

          The typical human eye has red, green, and blue cones, "blue" is the sensation of a blue cone being stimulated

          The eye does not see anything; it is a raw input. The concept of "sight" is purely abstract, and what I described as "blue" is how the brain reasons about colors.

          Axioms should not contradict each other

          You can't define an axiom without another axiom. It's turtles all the way down. How can I prove an axiom does or does not contradict another axiom? Even something as "simple" as proving "1 equals 2 is false" cannot be "proven" except with the axioms of its domain, and even then requires the additional implicit philosophical assumption that our reasoning

      • "How do you know the sky is blue?" It isn't, I just looked outside and it's black, apart from a few tiny spots of more or less white.

  • by SJ2000 ( 1128057 ) on Thursday September 26, 2019 @10:56PM (#59241552) Homepage
    The seL4 guys have been creating tools and techniques [csiro.au] to do this, and they're utilised in the formally verified seL4 microkernel. Proof Engineering Considered Essential [nicta.com]
  • Largest integer (Score:5, Interesting)

    by Okian Warrior ( 537106 ) on Thursday September 26, 2019 @10:59PM (#59241556) Homepage Journal

    The set of integers has no largest element - if you sort by a binary operator that compares the numerical values.

    You can list/sort the integers in other ways, in which case it's still the same set and there can be smallest and largest elements.

    For example, sort the integers alphabetically by their spellings. In that case the first element is "eight", and the final element is "zero". All integers are accounted for, and can be found in-between the first and last element. Any two numbers can be individually compared to determine which goes "to the left" of the other.

    You get lots of weird quirks like this when dealing with infinity - a lot of misunderstanding comes from assuming that infinity follows the rules expected of finite numbers.

    (Start with "eight" followed by "eighteen" and "eighty" and "eight billion" and an infinite set of numbers beginning with the text "eight" which collectively has the same number of elements as all the integers... before you start with the next infinite set of numbers starting with eleven. Weird.)
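    That ordering is easy to play with in a few lines of Python; a sketch assuming the third-party num2words package, which spells out integers in English:

        from num2words import num2words  # pip install num2words

        nums = list(range(0, 21))
        # Sort a (finite) sample of integers by their English spellings.
        by_spelling = sorted(nums, key=num2words)
        print([(n, num2words(n)) for n in by_spelling])
        # "eight" sorts first and "zero" last, as described above.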

    • The "size" of infinity is misunderstood, but it's really simple. Take the set of integers - it's infinitely large. But the real numbers are a larger infinity, because you basically take each integer and map the entire infinite set of integers to each member, separated by a "period", so 1 the integer becomes 1.15276400... and now the set of members between any two adjacent whole real numbers is the infinite size of all integers. One infinity is the same size as another if you can do a 1-1 mapping of every eleme
    • That's silly. You're hashing members of an infinite set. You can do that to anything. It's like saying that the population of the Earth is 1 because we're all people.
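    For reference, the standard form of the argument this subthread is circling is Cantor's diagonal argument; in LaTeX (a textbook sketch, not either poster's exact construction):

        % Suppose the reals in (0,1) could be listed x_1, x_2, x_3, ...
        % with decimal expansions x_n = 0.d_{n1} d_{n2} d_{n3} ...
        \[
          y = 0.e_1 e_2 e_3 \ldots, \qquad
          e_n = \begin{cases} 5 & \text{if } d_{nn} \neq 5 \\ 4 & \text{if } d_{nn} = 5 \end{cases}
        \]
        % Then y differs from every x_n in the n-th digit, so y is not on the
        % list, and no enumeration by the integers can cover the reals.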

  • A little overhyped (Score:5, Insightful)

    by Coryoth ( 254751 ) on Thursday September 26, 2019 @11:09PM (#59241570) Homepage Journal

    While it is nice to see an article discussing real issues in mathematics, it is a little sensationalist in places. While formalized proofs amenable to computer proof checkers are definitely a good thing, it isn't really the case that math is going to fall apart or turn out to be wrong any time soon. For starters, a surprisingly large amount of mathematics has actually been fully formalized to that level: check out metamath [metamath.org] for example. Second, while complex theorems at the edge of current research may potentially have an issue (it was issues with some of his proofs in very abstract homotopy theory and motives that made Voevodsky a convert to computer formalizable proofs), the vast majority of math is used routinely enough by enough mathematicians (and verified by students as exercises) that errors are at this point vanishingly unlikely. Finally, if by some bizarre circumstance something did go wrong, the result would be a patching of axioms and assumptions to make most of the mathematics we have true again -- at the end of the day math has to be useful; as Russell said in Introduction to Mathematical Philosophy:

    We want our numbers not merely to verify mathematical formulæ, but to apply in the right way to common objects. We want to have ten fingers and two eyes and one nose. A system in which "1" meant 100, and "2" meant 101, and so on, might be all right for pure mathematics, but would not suit daily life. We want "0" and "number" and "successor" to have meanings which will give us the right allowance of fingers and eyes and noses. We have already some knowledge (though not sufficiently articulate or analytic) of what we mean by "1" and "2" and so on, and our use of numbers in arithmetic must conform to this knowledge.

    The practical applications of mathematics to science and engineering have been so rich and so effective that any new "corrected" theory of mathematics that didn't recapitulate the same results would simply be ignored or rejected. We would instead modify axioms, or logic itself until we could regenerate much of the mathematics we have.

    • My understanding from reading the article is that when he said, "Published math" he was referring to "math that is presently being published," not referring to all math that was ever published.

      He specifically said it's unlikely that all of math, or even a large important part of it ("great castles"), is wrong.
    • by pjt33 ( 739471 ) on Friday September 27, 2019 @03:06AM (#59241938)

      More than a little overhyped, but if the headline were "Keynote speaker at Interactive Theorem Proving conference thinks that Interactive Theorem Proving is worth doing" then people would see it for the non-story it is.

    • by GuB-42 ( 2483988 )

      Also worth noting that the idea of mathematical perfection took a serious hit with Gödel's incompleteness theorems.
      So yeah, patching things to make math more useful is not really an issue.

  • The statement, "I'm suddenly concerned that all of published math is wrong" is ill-defined.

    Until "all of published math" is defined properly his argument is flawed and meaningless. This possibly undermines all of his published arguments.

    • Except that was the editor; and the author said, "some", not, "all".

      Pretty big difference in meaning, in general, let alone a story about math. Have I mentioned how much I dislike people who speak primarily in hyperbolic terms?

  • for the crypto efforts by the NSA and GCHQ?
  • by Rick Schumann ( 4662797 ) on Thursday September 26, 2019 @11:36PM (#59241604) Journal
    If what this guy is saying is true and our highest levels of math have become so complex that we mere humans can't be counted on to accurately verify mathematical proofs, then how can we be trusted to write code that does the verification of proofs and expect it to be any more accurate? Or are we going to continue to fall into the trap of 'the computer says it's right therefore it must be right'? Bad code gets written all the time and in some cases goes undetected as bad for years, and even more scary is bad code that never gets detected as bad.
    • Read Isaac Asimov's story "A Feeling of Power".

      • I read that story many years ago as a teenager and it really left a mark. I never could remember the title, however. Thanks so much for posting!

    • If what this guy is saying is true and our highest levels of math have become so complex that we mere humans can't be counted on to accurately verify mathematical proofs, then how can we be trusted to write code that does the verification of proofs and expect it to be any more accurate?

      All you need is a small program that implements the rules of math and verifies that the steps in the proofs follow from those rules, one step at a time. The program would be the same regardless of the math complexity.

      Fo
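      To make that concrete, here is a hypothetical minimal checker in Python: formulas are atoms or implications, and every proof line must be a premise or follow from two earlier lines by modus ponens. A toy sketch, not any real verifier:

          # A formula is an atom like "P" or an implication ("->", A, B).
          def check_proof(premises, lines):
              # Accept a proof iff each line is a premise or follows by
              # modus ponens: from A and ("->", A, B), conclude B.
              proved = []
              for formula in lines:
                  ok = formula in premises or any(
                      imp == ("->", ante, formula)
                      for ante in proved for imp in proved)
                  if not ok:
                      return False
                  proved.append(formula)
              return True

          premises = ["P", ("->", "P", "Q")]
          print(check_proof(premises, ["P", ("->", "P", "Q"), "Q"]))  # True
          print(check_proof(premises, ["Q"]))                         # False

      Note the checker itself never grows with the difficulty of the theorem; only the proof it consumes does.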

      • You're still open to errors caused by the runtime environment generating an invalid proof while executing a correct program.

        Memory error correction may help, but only to some extent.

        • You're still open to errors caused by the runtime environment generating an invalid proof while executing a correct program.

          Then run it through the dozen independently-written verifiers I mentioned (in different computer languages) on many OSs and different physical computers. Insert artificial errors in the proofs and make sure they are detected. You have the algorithm and the source code; walk it through a few proofs by hand until you're convinced it does the right thing.

          Seriously, at what point wou

          • walk it through a few proofs by hand until you're convinced it does the right thing.

            Isn't that what the article was arguing for? The point is that, to trust automated provers to be doing the right thing, you need to understand how they work.

            If the total computer proof is so large, complex and low-level that you don't understand the large structural concepts of what the proof is trying to achieve, you'll never be sure that the consequence is derived from the premises, even if each of the low-level steps

    • Suppose I ask you to manually verify the result of:

      1 + 3 + 5 + 7 + 9 + 11 + 13 + 15 ... + 2087 + 2089 + 2091 =

      It's very likely you'll make an error while doing those thousand additions.

      However, it is very easy to write computer code that adds correctly - a thousand times or a million times. It's fairly easy to prove that it adds correctly.

      The computer need only perform the individual operations correctly. Once you know that it handles each symbol correctly, you can string together however many symbols you
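      As a sketch of exactly that check, using nothing beyond the standard library:

          # Brute-force sum of the odd numbers 1, 3, ..., 2091.
          total = sum(range(1, 2092, 2))
          # Cross-check against the closed form: the sum of the first n odd
          # numbers is n^2, and 2091 = 2*1046 - 1, so n = 1046.
          assert total == 1046 ** 2
          print(total)  # 1094116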

      • by mark-t ( 151149 )

        The sum of any finite arithmetic series can be trivially calculated by hand without having to do any of the individual additions, by replacing all of the additions with a single multiplication, which can easily be done by hand.

        Without even taking out a piece of paper to calculate it exactly, I can visually recognize that the sum of that particular arithmetic series is the square of 1046. If I then calculated that particular result by hand, how would that n

        • It's not manual verification, because you are relying on a theory you haven't yet proven to be true.
          Yeah, you could prove it, but you didn't. The entire point is that people are relying on similar things (obviously more complicated) that have 'been proven' before, but maybe there is some doubt about whether they have actually 'been proven' correctly.
          • by mark-t ( 151149 )

            I didn't prove it because I didn't feel like wasting my time proving something that has already been proved and shown to be part of not just mathematics, but even simple arithmetic. Honestly, if I had such an inclination at the time, I would have also done the multiplication... which I didn't do because I didn't think it was necessary.

            However, if you insist upon a proof that the sum of a finite arithmetic series can be replaced by a multiplication then consider the following (I shall use the * to symbol
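            For the record, the identity being invoked, in LaTeX (the standard Gauss pairing argument):

                \[
                  S = \sum_{k=1}^{n} a_k = \frac{n\,(a_1 + a_n)}{2}
                \]
                % Pairing first with last, second with second-to-last, and so on
                % gives n/2 pairs, each summing to a_1 + a_n. For 1 + 3 + ... + 2091:
                % n = 1046, so S = 1046 * (1 + 2091) / 2 = 1046^2 = 1094116.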

            • Not trying to be snarky or anything, and I appreciate your effort. But I think you missed the point. I wasn't asking for the proof, and obviously knew it existed. I was just trying to show that people assume all the proofs are verified actual proofs without checking. The guy from the article has some doubts.

              Great, you've done the first one for them. Now do the rest. Let me know when you get to here.

              The greatest proofs have become so complex that practically no human on earth can understand all of their details, let alone verify them.

              At some point the proofs get too complicated for most people to understand. A computer program designed to verify thos

              • by mark-t ( 151149 )

                The fact that I didn't initially present the proof did not alter the veracity of the original statement... the original comment suggested that the actual sum of a particular series of integers could be impractical to manually verify because of the sheer number of them.

                But that particular series possessed a provable quality, that of being an arithmetic series, that made simplification of the sum to multiplication possible. I knew this principle, and exploited it. It was no less true before I offered the

      • Re: (Score:3, Informative)

        by bramez ( 190835 )

        FYI Gauss found out when he was 10 that it is easy to create a formula for this type of question

        see e.g. http://mathcentral.uregina.ca/... [uregina.ca]

        The real challenge is how to communicate the mathematical reasoning behind this formula so that it is "believed to be correct beyond doubt".

        What does it mean that a mathematical reasoning is "correct" or "true" ? Then you enter the field of philosophy of mathematics, formal logic and formal theorem proving.

      • by lurcher ( 88082 )

        IMHO, any statement that includes "it is very easy to write computer code that" is generally wrong. I am old enough that when I was studying computer science, it included operational research. That included investigating code that performed matrix operations like inversion. It is very, very hard to produce an algorithm that gives the correct answer for such a seemingly simple problem for all input numbers with consistent accuracy.

        • > IMHO, any statement that includes "it is very easy to write computer code that" is generally wrong.

          I suppose anything is hard if you don't know how, and many things are easy if you do know how. My daughter informs me that it is hard to consistently write numerals facing the right way - to avoid writing E when she means 3. That's hard for her right now, harder than reading Hop on Pop.

          For those skilled in the art, for the people who *would* write provable code, writing provable code for "plus" is easy.

  • Math is a Game (Score:5, Insightful)

    by kenwd0elq ( 985465 ) <kenwd0elq@engineer.com> on Thursday September 26, 2019 @11:43PM (#59241616)

    ALL mathematics is a game; make up some rules, and see where that leads you.

    SOME mathematics turns out to be useful in describing the real world. These are the numbers that engineers use.

    Some mathematics does NOT describe anything useful in the real world. And some mathematics that USED to be "not useful" suddenly BECOMES useful when our understanding of things evolves. For example, "imaginary" numbers were a mathematical trick, just a game you could play with numbers - until Maxwell discovered that you could describe electromagnetic radiation using "imaginary" numbers.

    Some branches of math are still just a playground - and will remain so until we get off the Earth and out of the Sun's gravity well.

    • by mark-t ( 151149 )

      For example, "imaginary" numbers were a mathematical trick, just a game you could play with numbers - until Maxwell discovered that you could describe electromagnetic radiation using "imaginary" numbers.

      Or if you are finding all of the real roots of an arbitrary cubic polynomial that happens to have three distinct roots

    • Re:Math is a Game (Score:4, Informative)

      by JasterBobaMereel ( 1102861 ) on Friday September 27, 2019 @05:15AM (#59242056)

      Quantum theory needed a way of sorting out the mess of particles they had discovered ... the mathematicians replied - Group Theory - here is the definitive work on it ... it was perfect ...

      It had found no use outside mathematics since 1832 ... but they had worked on it anyway, so when it was found useful it was a fully formed theory

    • Euler's demonstration that e^it = cos(t) + i*sin(t) preceded Maxwell by more than a century.
  • by hcs_$reboot ( 1536101 ) on Friday September 27, 2019 @12:32AM (#59241696)
    Almost nobody can prove them wrong.
  • This new story seems to be a bit of hyperbole, and sensationalist in "the sky is falling!" mode. But overlooked is the 1976 computer proof of the Four Color Theorem. And there are "castles on sand" in that Gödel's Incompleteness Theorem states some statements in a system of mathematics can be neither proved nor disproved. They are accepted as true. But if a computer creates a mathematical proof that is so convoluted and complex that a human being cannot understand it, then coming full circle to the original pro
  • Gödel's incompleteness theorem showed that.

    All mathematical "proofs" are based on basic axioms (like Zermelo-Fraenkel set theory), that themselves can not be "proven".
    Which means that unless they are backed up by actual (statistically reliable) observation in the real world, it's all just a flight of fancy with as much justification to call itself science as religious beliefs.

    Because in the real world, there is no such thing as "proof". Statistically reliable observation of theory predictions is the b

  • This was more Computer Science than Math, but I'm throwing it out there as it sounds similar ...

    I was an undergraduate research assistant from 1985-87 working for a professor at ODU, Dr. Chris Wild (now deceased), who was studying automated programming and proof techniques for abstract data types (e.g. stacks, queues, etc...) under a NASA grant (NAG 1-439). He was investigating methods for automatically generating code and proofs of correctness from formal specifications. I implemented his ADT idea

    • Goodness, that brought back memories...of programming computational linguistics stuff in LISP and Prolog in that same timeframe, on some of those same machines. I could probably find some of our TRs, although I doubt they're on the web.

  • As a computer scientist working on mechanized proofs, I can say that the clickbaity headline is technically not far from the truth. However, "Published Math Is Wrong" should be understood as "published proofs are not verifiable". It does not mean that the proven theorems are incorrect.

    Most often, proofs (that rely on case analysis) focus only on the important cases. The omitted cases are said to be obvious. Sometimes they are not.

    Another class of mistakes is your usual off-by-one errors that are common i
  • by bradley13 ( 1118935 ) on Friday September 27, 2019 @01:52AM (#59241820) Homepage

    This is not any sort of new idea. "Automated theorem proving" is a well established field. Just as an example, I worked briefly as a post-doc in Alan Bundy's group in Edinburgh in the early 1990s, creating systems that could automatically process proofs [springer.com]. The whole group was focused on automating the process of constructing and checking proofs, and they certainly weren't the only ones.

    TFA refers to a relatively new framework (LEAN [github.io]) that lets people enter proofs into a system. This system doesn't really bring anything new to the table, beyond perhaps a better UI. These systems work well enough for smaller proofs, but complex proofs are still constructed by fallible humans. Progress in this area is slow, because it's an incredibly hard problem.

  • ...or agda, or idris, or isabelle/hol.

    If they use a programming language or theorem prover, their proofs will be machine verifiable.

    We won't always understand why they are correct, especially as these systems incorporate better and better tactics, but we'll know they are. (This problem will also happen when we off-load proving stuff to AI. We'll know it's fine, just not why.)

    The problem of understanding code from that point on will become a problem of code readability, and we do know how to deal with that i

  • There's also a non-zero chance that one of those sacred GUIDs will collide with another in your precious ERP system, causing an obscure error you might never solve. There's also a non-zero chance Earth will be destroyed due to collision with a foreign body. Where is your math-God now, puny humans?
  • by tinkerton ( 199273 ) on Friday September 27, 2019 @02:52AM (#59241918)

    There are two types of scientists.
    The first type sees science as a walled area. Once an article passes peer review and is published it has proven its merit and they trust it.
    The second type sees publication as just one (big) step forward in credibility. These are the old style fundamentalist scientists.
    I like the second type much more but it also takes a lot more work. The first category can work much faster, and that is very nice if the material they rely on is good. It also has the schemers who try to interfere with publication of articles they don't like.

  • Oh, turns out that's not true. Only all Vice headlines are wrong. Someone is PAYING real-world money and employing the person who wrote this headline! Wonderful... The only thing I can think of that would be worse is if it was an intern who wrote the damn thing and an actual employee 'corrected' them on the choice of headline...
  • Fermat / Vice (Score:5, Insightful)

    by bill_mcgonigle ( 4333 ) * on Friday September 27, 2019 @05:51AM (#59242090) Homepage Journal

    Absolutely nobody thinks Andrew Wiles's proof is the one Fermat had in mind, yet people act like the Fermat work is done. Proofs are given more weight than understanding and that infects physics as well.

    Also, stop linking to the clickbait horseshit at Vice and use real sources.

  • All bread might contain baker's hair! Sky falling, NOW!!!
  • by Darren Hiebert ( 626456 ) on Friday September 27, 2019 @06:27AM (#59242172) Homepage
    Research shows that journalists use Slashdot as a testbed for misleading headlines.
  • ALL published math? (Score:5, Informative)

    by fazig ( 2909523 ) on Friday September 27, 2019 @07:24AM (#59242296)
    What a garbage headline!
    Clicking on the Vice article the subtitle reads:

    "I think there is a non-zero chance that some of our great castles are built on sand," he said, arguing that we must begin to rely on AI to verify proofs.

    And this has a completely different ring to it, and makes the statement a lot more plausible.
    Because could anyone really provide proof that ALL published math is wrong?

    Thinking back to the very first math lectures at university, where we started with set theory: we were taught to be very, very careful with absolute statements like "all" or "none", because all that is required to invalidate such a statement is a single example to the contrary.

  • Is all published math wrong?

    "I'm suddenly concerned that all of published math is wrong because mathematicians are not checking the details, and I've seen them wrong before,"

    or is it just a small subset?

    "I think there is a non-zero chance that some of our great castles are built on sand," Buzzard wrote in a slide presentation. "But I think it's small."

  • Comment removed based on user account deletion
  • Automath was initiated decades ago; right now there are quite a few Interactive Theorem Provers or Proof Assistants: Mizar, LEAN, Coq, Isabelle and many others, being developed by a number of groups and independent mathematicians. They differ as to methodology, axioms, philosophical approach, user interface etc; so, depending on intended target results/math areas, and personal proclivities of course, you might opt for one or another. BTW, indeed there are a few known cases where published and seemingly est
  • Theoretical scientists don't prove anything by definition. They are just creating fantasies, which might become true.
