Number Theorist Fears All Published Math Is Wrong (vice.com) 123
An anonymous reader quotes a report from Motherboard: Kevin Buzzard, a number theorist and professor of pure mathematics at Imperial College London, believes that it is time to create a new area of mathematics dedicated to the computerization of proofs. The greatest proofs have become so complex that practically no human on earth can understand all of their details, let alone verify them. He fears that many proofs widely considered to be true are wrong. Help is needed. What is a proof? A proof is a demonstration of the truth of a mathematical statement. By proving things and learning new techniques of proof, people gain an understanding of math, which then filters out into other fields.
To create a proof, begin with some definitions. For example, define a set of numbers such as the integers, all the whole numbers from minus infinity to positive infinity. Write this set as: ... , -2, -1, 0, 1, 2, ... Next, state a theorem, for example, that there is no largest integer. The proof then consists in the logical reasoning that shows the theorem to be true or false, in this case, true. The logical steps in the proof rely on other, prior truths, which have already been accepted and proven. For example, that the number 1 is less than 2. New proofs by professional mathematicians tend to rely on a whole host of prior results that have already been published and understood. But Buzzard says there are many cases where the prior proofs used to build new proofs are clearly not understood. For example, there are notable papers that openly cite unpublished work. This worries Buzzard. "I'm suddenly concerned that all of published math is wrong because mathematicians are not checking the details, and I've seen them wrong before," Buzzard told Motherboard while he was attending the 10th Interactive Theorem Proving conference in Portland, Oregon, where he gave the opening talk.
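The "no largest integer" example is small enough to state and machine-check in a proof assistant. A rough sketch in Lean 4 (the kind of system discussed at the ITP conference; this assumes a recent toolchain where the omega tactic for linear integer arithmetic is available):

    -- "There is no largest integer": for every integer n there is a larger one.
    -- The witness is simply n + 1; omega discharges the inequality n < n + 1.
    theorem no_largest_integer (n : Int) : ∃ m : Int, n < m :=
      ⟨n + 1, by omega⟩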
"I think there is a non-zero chance that some of our great castles are built on sand," Buzzard wrote in a slide presentation. "But I think it's small."
Slashdot Poster Fears All Slashdot Headlines Are G (Score:3)
Wonderful (Score:5, Funny)
Re:Wonderful (Score:5, Funny)
You're right, though. G is entirely based on a speculative unit of measurement called "time", or more specifically, "the observable interval between two things happening".
There's no one who dares to ask the question "are we basing all of our physics understanding on something we can only observe and measure using the observation we just made?" And the answer is yes, yes we are. Every standard unit is based on time, and time is based on "however long it takes a Cesium atom to do a thing". So if Cesium ever decides to act up, we're all fucked. The universe will literally stop behaving according to the laws of physics as we're used to them.
Re: (Score:2)
He is talking about Graham's Number [wikipedia.org]
Re: (Score:2)
Slashdot Poster Still Hasn't Learned to Check if Parent Was Replying to Self
Equally disturbing: (Score:2)
Comment removed (Score:5, Interesting)
Re: (Score:2)
" without ANY knowledge of the original pattern used to come up with what letter swaps with what."
Not quite: you must know the language of the original message; otherwise your frequency analysis and cribs are worthless.
Re: (Score:3)
" without ANY knowledge of the original pattern used to come up with what letter swaps with what."
Not quite: you must know the language of the original message; otherwise your frequency analysis and cribs are worthless.
Actually, no. Given a sufficiently-large corpus of enciphered material, nearly all wrong substitutions will produce an output that is approximately uniformly-distributed. A small subset of these, including the correct one, will generate non-uniform decipherments. It's then possible to examine the resulting candidate decipherments to identify the correct one even without a priori knowledge of the language, and without any cribs.
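In code, that uniformity test can be as simple as an index-of-coincidence score. A rough Python sketch (the helper names are made up, and it assumes, as the parent does, that wrong keys yield effectively random text): uniform text scores near 1/26 ≈ 0.038, while structured plaintext in most natural languages scores noticeably higher (English is around 0.067), no crib or language model needed.

    from collections import Counter

    def index_of_coincidence(text: str) -> float:
        """Probability that two letters drawn at random from `text` are equal."""
        letters = [c for c in text.upper() if c.isalpha()]
        n = len(letters)
        counts = Counter(letters)
        return sum(k * (k - 1) for k in counts.values()) / (n * (n - 1))

    def looks_structured(candidate: str, threshold: float = 0.055) -> bool:
        # Uniform/random text scores ~0.038; natural languages score higher.
        return index_of_coincidence(candidate) > threshold

This only flags the small set of non-uniform candidates; picking the correct decipherment out of that set is the separate, final step the parent mentions.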
Re:Can’t agree with that. (Score:4, Informative)
Probably a good bit of cryptography depends upon certain things being true, for example.
Not really. The asymmetric primitives (public/private key primitives) we currently use are based on number theoretic constructs, but they are based on very old unsolved problems, none of which are provably unsolvable, as yet -- though we have proved that some of them are equivalent to one another in the sense that if we can solve one we can solve them all. The hash functions and symmetric primitives we use have little or no theoretical basis, and keep in mind that these are the actual workhorses of cryptography, the ones we apply to actual data, unlike the asymmetric primitives which are generally applied to symmetric keys or hash outputs. We have some ideas about what seems to make them strong (mixing, diffusion, substitution) and we know various ways of breaking them (e.g. linear cryptanalysis, differential cryptanalysis) so we make sure that they aren't vulnerable to those techniques. We test their vulnerability mostly by simplifying the primitives (e.g. reducing the number of rounds of repetition) until our techniques do break them. Then we view the differences between the simplified versions of our primitives that are broken and the full versions as the "margin of security".
There are a lot of security proofs in modern cryptography, but they don't depend much on the sophisticated mathematics Buzzard is talking about. And, actually, they aren't really "proofs". Or, rather, they're proofs but they don't prove what we want them to. Cryptographic proofs always begin with information-theoretic assumptions that are analogous to "assume a spherical cow...", in which complex and poorly-understood components are assumed to have idealized properties. The most common example is the Random Oracle model, in which a hash function is assumed to be a truly random function. This is obviously not true -- and note that I'm not saying cryptographers believe it to be true, they understand perfectly well the limitations of their proofs. It's useful because it's the closest we can get to an actual proof, and it does provide some level of confidence that we wouldn't otherwise have. But at the end of the day they're proofs about idealized things which aren't the ones we actually use.
It's not impossible that there's something in all of the uncertain work Buzzard is pointing at that could affect the security of our cryptosystems, but I think it's very unlikely. I think it's somewhat more likely that they could affect the security of the quantum-resistant primitives that are now being created, but that may just be because I'm looking at two piles of things and assuming they may be related because they share the property that I don't understand either of them.
I think the real problem is... (Score:1, Insightful)
... much of how we approach math is at too high a level of abstraction. AKA all of the 'math' we are doing is literally cognitive devices to help with nature's book-keeping when trading and trying to measure things in the real world. Abstract "proofs" are great. But I think the whole idea of "proof" is a sham in a sense, in the sense that we aren't "proving" anything; we've invented mathematical worlds... the base units these worlds are composed of are 'embedded' in the universe.
Science has disco
Re: I think the real problem is... (Score:2)
Funny, I was going to say "on to something" (Score:5, Funny)
> You are on to something for sure.
Almost what I was going to say. I was thinking "you are on zomething for sure."
Just out of curiosity, are either or both of you stoned?
Re:I think the real problem is... (Score:5, Insightful)
Re:I think the real problem is... (Score:5, Insightful)
Re: (Score:1)
Re: (Score:2)
Re: (Score:2)
All those other sciences orient around partitioning things into categories: different types, whether or not an organism has a disease, etc. Can you get enough data points to statistically, usefully discern separation points through all dimensions of the many-variable hyperspace? Obviously.
Re: (Score:2)
I go with simulations as the best way to use math: simulating the real world under specific variables, rather than quick and dirty math formulas.
Thought as a simulation. With regard to the application of maths: treat every problem as a simulation rather than as an individual mathematical formula, i.e. a series of mathematical formulae working in conjunction to accurately represent that problem as continuous calculation, with the variables of those calculations altered to produce results
Re:I think the real problem is... (Score:5, Insightful)
What the hell has some wingnut conservative video of a random guy complaining about "reason" got to do with science?
Science has made no such claim that 'we are already at too high a level of abstraction in our "normal" thought'. That's absurdist nonsense that somehow manages to take an is-ought fallacy and mangle it into a tautology. It's the sort of talk educationally delayed antivax moms make, not scientists.
Nobody working with math thinks that maths is reality. That idea died with Pythagoras. But abstraction is both a valid and historically useful tool for getting shit done.
And anyway this has nothing to do with the actual issue raised, which is that proofs have gotten too numerous and complicated. And that's not "abstraction", it's just plain old math.
Re: (Score:2)
The demarcation between the conscious and unconscious is extremely fuzzy, and it doesn't matter anyway: if your brain comes up with an idea or thought, it's still YOU that came up with it, whether consciously or not, because you are your brain.
Re: (Score:2)
The demarcation between the conscious and unconscious is extremely fuzzy, and it doesn't matter anyway: if your brain comes up with an idea or thought, it's still YOU that came up with it, whether consciously or not, because you are your brain.
Except the whole point is that you don't understand your own thought processes; that's the whole point of most of your thought being unconscious. You're not literally interpreting the world "as it is". You're literally not getting the same interpretation of a text as someone else. You think that just because you've read the words, you understand what is happening in your brain; the fact is that you don't. That's what the science has shown, it's also why people can't understand one another be
Re: (Score:2)
Re: (Score:2, Funny)
That guy's an ass.
Whoa! That rudeness came from my subconscious. Don't blame me!
Re: (Score:2)
Re: (Score:2, Insightful)
medical data
Psychology is not medicine. It's pseudoscience at best. You can call it "psychological data" but do me a favor and don't call it "medical data".
Re:I think the real problem is... (Score:5, Informative)
AKA all of the 'math' we are doing is literally cognitive devices to help with nature's book-keeping when trading and trying to measure things in the real world.
It's more like the other way around, math is purely abstract models that may or may not be applicable in reality. Like 2 apples + 2 apples = 4 apples, but when people have sex they often find that 1+1=3. The flaw is that the physical act of reproduction doesn't map to the mathematical operation of addition. We can model having four apples as +4 and owing four apples as -4 and I can put my 4 apples in a 2*2 square so the square root of four is two but what's the square root of apples owed? Not meaningful, because I don't actually have any apples so I can't put them in a square. But if I wrote four notes saying "IOU: one apple" I could. Mathematicians simply define models; it's everyone else trying to map reality to a model.
We know there are multiple models that depend on the ground assumptions you make, for example in geometry you have hyperbolic, Euclidean and elliptic [wikimedia.org] geometry depending on whether two lines connected at a right angle converge, diverge or stay parallel. All three are mathematically valid, consistent but mutually exclusive models. But in mathematics that doesn't mean one model is good and the other two are junk. It means that one model of geometry is appropriate if you have a flat piece of paper and a different model is appropriate if you approximate the surface of the earth as a 2D skin on a 3D orb. It's a toolbox, you still have to pick the right tool for the job.
Re: (Score:1)
When we need to sit down and try to observe correctly what our brain is actually doing first.
First, it's inverting the image...
Re: (Score:2)
Re: (Score:2)
I couldn't resist a chance for a Dr. Who meme. You may need to take your nerd test again :)
(And yes I know the image projected onto the retina is "upside down", that's high school physics/biology)
Re: (Score:2)
So you know how the brain has to manipulate the input data to conform to its own rules.
Yep. And did you know that if you blindfold a newborn baby and keep it blindfolded for the first few years of its life, when you remove the blindfold the child will be completely blind despite having eyes that work perfectly well? Image flipping is trivial for our brains but it has to be "programmed" right at the beginning.
Re:I think the real problem is... (Score:4, Interesting)
"proofs" aren't really "proofs" but our misunderstanding of the truth
"Truth" doesn't actually exist. It is purely a philosophical debate. On this note, even facts are context sensitive and open to interpretation, meaning facts are subjective. In short, there is no true perfect notion of "correct".
You're even falling into your own trap. The word "proof" is an abstract concept in your mind with a seemingly perfect, absolute definition of "this is true". Here's some food for thought: if the concept represented by the word "proof" requires absolute perfection, as you're implying, and from practice and experience we know beyond a shadow of a doubt that humans are anything but perfect, then you might say that a "proof" is impossible. But wait... Did I just "prove" that a "proof" is impossible? I hope you will realize at some point down this philosophical rabbit hole that the notion of an "axiom" is a requirement for any form of reasoning. At some point, you just need to blindly accept something as "true" and build everything else on top of that.
In the end, you are limited by your senses. How do you know the world around you is even real? It really is just your brain trying to make sense of inputs. Even our senses are technically not "real". Color, sound, touch are all abstract perceptions of raw information that we don't even have access to. I decided for myself that I will trust my perception and reasoning because it's the best that I have access to. In the end, the proof is in the pudding. Using math as an example, look at Calculus: essentially fancy algebra with a notion of a "limit", an abstract concept of approaching a number by getting arbitrarily close to it but never quite reaching it, because the act of actually reaching it would defeat the purpose. Calculus is undeniably a powerful tool, yet definitely falls into "purely abstract math".
Here's a fun thought experiment. How do you know the sky is blue? Prove it. You might want to start with the definition of blue. Turns out the way the brain interprets "blue" is by defining it as what it is not. Most humans have 3 primary colors that they can sense: RGB. B = !R&!G; G = !R&!B... So even the "concrete" idea of blue is just an abstract relation to two other abstract concepts. Not to mention that it's circular reasoning. But it works, I can "see" blue.
For real reasoning exercising, look into set theory and the definition of numbers. Also fun topics are infinite sets of different sizes, and unbounded finite sets of different sizes. In general, set theory is awesome for these kinds of thought experiments and working with the abstract in a controlled manner.
Re: (Score:2)
As a minor issue, your "blue" argument is wrong. The typical human eye has red, green, and blue cones, "blue" is the sensation of a blue cone being stimulated. The sky can be shown to be bluish under normal cloudless sunny conditions by comparing a spectrogram of the sky against a spectrogram of the light bouncing off a white surface, and noting the relative magnitude of the blue portion of the spectrum in the sky spectrogram. Blue is not "the absence of green" combined with "the absence of red", because th
Re: (Score:2)
The typical human eye has red, green, and blue cones, "blue" is the sensation of a blue cone being stimulated
The eye does not see anything, it is a raw input. The concept of "sight" is purely abstract and what I described as "blue" is how the brain reasons about colors.
Axioms should not contradict each other
You can't define an axiom without another axiom. It's turtles all the way down. How can I prove an axiom does or does not contradict another axiom? Even something as "simple" as proving "1 equals 2 is false" cannot be "proven" except with the axioms of its domain, and even then it requires an additional implicit philosophical assumption that our reasoning
Re: (Score:2)
"How do you know the sky is blue?" It isn't, I just looked outside and it's black, apart from a few tiny spots of more or less white.
Proof Engineering (Score:3)
Re: (Score:3)
Largest integer (Score:5, Interesting)
The set of integers has no largest element - if you sort by a binary operator that compares the numerical values.
You can list/sort the integers in other ways, in which case it's still the same set and there can be smallest and largest elements.
For example, sort the integers alphabetically by their spellings. In that case the first element is "eight", and the final element is "zero". All integers are accounted for, and can be found in-between the first and last element. Any two numbers can be individually compared to determine which goes "to the left" of the other.
You get lots of weird quirks like this when dealing with infinity - a lot of misunderstanding comes from assuming that infinity follows the rules expected of finite numbers.
(Start with "eight" followed by "eighteen" and "eighty" and "eight billion" and an infinite set of numbers beginning with the text "eight" which collectively has the same number of elements as all the integers... before you start with the next infinite set of numbers starting with eleven. Weird.)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
That's silly. You're hashing members of an infinite set. You can do that to anything. It's like saying that the population of the Earth is 1 because we're all people.
Re: (Score:2)
not all integers have spellable names that could be usefully sorted alphabetically.
Sure there are, if you allow powers of 10 ("...times ten raised to the power of n") as part of the official name (a.k.a. scientific notation).
And yes, that would be recursive.
Re: (Score:2)
You do know that scientific notation is never intentionally used with integers. If you can write 123456789 it's always longer to say 1.23456789 times 10 to the power of 8.
Re: (Score:2)
Re: (Score:2)
Yes, I agree, scientific notation is used to represent floating point numbers, and often in situations where the required precision is limited. I am fully aware of that thanks. However if we are (as we are here) dealing in integers, then 120000000 != 123456789 so representing it as 1.2 x 10^8 is not helpful.
For example 6.626 × 10^-34 is a good and useful approximation to Planck's constant for many purposes, however 1.3 × 10^8 is never a useful representation of the 100,008th prime number.
Re: (Score:2)
Re: (Score:2)
Okay, by that reasoning, what number comes immediately before "one"?
You shall quickly discover that this is an unanswerable question unless you limit the range of integers to be chosen.
As another commenter has pointed out, technically the set of things organized in this way is not even the set of integers any more... it just happens to be countably infinite in size as the set of integers is.
Re: (Score:2)
That brings up the question of whether sorting requires that an immediately preceding number be provided for every given number, or whether it suffices that for every pair of numbers there is a unique ordering.
Alphabetical ordering of numbers does present problems. It's language dependent. Specific naming rules must be used: choose either "thousand thousand" or "million"; the sort order differs depending on which you choose.
Re: (Score:2)
"Useful" is a concept foreign to pure mathematics.If something mathematical is real-world useful, it's applied mathematics.
Re: (Score:2)
I meant "useful" in the sense that there are no unique names for many integers, and the only way to identify them would be by their individual digits, which isn't particularly useful because it would result in many distinct elements after the first having no uniquely defined predecessor.
A little overhyped (Score:5, Insightful)
While it is nice to see an article discussing real issues in mathematics, it is a little sensationalist in places. While formalized proofs amenable to computer proof checkers are definitely a good thing, it isn't really the case that math is going to fall apart or turn out to be wrong any time soon. For starters, a surprisingly large amount of mathematics has actually been fully formalized to that level: check out metamath [metamath.org] for example. Second, while complex theorems at the edge of current research may potentially have an issue (it was issues with some of his proofs in very abstract homotopy theory and motives that made Voevodsky a convert to computer-formalizable proofs), the vast majority of math is used routinely enough by enough mathematicians (and verified by students as exercises) that errors are at this point vanishingly unlikely. Finally, if by some bizarre circumstance something did go wrong, the result would be a patching of axiomatic assumptions to make most of the mathematics we have true again -- at the end of the day math has to be useful; as Russell said in Introduction to Mathematical Philosophy:
The practical applications of mathematics to science and engineering have been so rich and so effective that any new "corrected" theory of mathematics that didn't recapitulate the same results would simply be ignored or rejected. We would instead modify axioms, or logic itself until we could regenerate much of the mathematics we have.
Re: (Score:3)
He specifically said it's unlikely that all of math, or even a large important part of it ("great castles"), is wrong.
Re:A little overhyped (Score:4, Insightful)
More than a little overhyped, but if the headline were "Keynote speaker at Interactive Theorem Proving conference thinks that Interactive Theorem Proving is worth doing" then people would see it for the non-story it is.
Re: (Score:2)
Also worth noting that the idea of mathematical perfection took a serious hit with Gödel's incompleteness theorems.
So yeah, patching things to make math more useful is not really an issue.
He needs to define his terms accurately (Score:1)
The statement, "I'm suddenly concerned that all of published math is wrong" is ill-defined.
Until "all of published math" is defined properly his argument is flawed and meaningless. This possibly undermines all of his published arguments.
Re: (Score:3)
Except that was the editor, and the author said "some", not "all".
Pretty big difference in meaning, in general, let alone a story about math. Have I mentioned how much I dislike people who speak primarily in hyperbolic terms?
Good news or bad news (Score:1)
How can you write code if you're not sure? (Score:3)
Re: (Score:3)
Read Isaac Asimov's story "A Feeling of Power".
Re: (Score:2)
I read that story many years ago as a teenager and it really left a mark. I never could remember the title, however. Thanks so much for posting!
Re: (Score:2)
All you need is a small program that implements the rules of math and verifies that the steps in the proofs follow from those rules, one step at a time. The program would be the same regardless of the math complexity.
Fo
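In that spirit, a toy checker really is small. A rough Python sketch for a propositional system where every line must be a premise or follow from two earlier lines by modus ponens (formulas are plain values; an implication is the tuple ("->", p, q)):

    def check(proof, premises):
        """Accept `proof` iff each step is a premise or follows by modus ponens."""
        known = []
        for step in proof:
            ok = step in premises or any(
                a == ("->", b, step) for a in known for b in known
            )
            if not ok:
                return False
            known.append(step)
        return True

    premises = {"p", ("->", "p", "q"), ("->", "q", "r")}
    proof = ["p", ("->", "p", "q"), "q", ("->", "q", "r"), "r"]
    print(check(proof, premises))  # True; delete the "q" step and it fails

The checker doesn't get any more complicated as the proofs get longer, which is the point.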
Re: (Score:2)
You're still open to errors caused by the runtime environment generating an invalid proof while executing a correct program.
Memory error correction may help, but only to some extent.
Re: (Score:2)
Then run it through the dozen independently-written verifiers I mentioned (in different computer languages) on many OSs and different physical computers. Insert artificial errors in the proofs and make sure they are detected. You have the algorithm and the source code; walk it through a few proofs by hand until you're convinced it does the right thing.
Seriously, at what point wou
Re: (Score:2)
walk it through a few proofs by hand until you're convinced it does the right thing.
Isn't that what the article was arguing for? The point is that, to trust automated provers to be doing the right thing, you need to understand how they work.
If the total computer proof is so large, complex and low-level that you don't understand the large structural concepts of what the proof is trying to achieve, you'll never be sure that the consequence is derived from the premises, even if each of the low-level steps
A small number of operators, used many times (Score:2)
Suppose I ask you to manually verify the result of:
1 + 3 + 5 + 7 + 9 + 11 + 13 + 15 ... + 2087 + 2089 + 2091 =
It's very likely you'll make an error while doing those thousand additions.
However, it is very easy to write computer code that adds correctly - a thousand times or a million times. It's fairly easy to prove that it adds correctly.
The computer need only perform the individual operations correctly. Once you know that it handles each symbol correctly, you can string together however many symbols you
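A minimal sketch of the point, in Python (whose integers are arbitrary precision, so no overflow concerns):

    # Add the odd numbers 1, 3, 5, ..., 2091 one at a time, as a person would.
    total = 0
    for k in range(1, 2092, 2):
        total += k
    print(total)  # 1094116

Each individual addition is trivially correct, and the loop just strings 1046 of them together.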
Re: (Score:2)
The sum of any finite arithmetic series can be trivially calculated by hand without doing any of the individual additions: just replace all of the additions with a single multiplication, which can also be done by hand, and therefore manually.
Without even taking out a piece of paper to calculate it exactly, I can visually recognize that the sum of that particular arithmetic series is the square of 1046. If I then calculated that particular result by hand, how would that n
Re: (Score:1)
Yea you could prove it, but you didn't. The entire point is people are relying on similar things (obviously more complicated) that have 'been proven' before, but maybe there is some doubt if they have actually 'been proven' correctly.
Re: (Score:2)
I didn't prove it because I didn't feel like wasting my time proving something that has already been proved and shown to be part of not just mathematics, but even simple arithmetic. Honestly, if I had such an inclination at the time, I would have also done the multiplication... which I didn't do because I didn't think it was necessary.
However, if you insist upon a proof that the sum of a finite arithmetic series can be replaced by a multiplication then consider the following (I shall use the * to symbol
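For readers following along, the standard pairing argument (not necessarily the cut-off continuation above) runs, in LaTeX:

    \begin{align*}
    S  &= 1 + 3 + 5 + \cdots + 2089 + 2091 \\
    S  &= 2091 + 2089 + 2087 + \cdots + 3 + 1 \\
    2S &= (1+2091) + (3+2089) + \cdots + (2091+1) = 1046 \cdot 2092 \\
    S  &= \tfrac{1046 \cdot 2092}{2} = 1046^2 = 1094116
    \end{align*}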
Re: (Score:2)
Not trying to be snarky or anything and appreciate your effort. But I think you missed the point. I wasn't asking for the proof. And obviously knew it existed. Just trying to show that people assume all the proofs are verified actual proofs without checking. The guy from the article has some doubts.
Great, you've done the first one for them. Now do the rest. Let me know when you get to here.
The greatest proofs have become so complex that practically no human on earth can understand all of their details, let alone verify them.
At some point the proofs get too complicated for most people to understand. A computer program designed to verify thos
Re: (Score:2)
The fact that I didn't initially present the proof did not alter the veracity of the original statement... the original comment suggested that the actual sum of a particular series of integers could be impractical to manually verify because of the sheer number of them.
But that particular series possessed a provable quality, that of being an arithmetic series, that made simplification of the sum to multiplication possible. I knew this principle, and exploited it. It was no less true before I offered the
Re: (Score:2)
You mean in the same way you can trust a theorem which enables you to simplify repeated addition to multiplication?
I'd argue that computers are far more of a black box than the theorem I happened to exploit. Wanna try detailing the entire logic, down to the last signal line, for the ALU in your computer? Because if you don't, how is that any better than using a theorem or law that you are simply accepti
Re: (Score:3, Informative)
FYI Gauss found out when he was 10 that it is easy to create a formula for this type of question
see e.g. http://mathcentral.uregina.ca/... [uregina.ca]
The real challenge is how to communicate the mathematical reasoning behind this formula so that it is "believed to be correct beyond doubt".
What does it mean that a mathematical reasoning is "correct" or "true" ? Then you enter the field of philosophy of mathematics, formal logic and formal theorem proving.
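Concretely, the formula in question and the inductive step that a formal checker would verify mechanically:

    \[
    \sum_{k=1}^{n} k = \frac{n(n+1)}{2},
    \qquad\text{inductive step:}\qquad
    \frac{n(n+1)}{2} + (n+1) = \frac{(n+1)(n+2)}{2}.
    \]

A machine can verify that algebra line by line; whether checking it line by line amounts to "believing" it is exactly the philosophy-of-mathematics question above.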
Re: (Score:2)
IMHO, any statement that includes "it is very easy to write computer code that" is generally wrong. I am old enough that when I was studying computer science, it included operational research. That included investigating code that performed matrix operations like inversion. It is very, very hard to produce an algorithm that gives the correct answer for such a seemingly simple problem for all input numbers with consistent accuracy.
I suppose if you don't know how. Daughter says (Score:2)
> IMHO, any statement that includes "it is very easy to write computer code that" is generally wrong.
I suppose anything is hard if you don't know how, and many things are easy if you do know how. My daughter informs me that it is hard to consistently write numerals facing the right way - to avoid writing E when she means 3. That's hard for her right now, harder than reading Hop on Pop.
For those skilled in the art, for the people who *would* write provable code, writing provable code for "plus" is easy.
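As a flavor of what provable "plus" can look like, a minimal Lean 4 sketch (illustrative names; Peano-style addition with one machine-checked property):

    -- Addition on the naturals, defined by recursion on the second argument.
    def add : Nat → Nat → Nat
      | n, 0     => n
      | n, m + 1 => (add n m) + 1

    -- A small checked fact: 0 is a left identity for add, by induction on n.
    theorem zero_add' (n : Nat) : add 0 n = n := by
      induction n with
      | zero => rfl
      | succ k ih => simp [add, ih]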
Math is a Game (Score:5, Insightful)
ALL mathematics is a game; make up some rules, and see where that leads you.
SOME mathematics turns out to be useful in describing the real world. These are the numbers that engineers use.
Some mathematics do NOT describe anything useful in the real world. And some mathematics that USED to be "not useful" suddenly BECOMES useful when our understanding of things evolves. For example, "imaginary" numbers were a mathematical trick, just a game you could play with numbers - until Maxwell discovered that you could describe electromagnetic radiation using "imaginary" numbers.
Some branches of math are still just a playground - and will remain so until we get off the Earth and out of the Sun's gravity well.
Re: (Score:2)
Or if you are finding all of the real roots of an arbitrary cubic polynomial that happens to have three distinct roots
Re:Math is a Game (Score:4, Informative)
Quantum theorists needed a way of sorting out the mess of particles they had discovered ... the mathematicians replied - Group Theory - here is the definitive work on it ... it was perfect ...
It had found no use outside mathematics since 1832 ... but mathematicians had worked on it anyway, so when it was found useful it was a fully formed theory
Re: (Score:2)
The good news is (Score:3)
The Sky is Falling...again (Score:1)
They're neither! (Score:1)
Gödel's incompleteness theorem showed that.
All mathematical "proofs" are based on basic axioms (like Zermelo-Fraenkel set theory), that themselves can not be "proven".
Which means that unless they are backed up by actual (statistically reliable) observation in the real world, it's all just a flight of fancy which as much justification to call itself science as religious beliefs.
Because in the real world, there is no such thing as "proof". Statistically reliable observation of theory predictions is the b
Re: (Score:2)
This.
Math is a religion [quoracdn.net]
Been there, done that - sort of. (Score:2)
This was more Computer Science than Math, but I'm throwing it out there as it sounds similar ...
I was an undergraduate research assistant from 1985-87 working for a professor at ODU, Dr. Chris Wild (now deceased), who was studying automated programming and proof techniques for abstract data types (e.g., stacks, queues, etc.) under a NASA grant (NAG 1-439). He was investigating methods for automatically generating code and proof of correctness from formal specifications. I implemented his ADT idea
Re: (Score:2)
Goodness, that brought back memories...of programming computational linguistics stuff in LISP and Prolog in that same timeframe, on some of those same machines. I could probably find some of our TRs, although I doubt they're on the web.
Not far from truth (Score:2)
Most often, proofs (that rely on case analysis) focus only on the important cases. The omitted cases are said to be obvious. Sometimes they are not.
Another class of mistakes is the usual off-by-one errors that are common i
Automated Theorem Proving is not new (Score:3)
This is not any sort of new idea. "Automated theorem proving" is a well established field. Just as an example, I worked briefly as a post-doc in Alan Bundy's group in Edinburgh in the early 1990s, creating systems that could automatically process proofs [springer.com]. The whole group was focused on automating the process of constructing and checking proofs, and they certainly weren't the only ones.
TFA refers to a relatively new framework (Lean [github.io]) that lets people enter proofs into a system. This system doesn't really bring anything new to the table, beyond perhaps a better UI. These systems work well enough for smaller proofs, but complex proofs are still constructed by fallible humans. Progress in this area is slow, because it's an incredibly hard problem.
That mathematician can s*** my COQ... (Score:2)
...or agda, or idris, or isabelle/hol.
If they use a programming language or theorem prover, their proofs will be machine verifiable.
We won't always understand why they are correct, especially as these systems incorporate better and better tactics, but we'll know they are. (This problem will also happen when we off-load proving stuff to AI. We'll know it's fine, just not why.)
The problem of understanding code from that point on will become a problem of code readability, and we do know how to deal with that i
Non-zero chance (Score:1)
Two types of scientists (Score:5, Interesting)
There are two types of scientists.
The first type sees science as a walled area. Once an article passes peer review and is published it has proven its merit and they trust it.
The second type sees publication as just one (big) step forward in credibility. These are the old style fundamentalist scientists.
I like the second type much more but it also takes a lot more work. The first category can work much faster, and that is very nice if the material they rely on is good. That category also includes the schemers who try to interfere with publication of articles they don't like.
Holy sh*t. All published math is wrong? (Score:2)
Re: (Score:1)
Fermat / Vice (Score:5, Insightful)
Absolutely nobody thinks Andrew Wiles's proof is the one Fermat had in mind, yet people act like the Fermat work is done. Proofs are given more weight than understanding and that infects physics as well.
Also, stop linking to the clickbait horseshit at Vice and use real sources.
Baker Fears (Score:1)
Slashdot as misleading headline testbed? (Score:3)
ALL published math? (Score:5, Informative)
Clicking on the Vice article the subtitle reads:
And this has a completely different ring to it. And makes the statement a lot more plausible.
Because could anyone really provide proof that ALL published math is wrong?
Thinking back at the very first math lectures at the university, where we started with set theory, we were taught to be very, very careful with absolute statements like "all" or "none". Because all that is required to invalidate such a statement is a single example to the contrary.
So which is it? (Score:2)
Is all published math wrong?
"I'm suddenly concerned that all of published math is wrong because mathematicians are not checking the details, and I've seen them wrong before,"
or is it just a small subset?
"I think there is a non-zero chance that some of our great castles are built on sand," Buzzard wrote in a slide presentation. "But I think it's small."
Re: (Score:2)
Re: (Score:2)
imaginary
Not exactly something new (Score:2)
Theoretical (Score:2)