
Miscalculation Invalidates LHC Safety Assurances 684

Posted by timothy
from the philosophy-of-science dept.
KentuckyFC writes "In a truly frightening study, physicists at the University of Oxford have identified a massive miscalculation that makes the LHC safety assurances more or less invalid (abstract). The focus of their work is not the safety of particle accelerators per se but the chances of any particular scientific argument being wrong. 'If the probability estimate given by an argument is dwarfed by the chance that the argument itself is flawed, then the estimate is suspect,' say the team. That has serious implications for the LHC, which some people worry could generate black holes that will swallow the planet. Nobody at CERN has put a figure on the chances of the LHC destroying the planet. One study simply said: 'there is no risk of any significance whatsoever from such black holes.' The danger is that this thinking could be entirely flawed, but what are the chances of this? The Oxford team say that roughly one in a thousand scientific papers have to be withdrawn because of errors but generously suppose that in particle physics, the rate is one in 10,000."
  • Voodoo Science (Score:5, Insightful)

    by alain94040 (785132) * on Wednesday January 28, 2009 @07:07PM (#26646369) Homepage

    This is voodoo science. And I don't mean the LHC experiments.

    I mean the TFA that in essence claims that because an expert may be wrong, any probability the expert assigns to a risk can be ignored and inflated by as much as you feel like. Talk about bias.


  • Grandstanding.. (Score:1, Insightful)

    by Matheus (586080) on Wednesday January 28, 2009 @07:14PM (#26646463) Homepage

    It's not like they actually showed an error in the calculations or showed any proof of danger.

    This is a bunch of bored brains saying, on the basis of pure statistics: if one in 10,000 papers has an error in it, then the probability of this paper having an error is 1 in 10,000; ergo any claim must be degraded by 10^-4?

    They should be spending their valuable time actually checking the facts and figures and coming up with some real conclusions, not some abstract theory on the reliability of scientific calculations.

  • by A Commentor (459578) on Wednesday January 28, 2009 @07:18PM (#26646517) Homepage
    Either they are right (the LHC is safe), and nothing happens. Or they are wrong, and no one is left to say anything about them being wrong.... ;-)
  • by zappepcs (820751) on Wednesday January 28, 2009 @07:20PM (#26646547) Journal

    The purpose of the LHC is noble, and results could be what we need to get off this rock and really dominate the galaxy. If they destroy the Earth... meh, it was a good try. Maybe next time.

  • by Werthless5 (1116649) on Wednesday January 28, 2009 @07:22PM (#26646579)

    Means that there is a much greater than zero probability? Sorry, either the paper is wrong or your interpretation of it is wrong. Publishing a probability is not a determination of that probability.

    There is no published figure regarding the probability of your computer turning into chocolate pudding before it reaches warranty. The probability is still approximately zero despite that.

    The probability of a black hole at the LHC swallowing the Earth is approximately zero, and it doesn't matter how many sensationalist journalists try to misconstrue real science in an effort to drum up sales.

  • Re:Voodoo Science (Score:5, Insightful)

    by caffeinemessiah (918089) on Wednesday January 28, 2009 @07:22PM (#26646603) Journal

    This is voodoo science. And I don't mean the LHC experiments.

    It's not science, it's just probability. It's senseless to try to assess any statistical estimate *itself* based on physics; you can only estimate the probability that it is wrong, based on some very broad assumptions. Specifically, any estimate is arrived at by a chain (rather, a DAG) of logic. What you CAN estimate is the probability that any physics-oriented estimate rests on incorrect assumptions, by (presumably) analyzing that chain of reasoning down to first principles and assuming that a "logic error" might have been made at any point. I hope the authors aren't taking it further than this; if they are, this is statistical masturbation.

  • by bluefoxlucid (723572) on Wednesday January 28, 2009 @07:25PM (#26646641) Journal

    Opponent: Oh crap, you're whacking things together, it could destroy the earth, crazy scary technology we don't understand!

    Proponent: That could never happen.

    Opponent: OMG yes it could you don't know wtf you only have studied this shit your whole life you're not a sane normal rational person like the boys in Alabama!

    Proponent: Look, we've done tons of calculations; we've compared this against real-world natural occurrences; we've considered the number of times the conditions we've come up with have occurred in our lifetimes, and it's huge. We're just scaling it down to a laboratory level so we can observe it in a controlled environment. It can't break anything.

    Opponent: BUT YOU COULD BE WRONG!!!!

  • Re:Voodoo Science (Score:5, Insightful)

    by Bob-taro (996889) on Wednesday January 28, 2009 @07:27PM (#26646669)

    I mean the TFA that in essence claims that because an expert may be wrong, any probability the expert assigns to a risk can be ignored and inflated by as much as you feel like. Talk about bias.

    Bias? Hype, maybe. Actually, this does make some sense, IMO. Say I was offering to shoot an apple off the top of your head and I told you I'd calculated there was only a 1 in 1 million chance of the bullet hitting you instead. Now if you knew (somehow) that there was a 1 in 10 chance I'd gotten the calculation wrong, you're going to look at it as more of a 1 in 10 chance of getting hit ... or at least way more than one in 1 million.

  • Re:Voodoo Science (Score:5, Insightful)

    by orclevegam (940336) on Wednesday January 28, 2009 @07:32PM (#26646737) Journal
    Essentially their argument boils down to this: because people make mistakes, and we can calculate the odds of them making a mistake, if someone calculates the odds of an event to be smaller than the odds of them having made a mistake, then you have to use the odds of a mistake as the probability of the event. Of course this reasoning is total bullshit, and just the sort of abuse that gets statistics a bad name. By that sort of reasoning we should all go play the lotto, since clearly the odds of someone miscalculating the chances of winning the lottery are much better than the calculated odds of winning; never mind the fact that even if they made a mistake in calculating the odds, it wouldn't shift the result enough either way to get it anywhere near the odds of a mistake.
  • by aliquis (678370) <dospam@gmail.com> on Wednesday January 28, 2009 @07:33PM (#26646743) Homepage

    I don't see the problem, facts:

    1) We will all die some day.
    2) The solar system will stop working some day.

    So what's the problem? Sure it may kill us and all life on the planet, but does it really matter? We're screwed anyway.

  • Re:Voodoo Science (Score:5, Insightful)

    by bhagwad (1426855) on Wednesday January 28, 2009 @07:39PM (#26646837) Homepage
    I agree. Just look at this statement: "The focus of their work is not the safety of particle accelerators per se but the chances of any particular scientific argument being wrong." Can you get any broader than that? What they're essentially saying is that anything can be wrong - Including their own paper.
  • by Anonymous Coward on Wednesday January 28, 2009 @07:40PM (#26646855)

    People are. Nobody who actually knows something about the subject is debating evolution.

  • Re:Voodoo Science (Score:2, Insightful)

    by Dastardly (4204) on Wednesday January 28, 2009 @07:44PM (#26646905)

    Actually, the LHC creates a set of circumstances that happens all the time. It just doesn't happen in front of very sensitive particle detectors at a very high rate. So, the LHC was built to replicate events that happen all the time, in front of sensitive instrumentation.

    So, yes, the LHC calculations could be somewhat off, but we have observations (not calculations) of events with much higher energies than the LHC can reach with cosmic rays hitting the earth's atmosphere and we are all still here. Jupiter is much bigger, so many more of those events occur on Jupiter. The sun is even bigger and many more high energy events occur for cosmic rays hitting the sun.

    The calculations for LHC safety for micro black holes come from trying to put a number on this: if these events could destroy a planet by creating a black hole, what is the probability that the Sun, Jupiter, or any other planet in the Solar System would still exist, given the number of high-energy LHC-like events that have occurred over the last 4.5 billion years? The probability must be incredibly small; what the LHC calculations do is put a value on incredibly small.

  • by Thiez (1281866) on Wednesday January 28, 2009 @07:45PM (#26646929)

    > I STILL don't think the LHC will kill us all but the fact we're debating it says something.

    Yes, it says that people are easily scared by things they do not understand. See also: wireless, mobile phones, things that have a 'chemical' smell... Ask some random people what would happen if the sun were to be replaced instantaneously by a black hole with a mass equal to that of the sun (moving in the same direction as the sun with the same speed, etc). Most people will reply that the earth would get 'sucked' in the black hole... if you don't even understand gravity you have no place in a debate concerning the LHC.

    Everyone is entitled to an _informed_ opinion.

  • Sensationalist BS (Score:5, Insightful)

    by ceoyoyo (59147) on Wednesday January 28, 2009 @07:46PM (#26646939)

    The article is a pile of BS topped by a sensationalist (and completely wrong) headline. The paper abstract is interesting, but that's it.

    Essentially the blog article makes the jump from 1 in 1000 papers being withdrawn because of "an error", any error, to the idea that the safety of the LHC is "invalid" due to a "massive miscalculation."

    How can a hypothetical miscalculation be "massive?" Anyway, you can't just take an average retraction rate for papers and assume it applies to anything you like. The arguments for the LHC being safe are based on well established science. That is, for the LHC to destroy the world not only would ONE paper have to be wrong, but a LOT of papers would have to be wrong, and all in the same direction.

  • Re:Voodoo Science (Score:5, Insightful)

    by KagatoLNX (141673) <kagato&souja,net> on Wednesday January 28, 2009 @07:48PM (#26646969) Homepage

    Actually, this isn't that much voodoo.

    It's just saying that, if someone has a 1/10,000 chance of being wrong, their assurance that there is a 1/1,000,000,000 chance of something isn't that good of a bet. In other words, if you want the latter level of certainty, you don't really have it, because of the fallibility of the research itself.

    This is actually rather obvious. If Jimbo tells you that there's a 1% chance that your tire will go flat if you don't fix it, that's not 1% if Jimbo is wrong 50% of the time. At best, it's 50.5%. Or something like that.

    Assuming his brother Jethro is just as bad (but uncorrelated) with him, then their dual recommendation that it will go flat only gets you 25.25% certainty, not 1% (or 0.01%). The numbers may not be exactly right (my stats are rusty), but you get the point.

    Basically, they're saying that the research provides a wider error bound than it may claim, assuming that scientists uniformly make logical mistakes--which they very probably do.

    The implication, then, is that the LHC estimates should be independently done by other teams. This is, well, the basis of the scientific method, so essentially this study provides a statistical analysis of what we already know--after enough work, science gets results. Of course, the base theories assumed by all of the researchers could be wrong, which would be unfortunate, but the LHC is going to nail that one pretty quickly. :)

    This is not surprising, but not voodoo either.
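    The Jimbo arithmetic in the comment above can be sketched as a worst-case bound. This is an illustrative sketch (the function name and the pessimistic "if the estimator is wrong, assume the event is certain" assumption are mine, not from the paper):

```python
def worst_case_bound(claimed_p, error_rate):
    """Upper bound on the event probability when the estimate may
    itself be wrong: if the estimator is right (prob 1 - error_rate),
    the event has probability claimed_p; if wrong, assume the worst
    case, i.e. the event is certain."""
    return claimed_p * (1 - error_rate) + 1.0 * error_rate

# Jimbo claims a 1% flat-tire risk but is wrong half the time:
# 0.01 * 0.5 + 1.0 * 0.5 = 0.505, i.e. the 50.5% figure above.
print(worst_case_bound(0.01, 0.5))
```

    Note the bound only says what you can still guarantee; the true probability could be anywhere below it.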

  • Re:Voodoo Science (Score:2, Insightful)

    by Imagix (695350) on Wednesday January 28, 2009 @07:48PM (#26646973)

    I mean the TFA that in essence claims that because an expert may be wrong, any probability the expert assigns to a risk can be ignored and inflated by as much as you feel like. Talk about bias.

    Bias? Hype, maybe. Actually, this does make some sense, IMO. Say I was offering to shoot an apple off the top of your head and I told you I'd calculated there was only a 1 in 1 million chance of the bullet hitting you instead. Now if you knew (somehow) that there was a 1 in 10 chance I'd gotten the calculation wrong, you're going to look at it as more of a 1 in 10 chance of getting hit ... or at least way more than one in 1 million.

    Not necessarily. That's only a 1 in 10 chance that I'd gotten the calculation wrong. What's the probability that the "error" that I made meant that the probability of you getting hit is now 1 in 10 million?

  • by Anonymous Coward on Wednesday January 28, 2009 @07:54PM (#26647047)

    Anonymous bystander: If proponent is wrong, we won't live long enough to know. If he's right, opponent is labeled a dummy forever.

  • Re:Voodoo Science (Score:3, Insightful)

    by WarJolt (990309) on Wednesday January 28, 2009 @07:58PM (#26647077)

    Particle collisions happen in nature.
    If we could that easily blink ourselves out of existence, then we'd see planets disappearing all the time and black holes would be everywhere.

  • Re:Voodoo Science (Score:5, Insightful)

    by IorDMUX (870522) <mark DOT zimmerman3 AT gmail DOT com> on Wednesday January 28, 2009 @08:09PM (#26647193) Homepage

    then you have to use the odds of them making a mistake as the probability of the event happening.

    This isn't what the actual study states, though the summary seems to hint that way. To quote from one of the FAs:

    Which means we are left with the possibility that their argument is wrong which Ord reckons conservatively to be about 10^-4, meaning that out of a sample of 10,000 independent arguments of similar apparent merit, one would have a serious error.
    Of course, this doesn't mean that the LHC is dangerous, only that there is no reasonable assurance of safety which, as Mark Buchanan writing in New Scientist this week says, is not the same thing at all.

    To sum it up, they say that if a researcher predicts an occurrence rate for an event that is less than the researcher's own error rate, then the occurrence rate remains unknown ('cannot be assured')... not that it is equal to the researcher's error rate.

  • awesome logic (Score:5, Insightful)

    by j0nb0y (107699) <jonboy300@yahooCOUGAR.com minus cat> on Wednesday January 28, 2009 @08:15PM (#26647285) Homepage

    paranoid person: The LHC is going to cause a black hole!
    scientist: No, the LHC is not going to cause a black hole.
    paranoid person: The chances of a scientist being wrong is 10%, therefore there is a 10% chance that the LHC will cause a black hole!

  • Re:Voodoo Science (Score:4, Insightful)

    by orclevegam (940336) on Wednesday January 28, 2009 @08:18PM (#26647321) Journal
    To use your example, whether or not Jimbo is wrong 50% of the time does not make the odds 50.5%, as what you're changing is the uncertainty, not the probability. Jimbo's ability (or lack thereof) to calculate a probability has no impact on the actual outcome, just on the likelihood that said probability is correct (or not). I'm sure the level of uncertainty in those calculations is already listed, and they might have a point if they claimed that the uncertainty should factor in the probability that the paper(s) it's based on are incorrect, but the way the article is written (and the even more inflammatory summary) makes it sound like they are arguing that the calculated probability of the event itself should be changed.
  • Meaningless Math (Score:3, Insightful)

    by champion.p (1462707) on Wednesday January 28, 2009 @08:26PM (#26647397)
    Well... sort of. In fact, you make the same mistake that the authors appear to in your logic.

    If Jimbo tells you that there's a 1% chance that your tire will go flat if you don't fix it, that's not 1% if Jimbo is wrong 50% of the time. At best, it's 50.5%.

    But you assume that Jimbo's being wrong means the probability of failure is 100%! It doesn't, necessarily. In fact, Jimbo might be wrong in that the probability of a flat tire is actually 0%, in which case his being wrong has helped you. If so, the total probability is 0.5%, much better than 1%. That is the best case; 50.5% is the worst case, and neither is "more likely", because we don't know what the conditional probabilities are. It's this fallacious reasoning -- that if the theory is wrong, the probability of the event must be greater -- that makes this article technically true, but useless. We cannot handpick these probabilities. From the TFA [arxiv.org] (not the abstract):

    The other unknown term in equation (1), P(X|not A) [read: the probability of the catastrophe given we're wrong], is generally even more difficult to evaluate, but lets suppose that in the current example, we think it highly unlikely that the event will occur even if the argument is not sound, and that we also treat this probability as one in a thousand.

    (emphasis and comment mine). I disagree. This probability is impossible to evaluate, and so this paper means nothing.
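    What this comment is walking through is just the law of total probability, P(X) = P(X|A)P(A) + P(X|not A)P(not A), with A = "the argument is sound". A small sketch with the comment's own illustrative numbers (function name mine) shows how the 0.5% best case and 50.5% worst case both arise from the unknowable P(X|not A):

```python
def total_probability(p_given_sound, p_sound, p_given_flawed):
    """Law of total probability:
    P(X) = P(X|A) * P(A) + P(X|not A) * P(not A),
    where A is the event 'the argument is sound'."""
    return p_given_sound * p_sound + p_given_flawed * (1 - p_sound)

# Jimbo: claims 1% risk, and his argument is flawed half the time.
best = total_probability(0.01, 0.5, 0.0)   # flaw means risk is really 0%
worst = total_probability(0.01, 0.5, 1.0)  # flaw means risk is really 100%
print(best, worst)  # 0.005 and 0.505
```

    Everything hinges on the last argument, which is exactly the term the paper admits is "difficult to evaluate".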

  • Re:Voodoo Science (Score:3, Insightful)

    by qeveren (318805) on Wednesday January 28, 2009 @08:29PM (#26647447)

    A black hole can form in any region where the energy density is greater than a certain threshold (which is a function of the total energy involved). As the amount of energy (or mass) involved increases, the more relaxed this threshold becomes.

    For example, if one were to fill the solar system with air (at sea level density, 1.2 kg/m^3) out to about 77 AU, it would be a black hole. For the Sun's mass to become a black hole, it would need to be much more dense, by about 15 million trillion times.

    For the relatively small amounts of energy involved in LHC collisions the density needed to form a black hole is staggeringly enormous, but still not impossible to reach. Of course, even if a black hole did form, Hawking radiation would destroy it pretty much instantaneously.

    When it comes right down to it, though, the odds of creating a dangerous black hole are effectively zero, as evidenced by the fact that the various bodies of the solar system aren't black holes.
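    The 77 AU figure in the comment above can be sanity-checked from the Schwarzschild condition. A sketch (the constants and the uniform-density simplification are mine): a uniform sphere of density rho is a black hole when its radius equals its Schwarzschild radius R_s = 2GM/c^2 with M = rho * (4/3) * pi * R^3, which solves to R = c * sqrt(3 / (8 * pi * G * rho)).

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s
AU = 1.496e11   # astronomical unit, m

def black_hole_radius(rho):
    """Radius at which a uniform sphere of density rho (kg/m^3)
    equals its own Schwarzschild radius."""
    return c * math.sqrt(3.0 / (8.0 * math.pi * G * rho))

rho_air = 1.2  # sea-level air density, kg/m^3
print(black_hole_radius(rho_air) / AU)  # roughly 77 AU, as claimed
```

    The same formula also shows why small masses need absurd densities: R scales as 1/sqrt(rho), so halving the radius requires quadrupling the density.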

  • by vyrus128 (747164) <gwillen@nerdnet.org> on Wednesday January 28, 2009 @08:39PM (#26647557) Homepage

    Seriously, nothing to see here. This is truly an embarrassment to Slashdot (if that's even possible). Just move along.

  • Voodoo posting (Score:5, Insightful)

    by Burning1 (204959) on Wednesday January 28, 2009 @08:42PM (#26647597) Homepage

    This is actually rather obvious. If Jimbo tells you that there's a 1% chance that your tire will go flat if you don't fix it, that's not 1% if Jimbo is wrong 50% of the time. At best, it's 50.5%. Or something like that.

    Okay seriously?

    The probability that Jimbo is wrong is unrelated to the probability of your tire failing. If Jimbo says that you have a 1% chance of your tire failing, but there's a 50% chance that Jimbo is wrong, we can reach the following conclusion: there is a 50% chance that your tire has a 1% chance of failing, and a 50% chance that your tire has some other probability of failing. "Some other probability" includes values such as 0%, 0.5%, and 2%. It also includes a 100% probability of your tire failing.

    However, we have to assume that Jim isn't pulling the 1% figure out of his ass. Even if your tire were 100% likely to fail, we can still assume that Jim based his statement on a reasonable analysis. Perhaps Jim didn't notice a nail in your tire, but without knowing the quality of Jim's inspection, or without having access to information Jim doesn't have, it's hard to say that he has a 50% chance of being wrong.

    Finally, in some cases a professional will include a certain amount of leeway in his figure. Chances are, Jim fully inspected the tire and doesn't see any reason why it would fail prematurely. Chances are, that 1% is left as wiggle room in case of an invisible manufacturing defect or a mistake in his evaluation. In this case, Jim has already factored the chance that he's incorrect into his evaluation.

  • The real risk (Score:4, Insightful)

    by TopSpin (753) * on Wednesday January 28, 2009 @08:48PM (#26647657) Journal

    If CERN leaves the window open long enough by failing to produce real collisions in the LHC that don't destroy the planet, the alarmists WILL achieve their goals and get it shut down. Have no doubt. Politicians of all stripes thrive on alarmist nonsense. This 'story' is exactly the sort of double-speak that can lend just enough credibility to the alarmist argument to get the ball rolling.

  • Re:Voodoo Science (Score:2, Insightful)

    by Anonymous Coward on Wednesday January 28, 2009 @08:49PM (#26647673)
    Define 'independent study'. Since actually getting a study published involves piercing a significant layer of orthodoxy and political correctness, I have to doubt your formula. Global warming, for instance, is the premier example of the political establishment of scientific truth, with or without the evidence. When the government will only fund studies that seek to establish the conclusion it wants to hear, and scientists will only propose studies they think will receive funding, the result is a circular feedback loop, a bureaucratic tautology which proves nothing and serves only to make one opinion/conclusion on the subject viable and to discredit other opinions and studies. In our day the government plays the role of political patron that the church did a few hundred years ago, when it was inexorably tied to political power in Europe. Thus, if Galileo were proposing his heliocentric theory in our day and age he wouldn't be burned at the stake; he'd just receive no funding, be sidelined in scientific circles, relegated to teaching and not researching, and then marginalized politically. I'm not sure which fate is worse. At least burning people at the stake is obviously and unambiguously wrong.
  • Re:Voodoo Science (Score:5, Insightful)

    by Artraze (600366) on Wednesday January 28, 2009 @08:49PM (#26647683)

    If you took the ten seconds needed to read the abstract, you'd clearly see it's the former:

    "... If the probability estimate given by an argument is dwarfed by the chance that the argument itself is flawed, then the estimate is suspect. We develop this idea formally, explaining how it differs from the related distinctions of model and parameter uncertainty. Using the risk estimates from the Large Hadron Collider as a test case, we show how serious the problem can be when it comes to catastrophic risks and how best to address it."

    In other words, since the upper bound on a catastrophic outcome is at least the probability that they were wrong, it's important to estimate the missing factor.

    Of course, the problem underlying this is the fact that if one _could_ calculate the missing factor, it wouldn't be an issue. In the case of the LHC, it is (probably :P) far more likely that the world would be destroyed by some yet-unknown physics (e.g. "the doctor" from Ender's Game) than by black holes. But, since it's impossible to predict the likelihood of something we don't know anything about, at some point one just has to throw the switch and see what happens.

    Bad journalism, solid (enough) science. As always...

  • Re:Voodoo Science (Score:5, Insightful)

    by Anonymous Coward on Wednesday January 28, 2009 @08:56PM (#26647739)

    Actually the point the article makes is not that there is a 1/1000 chance that the LHC will destroy the world, but rather that it is meaningless to say the odds are as small as the safety reports claim, because the chance of the reports being wrong is greater than their predictions.

    It basically boils down to this: the scientists say there is a one-in-a-billion chance that the LHC is dangerous, then turn around and say there is a 1/1000 chance that that figure is wrong. Neither statistic is very helpful, since the second invalidates the first but tells you nothing about the actual probability of a dangerous event.

  • by Anonymous Coward on Wednesday January 28, 2009 @09:04PM (#26647837)

    With all this uncertainty, it does however highlight two certainties.

    First, they have proved they can make mistakes. (While this should be obvious, it is so often assumed that, as they are the best of us, they must know what they are doing.)

    Second, it proves they do not know precisely what they are doing. (Again, this should be obvious, as there would be no point in building the LHC if they knew precisely what was going to happen. But it again highlights how it's assumed they do know what they are doing, when in fact they cannot know.)

    This doesn't prove the LHC is dangerous, but it does prove they cannot prove the LHC isn't dangerous.

    At the same time, we have theories which can show possible dangers. Now possible doesn't mean probable, but it also doesn't mean impossible.

    Even the argument about atmospheric collisions is flawed, as the set of conditions inside the LHC is different from that in the atmosphere. For example, atmospheric collisions are very unlikely to involve many Higgs bosons colliding with each other, whereas in the LHC that is possible, and that's just one example of a difference. Also, we have no idea how multiple Higgs bosons would behave or decay in groups, or whether that would allow them to interact or merge with other particles, or how continuing collisions would affect them.

    I don't believe they would ever stop these experiments, as too many people involved with the science (and the money behind the LHC) have such an intense desire to learn from them. But I do at least hope they use extreme caution and only slowly (over a period of many months) move to higher-energy collision experiments, in very small increments. While it's easy to assume they will, they have shown too many times how worried they are that other experiment teams will get to the Nobel-prize-winning results first, so they are under extreme pressure to rush into the higher-energy experiments and show results fast.

    This is the only experiment in human history where we cannot learn from our mistakes. We have to be 100% certain each new step up is safe before it is even attempted. (Too many mistakes have already been made, and we have yet to even get into the more possibly dangerous aspects of the experiments.)

  • by Estanislao Martínez (203477) on Wednesday January 28, 2009 @09:16PM (#26647953) Homepage

    In fact, you make the same mistake that the authors appear to in your logic.

    No, it's not a mistake. It all comes down to the fact that there are two general types of interpretations of probability:

    1. The frequency at which one of the possible outcomes happens in repeated instances of an event of a specified type. For example, the probability of heads in a coin toss.
    2. The degree of belief that a cognitive agent assigns to a sentence. This degree of belief is related by the laws of probability to the degree of belief that an agent should assign to other sentences, in such a way that only some assignments are consistent (by a technical definition I won't go into here).

    Basically, you're treating this as an argument about probability in the first sense, when it is really about probability in the second sense. The argument is that even if your formulas lead you to assign a degree of confidence of .00000000000001 to the proposition that the LHC will destroy the Earth, that means very little if we assign a degree of confidence of .000001 to the proposition that you are wrong.

    The point now, which other posters in this thread have made in other ways, is that the frequency model for probability theory is not relevant here, because this situation is not like a coin toss. For the situation to be like a coin toss, we would have had to do something like run the LHC a gazillion times, and observe how many of those times it ended up destroying the Earth. Therefore, the probabilities must be interpreted as degree of belief, and the number produced by any formula must be tossed out if the probability of getting the formula wrong is bigger than that number.

    It's this fallacious reasoning -- that if the theory is wrong, the probability of the event must be greater -- that makes this article technically true, but useless.

    The assumption you're making here is that the number is the "probability of the event." Again, it is not; it is the degree of belief warranted to a specific proposition, given some other information.

  • by jnaujok (804613) on Wednesday January 28, 2009 @09:24PM (#26648003) Homepage Journal

    Their whole study is voodoo. This isn't physics, it's just math. Simple math doesn't have grey areas.

    The maximum output for the LHC is in the 10^15eV range. That's the same as many cosmic rays. In fact, the rate of cosmic ray impact at about 10^15eV is about one impact, per square meter of the Earth's surface, per year.

    The Earth's surface area is 5.10227658 × 10^14 square meters. We can assume cosmic rays have been pouring in at roughly the same rate for about 4.6 billion years, the age of the Earth.

    That means we have not seen an "Earth Ending Event" in 2.34704723 × 10^24 chances. And that ignores the fact that a large portion of cosmic rays come in with an energy GREATER THAN 10^15eV.

    There's no gray area here. Even assuming that we've gotten incredibly lucky and the very next cosmic ray impact on the Earth will cause a black hole... oops, not that one; no, the next one; oh wait (never mind)... The odds of any single collision causing a black hole event would be at best in the range of 1 in 1×10^24. That's not one in a hundred billion; that's:

    1 : 1,000,000,000,000,000,000,000,000

    That's one in a septillion. That's close to the number of stars in the visible universe. This whole thing is nothing but a ridiculous anti-science rant.
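    The arithmetic in this comment is easy to check; the sketch below just multiplies the commenter's own figures (one >= 10^15 eV impact per square meter per year, over the age of the Earth -- the rate itself is the comment's assumption, not mine):

```python
import math

earth_radius = 6.371e6  # mean radius of the Earth, m
surface_area = 4 * math.pi * earth_radius**2  # about 5.1e14 m^2
age_years = 4.6e9       # assumed age of the Earth, years
rate = 1.0              # impacts per m^2 per year at ~10^15 eV

# Total number of natural LHC-energy collisions with no catastrophe:
impacts = surface_area * age_years * rate
print(f"{impacts:.3g}")  # on the order of 2.3e24
```

    One impact per square meter per year over the whole surface and the whole 4.6-billion-year history reproduces the ~2.35 × 10^24 figure quoted above.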

  • by spiedrazer (555388) on Wednesday January 28, 2009 @09:24PM (#26648009) Homepage
    A black hole CAN NOT BE CREATED BY US!!! Even if several thousand atoms' worth of matter were smashed together into an area one millionth of an atomic nucleus, one thousand atoms' worth of gravity doesn't amount to anything on the scale of the real world. Even if those atoms stayed in that configuration for many seconds or minutes, they still wouldn't have enough mass to create gravity that could start pulling in other matter, especially since the collisions are set up in a very high vacuum and all the surrounding matter (sensors etc.) is bolted very tightly to a very sturdy base. The fact that people continue to debate this issue just astounds me. A tiny bit of concentrated matter is still only a tiny bit of matter, no matter how much you concentrate it! Remember, a true black hole has the mass of a star in an area the size of a single atomic nucleus, so that's some pretty concentrated mass. You can hang a lead ball on a 2000-foot string next to a granite mountain face and only barely detect the deflection of the ball on the string. Gravity is a very weak force, people.
  • Re:Voodoo Science (Score:5, Insightful)

    by SEE (7681) on Wednesday January 28, 2009 @09:26PM (#26648031) Homepage

    But for the LHC, arguably there is no accurate prior because nothing in that energy range has ever been done before

    How many natural events involving hadrons in LHC+ energy ranges do you need?

    99% of cosmic rays are made of hadrons (mostly protons and helium nuclei, but heavier nuclei are known), and they regularly collide with other objects made of hadrons (most of the mass of the visible universe) at LHC-plus energies.

    Want me to worry about the LHC? Tell me when a cosmic ray collision has turned the Sun into a black hole or strange matter or new Big Bang or whatever your LHC disaster scenario is.

  • by Roger W Moore (538166) on Wednesday January 28, 2009 @09:33PM (#26648097) Journal
    It's both right and wrong. The conclusion that we can't trust the probability of disaster if we got it wrong is correct...bloody obvious, but correct. The part where they use the population of the Earth to determine whether the LHC "risk" is acceptable is frankly insane. This seems to suggest that if Bird flu wipes out half the population then the "risk" of running the LHC is suddenly now more acceptable?
  • Re:Voodoo Science (Score:4, Insightful)

    by Roger W Moore (538166) on Wednesday January 28, 2009 @09:45PM (#26648205) Journal

    The implication, then, is that the LHC estimates should be independently done by other teams.

    But how can they be independent? They'll be basing their arguments on the same laws of physics, which, by the paper's own logic, have a 1 in 10,000 chance of being wrong. The HUGE flaw in their assumption is that the probability of a paper being wrong is a flat 0.01%. It is not. Some papers use conservative, well-established physics (such as the LHC safety report); others are pushing the boundaries. The LHC safety report uses the simple fact that we do not see planets and stars disappear into black holes to set a limit on any danger the LHC poses. Could there be a mistake in the calculation of the actual probability? Yes, there could. But it cannot be significantly different, because we do not see stars and planets disappear!

  • by aliquis (678370) <dospam@gmail.com> on Wednesday January 28, 2009 @09:54PM (#26648307) Homepage

    Who would remember if we all died?

    At the end of days, at the end of time.
    When the Sun burns out will any of this matter.
    Who will be there to remember who we were?
    Who will be there to know that any of this had meaning for us?
    And in retrospect I'll say we've done no wrong.
    Who are we to judge what is right and what has purpose for us?
    With designs upon ourselves to do no wrong,
    running wild unaware of what might come of us.
    The Sun was born, so it shall die, so only shadows comfort me.
    I know in darkness I will find you giving up inside like me.
    Each day shall end as it begins and though you're far away from me
    I know in darkness I will find you giving up inside like me
    Without a thought I will see everything eternal,
    forget that once we were just dust from heavens fire.
    As we were forged we shall return, perhaps some day.
    I will remember you and wonder who we were.

  • Re:Voodoo Science (Score:3, Insightful)

    by yancey (136972) on Wednesday January 28, 2009 @10:42PM (#26648661)

    Isn't this exactly the sort of physics that the LHC machine was designed to investigate? The Higgs boson and particle mass, to be sure; that's what we always hear about, but it's more than that. The LHC will be brought up to full power gradually, over a series of incremental tests and experiments spanning months and years, looking for anything unusual in the data, something we haven't anticipated. The data from those experiments can be examined for signs of black hole formation. If they do appear anywhere below the LHC's maximum energy, that data can be analyzed before taking the next step, and so on. We feared the sound barrier, feared fusion weapons, feared nuclear power reactors, feared space, and so on. With each of those, we experimented, we learned, and we came to accept each in time.

  • Re:Voodoo Science (Score:5, Insightful)

    by ppanon (16583) on Wednesday January 28, 2009 @11:33PM (#26649045) Homepage Journal
    Hmm. Well, the paper's argument is like saying that, if the average number of bugs (across all software and methodologies) in N lines of code is X, then somebody's claim that they have written a piece of software with M bugs in Y lines of code, where M/Y << N/X is bogus.

    This is patently ridiculous. If I write a relatively small piece of software where I have carried out a formal mathematical proof of the algorithms used in that software, I should obtain a much better bug ratio than the industry average, which includes work done by code monkeys working 90 hour work weeks.

    Put another way, it's not clear to me that the statistical results for papers where an error might mean a measured loss of academic status are relevant to papers where the analysis regards the possible destruction of the Earth. So far the sample size on the latter is pretty small but the ones that have predicted the absence of global life-ending catastrophe have been 100% accurate. Of course they would have to be or we wouldn't be around to speculate about it, so we can't really make a conclusion from that either. But the point is that the foundation of this paper's statistical argument is itself invalid.
  • by twostix (1277166) on Wednesday January 28, 2009 @11:53PM (#26649163)

    "Honestly, if the human race has to end, that is exactly how I want us to go out."

    You or a handful of individuals anywhere don't get to choose that. It's unspeakably arrogant to even hold a fleeting thought that you do, and the real world and people in it otherwise known as the human race will smack you down the moment you attempt to apply it to real life.

    And it's for that very reason that large projects like the LHC come up against so much opposition: fear of the unknown, fueled by arrogant, juvenile man-children spouting utter garbage like the above and reaffirming to the average man on the street the belief that the 'scientific community' is very much a separate group of crazies who can't be trusted not to kill everyone. Funnily, the majority of scientists themselves are not the ones who talk this sort of rubbish; it's the hangers-on, the zealots, and the fanboys. But to the wider community it all looks the same. In this thread alone, it's currently about 50/50 scientific arguments vs. rubbish like this.

    If you don't care about your own life that's fine. But don't expect the average man on the street to ever accept the risk of death to themselves and families for your particular cause.

  • Re:Voodoo Science (Score:5, Insightful)

    by bucky0 (229117) on Thursday January 29, 2009 @12:53AM (#26649539)

    But for the LHC, arguably there is no accurate prior because nothing in that energy range has ever been done before.

    We are regularly bombarded with particles with 10^6 times more energy than the LHC produces. We can observe interactions much more intense than that in the visible universe.

    If all the scientists were wrong about their risk estimates, we should have observed those naturally occurring events at some point.

  • by techno-vampire (666512) on Thursday January 29, 2009 @01:44AM (#26649827) Homepage
    Second, it proves they do not know precisely what they are doing.

    Of course they don't know precisely what they're doing. That's why what they're doing is called an experiment. If they did know precisely what they were doing, it wouldn't be an experiment, would it?

  • by ebuck (585470) on Thursday January 29, 2009 @03:27AM (#26650345)

    Basically all the arguments for black hole creation fail when you ask the question, "Where are you going to get all the mass to create the black hole?"

    A black hole has much more mass than our planet. Energy released from the destruction of mass is supposed to be very large; even if it were possible to convert energy into mass at the LHC, the mass gain should be negligible.

  • by PMBjornerud (947233) on Thursday January 29, 2009 @05:41AM (#26650979)

    Exactly. We rolled the dice once with the Manhattan Project. Before the first nuclear bomb was detonated, no one could prove with 100% certainty that the bomb would not ignite the entire planet's atmosphere. They could show that it was very unlikely to happen, but not impossible. So the dice were rolled and we got lucky. How many times can we roll the dice before our luck runs out?

    When humans created the first man-made fire, nobody could prove with 100% certainty that the fire "would not ignite the entire planet's atmosphere".

  • by Alomex (148003) on Thursday January 29, 2009 @06:55AM (#26651319) Homepage

    The headline and summary are misleading, but the main point of the paper stands. Once we are talking about probabilities of one-in-a-million or less, second-order terms come into play.

    Example: the probability of the blood "not being from OJ Simpson" was declared to be "one-in-six-billion". At those orders of magnitude, the probability of an unknown-to-him identical twin is higher than that. Of course I'm not claiming he has one. In all likelihood he doesn't; it's just that the probability of that event is around 1-in-100-million, which far outweighs the 1-in-6,000,000,000 given by the genetics "expert".

    So the correct thing to say is that the chance of the blood not being OJ's is one-in-100,000,000. Good enough for me to convict, and scientifically accurate. The other figure is nonsense.
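The correction described here, that a quoted probability cannot meaningfully be smaller than the probability that the argument behind it fails, can be sketched in a few lines (the twin figure is the commenter's assumed number, not a measured one):

```python
def corrected_probability(p_claimed, p_argument_fails):
    # A claimed probability is floored by the chance that the
    # argument producing it is itself wrong.
    return max(p_claimed, p_argument_fails)

p_dna_wrong = 1 / 6e9      # the expert's "one in six billion"
p_unknown_twin = 1 / 1e8   # assumed chance of an unknown identical twin

print(corrected_probability(p_dna_wrong, p_unknown_twin))  # 1e-08
```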

  • by Morty (32057) on Thursday January 29, 2009 @07:45AM (#26651577) Journal

    Black holes do not require lots of mass, they require lots of density. If matter is packed into an area less than that matter's Schwarzschild radius, you have a black hole. There is a real theory that this experiment will create a black hole. However, the same theory that says that a black hole could be created also says that black holes should be created all the time in Earth's upper atmosphere. Small black holes are harmless because they rapidly evaporate. Regardless of what will be created, the LHC is just recreating events that occur all the time in our upper atmosphere, so saying that it could be harmful is kinda stupid -- if there were a significant risk, we would already be dead.
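The density-not-mass point can be made concrete with the classical Schwarzschild radius r_s = 2GM/c^2. This is only a scale illustration with textbook constants; the extra-dimension models that actually predict micro black holes at the LHC modify this scaling:

```python
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8     # speed of light, m/s

def schwarzschild_radius(mass_kg):
    # Radius below which a given mass forms a (classical) black hole.
    return 2 * G * mass_kg / C**2

# Upper bound: all 14 TeV of LHC collision energy converted to mass.
joules_per_tev = 1.602e-7
m_collision = 14 * joules_per_tev / C**2

print(f"LHC collision: {schwarzschild_radius(m_collision):.1e} m")   # ~3.7e-50 m
print(f"Earth (5.97e24 kg): {schwarzschild_radius(5.97e24):.1e} m")  # ~8.9e-3 m
```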

  • Re:Clarifications (Score:1, Insightful)

    by Anonymous Coward on Thursday January 29, 2009 @07:56AM (#26651629)

    "The overall risk is very small, but larger than the raw calculations suggest, and non-negligible when there are 6.5 billion lives at stake."

    How can you conclude this? You state that there is a chance that this calculation is flawed. Well... that's obvious.

    The probability of an estimate being flawed says exactly nothing about what the correct estimate should be.

    Indeed, the error in the estimate could go either way. It could even be that the chances are in fact even smaller.

    The reality is, however, that in all probability the calculation is correct.

    You state that you "really want to know what the chance of the disaster happening [is]". Well in that case: look at the calculations and see if any flaws can be found instead of generalizing about "a probability of any scientific study being flawed".

    Anyway, if you really are interested in that probability, the factors that influence it should be studied: number of reviews, type of study, qualifications of the authors, etc. That would give you a model for judging the real chance of mistakes in this particular study.

    Further, you would have to find a link between the flaws and the degree to which the results are wrong, so you could estimate the chance that this particular study is so incredibly wrong that the authors overlooked a factor that would make the disaster a real possibility.

  • Re:Voodoo Science (Score:3, Insightful)

    by jandersen (462034) on Thursday January 29, 2009 @08:56AM (#26651991)

    In other words, since the upper bound of a catastrophic outcome is at least the probability that they were wrong

    It is not clear that this is the case. In fact: P(X) = P(X|A)P(A) + P(X|¬A)P(¬A) [from the actual article]. Your interpretation is only correct if the probability that it all goes wrong is 100% whenever the assumptions are wrong.
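The total-probability formula from the article can be evaluated with illustrative numbers (none of these figures are from the safety report; they only show how the flaw term dominates the sound-argument term):

```python
# P(X) = P(X|A)P(A) + P(X|not A)P(not A), where A = "argument is sound".
p_flawed = 1e-4         # the paper's generous 1-in-10,000 flaw rate
p_x_if_sound = 1e-12    # assumed disaster chance if the argument holds
p_x_if_flawed = 1e-3    # assumed disaster chance if it does not

p_x = p_x_if_sound * (1 - p_flawed) + p_x_if_flawed * p_flawed
print(f"{p_x:.1e}")  # 1.0e-07 -- the flaw term swamps the original estimate
```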

  • Re:Voodoo Science (Score:3, Insightful)

    by HarvardAce (771954) on Thursday January 29, 2009 @02:44PM (#26656583) Homepage

    Hmm. Well, the paper's argument is like saying that, if the average number of bugs (across all software and methodologies) in N lines of code is X, then somebody's claim that they have written a piece of software with M bugs in Y lines of code, where M/Y << N/X is bogus.

    This is patently ridiculous. If I write a relatively small piece of software where I have carried out a formal mathematical proof of the algorithms used in that software, I should obtain a much better bug ratio than the industry average, which includes work done by code monkeys working 90 hour work weeks.

    To apply what the article is saying to your analogy: it would refute your "bug-free code" by pointing out that the "formal mathematical proofs" you are using may themselves be flawed. By basing your "proof" on things that might have bugs in them, it's quite possible that your software has bugs.

    A much better analogy using software would be the following:
    Suppose you write some code that has a 99.9% chance of being bug-free. You could then state that this program has a 99.9% chance of being bug-free. However, if you now compile it with a compiler that has a 1% bug rate, you can no longer say that your compiled program has a 99.9% chance of being bug-free. At best, you can say that it has a 99% chance of being bug-free.

    In much the same way, the original calculations were done assuming certain things were 100% accurate. The point of this article is that those things, when empirically examined, are only correct 99.99% of the time. If your "axioms" are only 99.99% correct, then you cannot prove anything with them to be more than 99.99% certain.
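The compounding step in the compiler analogy is just multiplication of independent reliabilities (the independence of the two failure modes is itself an assumption):

```python
p_source_ok = 0.999    # chance the source code is bug-free
p_compiler_ok = 0.99   # chance the compiler introduces no bug

p_binary_ok = p_source_ok * p_compiler_ok
print(round(p_binary_ok, 5))  # 0.98901 -- capped below the weakest link
```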
