Science

Higgs Signal Gains Strength

ananyo writes "Today the two main experiments at the Large Hadron Collider, the world's most powerful particle accelerator, submitted the results of their latest analyses. The new papers (here, here and here) boost the case for December's announcement of a possible Higgs signal. Physicists working on the Compact Muon Solenoid experiment have been able to look at another possible kind of Higgs decay, which allows them to boost their Higgs signal from 2.5 sigma to 3.1 sigma. Taken together with data from the other detector, ATLAS, the overall Higgs signal now unofficially stands at about 4.3 sigma."
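As a rough sanity check on that 4.3-sigma figure: a textbook way to combine significances from independent experiments is Stouffer's method, which sums the z-scores (sigma levels) and rescales by the square root of their count. The sketch below is only a back-of-envelope illustration; it assumes a hypothetical ATLAS contribution of about 3 sigma, whereas the collaborations' real combination has to handle correlated systematics and the look-elsewhere effect.

```python
import math

def stouffer(*z):
    """Stouffer's method: combine z-scores (sigma levels) from
    independent one-sided tests into a single z-score."""
    return sum(z) / math.sqrt(len(z))

# CMS at 3.1 sigma plus an assumed ~3.0 sigma from ATLAS
# (hypothetical input, not an official number).
print(f"combined: {stouffer(3.1, 3.0):.1f} sigma")  # ~4.3
```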

  • Re:Eh? (Score:5, Informative)

    by zero.kalvin ( 1231372 ) on Tuesday February 07, 2012 @07:39PM (#38961185)
    Certainty? From a scientific point of view? Sigmas, in a way, tell you how probable it is to get these results: the more sigmas you have, the more improbable it is to get these results without invoking some other model/theory, etc. So 4.3 is good, but not good enough; we need at least 5 sigmas. (What I said is not 100% correct, but it's a rough explanation.) http://en.wikipedia.org/wiki/Standard_deviation [wikipedia.org]
  • Re:Eh? (Score:5, Informative)

    by ArAgost ( 853804 ) on Tuesday February 07, 2012 @07:44PM (#38961233) Homepage
    4.3 sigma corresponds to a confidence level of 99.998292% (credit to Wolfram|Alpha). This is about as certain as death and taxes compared to “everyday” events, but maybe it's not enough for theoretical physicists (I'm not one).
  • Re:Eh? (Score:5, Informative)

    by olsmeister ( 1488789 ) on Tuesday February 07, 2012 @07:44PM (#38961237)
    I think they usually require 5 sigma (99.9999426697%) for it to be official.
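The percentages in these comments are easy to verify: for a normal distribution, the two-sided confidence level at n sigma is erf(n / sqrt(2)). A minimal Python sketch using only the standard library:

```python
import math

def confidence(sigma):
    """Two-sided confidence level P(|X - mu| < sigma * sd)
    for a normal distribution."""
    return math.erf(sigma / math.sqrt(2))

for s in (3.1, 4.3, 5.0):
    print(f"{s} sigma -> {100 * confidence(s):.10f}%")
# 4.3 sigma -> ~99.998292%, 5.0 sigma -> ~99.9999426697%
```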
  • Re:Eh? (Score:5, Informative)

    by nomel ( 244635 ) <turd@inorMENCKENbit.com minus author> on Tuesday February 07, 2012 @08:25PM (#38961539) Homepage Journal

    Stupidly assuming you're talking American "football", 119.99993120364 yards, or 0.00247666896 inches from the line.

  • by slew ( 2918 ) on Tuesday February 07, 2012 @09:03PM (#38961873)

    Let's see, do any of these require exotic particle theory?

    Synchrotron light source? Uses good old Maxwell's equations to steer electron beams with magnetic fields and produce X-ray radiation...

    Superconducting wire? The most viable theory behind Cooper pairing is the QM electron-phonon interaction, which doesn't need any exotic particle theory...

    PET? That uses a simple radioactive sugar (glucose fluorinated with radioactive fluorine-18), and the gamma rays from the resulting positron annihilations are imaged...

    Not to say that standard-model exotic particle theory isn't interesting, or doesn't explain certain physical things or certain astrophysical phenomena, but unlike QM, theoretical work on exotic particles has yet to prove economically useful. Century-old QM theory, on the other hand, has helped us design flash memory, lasers, GMR disk-drive heads, and IC lithographic equipment, and has proven useful for racetrack memory, spintronics, quantum-dot memory, and maybe some day (economical) quantum computers.

    Perhaps the time will come when standard-model sub-atomic theory delivers a big economic payback, but it hasn't happened yet. This might have a lot to do with the fact that, other than the standard hadrons (proton, neutron), the electron, the photon, and the practically invisible neutrino, we don't see much, if any, of the other particles except as cosmic radiation or inside particle accelerators, which means that economically they are more of a nuisance than something to exploit. Who knows, maybe even the standard model is wrong and we won't see anything economically useful from this theory of exotic particles, but maybe from its successor theory. We just don't know yet.

    It's easy to overestimate the impact of new theories. I'll wager that most cars today are still designed mostly assuming Newtonian dynamics, and, even more primitively, they got to the moon with a very low-precision value for pi. Someday theories prove their worth, just as QM did, so it's worth investing, but overstating the case isn't intellectually honest.

    To offer a more relatable analogy for this audience: if you are a computer programmer, your boss may indirectly invoke Turing computability theory to claim that it isn't impossible for you to write the program he wants, and perhaps P vs. NP is in the back of your mind when you look for algorithms, but the latest computability theory about NP-intermediate problems probably doesn't yet have any economic value to anyone (after all, they are still NP problems even if not NP-complete). It might be valuable some day, though...

  • Re:Eh? (Score:4, Informative)

    by mikael ( 484 ) on Tuesday February 07, 2012 @09:13PM (#38961939)

    Wikipedia has a good explanation at The 68-95-99.7 rule [wikipedia.org]

    The number of sigmas is a way of summarizing how much of the area under the bell curve is covered, or how far out toward one tail of the bell curve you are. Being further out means less chance that the result is a random fluke.

    From the page:
    +/- 1 sigma = 1 in 3 chances of being wrong
    +/- 2 sigma = 1 in 22
    +/- 3 sigma = 1 in 81
    +/- 4 sigma = 1 in 15,787
    +/- 5 sigma = 1 in 7,444,278
    +/- 6 sigma = 1 in 506,797,346
    +/- 7 sigma = 1 in 390,682,215,445

  • by Anonymous Coward on Tuesday February 07, 2012 @09:31PM (#38962141)

    The argument against direct economic benefits from modern high-energy physics is stronger than you think. All the examples you give are for particles that had clearly measurable signatures in the 1930s. The Higgs and other particles that might be detected for the first time in the 21st century have such incredibly tiny effects on our world that we haven't been able to measure them despite looking diligently for a long time (40 years since the publication of the standard model). We can indeed engineer without knowing everything about what the universe is made of; in fact, few engineers learn quantum mechanics and essentially none learn general relativity. All that is required to engineer is a model that gives predictions at the accuracy needed for design. And I can quite confidently predict that no engineering design in the next century is going to need the Higgs mass or anything beyond the standard model. That said, of course we should keep trying to figure out what the universe is made of, both because it is very interesting and because it may matter for engineering purposes in a millennium or two.

  • Re:Eh? (Score:5, Informative)

    by Seraphim1982 ( 813899 ) on Tuesday February 07, 2012 @09:46PM (#38962283)

    You managed to get the values for both 3 sigma and 5 sigma wrong:
    +/- 3 sigma = 1 in 370 (which is what clued me in to their being wrong; 1/81 + 0.997 isn't close to 1)
    +/- 5 sigma = 1 in 1,744,278
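Both sets of figures are easy to check: the two-sided probability of a chance fluctuation at least n sigma from the mean is erfc(n / sqrt(2)), so the odds are one in the reciprocal. A quick sketch in Python:

```python
import math

# Two-sided odds of a fluctuation at least n sigma from the mean
# of a normal distribution, purely by chance.
for n in range(1, 8):
    p = math.erfc(n / math.sqrt(2))  # P(|X - mu| >= n * sd)
    print(f"+/- {n} sigma = 1 in {1 / p:,.0f}")
```

Running this reproduces the corrected values above (1 in 370 at 3 sigma, 1 in 1,744,278 at 5 sigma) along with the other rows of the table.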

  • Re:Eh? (Score:4, Informative)

    by epine ( 68316 ) on Tuesday February 07, 2012 @10:22PM (#38962555)

    This dialog is a bit of a mess, but makes some good points: Taleb on Antifragility [econtalk.org]

    These talks come with very loose transcripts. Here's the key passage at length, as I shamelessly promote Taleb's upcoming book Antifragility [fooledbyrandomness.com], though I'm already certain I only agree with two-thirds of what he is putting forth (emphasis mine):

    It's because of convexity effects, because small probability is very convex to error. [] Take the Gaussian distribution. And actually in a separate paper I finally proved something that has taken me three years. Take a very thin-tailed distribution such as the Gaussian. Thin-tailed, the normal distribution. You have two inputs, one of which is standard deviation. Standard deviation is very much your error. Now, if you take a remote event, say, 6, 7, 8 sigmas, you increase the standard deviation away from the mean; you increase the sigma by 10%, the probability of that is multiplied by several thousand, several million, several billion, several trillions. So, what you have, you have nonlinearity of remote events to sigma, to the standard deviation of the distribution. And that, in fact if you have uncertainty, the smallest uncertainty you have in the estimation of the standard deviation, the higher the small probability becomes and at the same time, the bigger the mistake you are going to have about the small probability. So, in other words, most of the uncertainty in parameterizing the model, most of the tails. So, you take an event like Fukushima, you see, where they said it should happen every million years; you perturbate probabilities a little bit and one in a million becomes one in thirty. Or the financial crisis. Or anything.

    Some of those sigmas are model guards, not actual certainty.
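Taleb's convexity point is easy to demonstrate numerically. Here's a small sketch (my own illustration, not from the talk) of how a 10% error in the estimated standard deviation inflates far-tail probabilities, with the multiplier growing rapidly the further out you go:

```python
import math

def tail(x, sd=1.0):
    """One-sided probability of exceeding x under a normal
    distribution with mean 0 and standard deviation sd."""
    return 0.5 * math.erfc(x / (sd * math.sqrt(2)))

for k in (6, 8, 10):
    ratio = tail(k, sd=1.1) / tail(k)  # true sd 10% larger than assumed
    print(f"{k}-sigma event: probability multiplied by ~{ratio:,.0f}")
# roughly 25x at 6 sigma, 280x at 8 sigma, 6,400x at 10 sigma
```

In other words, a quoted tail probability is only as good as the sigma estimate behind it, which is exactly the Fukushima point in the quote.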

  • Re:Eh? (Score:5, Informative)

    by Anonymous Coward on Wednesday February 08, 2012 @03:10AM (#38964069)

    All this is under a pure mathematician's null-hypothesis assumptions. That is, we have a 99.999999999% confidence level of being right, unless we are making a mistake somewhere in our thousands of assumptions: some miscalibration, a fundamental error, systematic errors, ...

    But this is not a mathematical exercise. It is a physics experiment. Knowing how the CMS/ATLAS collaborations work and how politicized they are, if there is a (subtle but likely) mistake, then this number means nothing.

    The correct reading would be: "we are 99.99999999% (or whatever) sure that, if we are wrong, it is not due to a purely random statistical fluctuation."

    Other than that, 5 sigma is a mere convention on when to trigger a press conference to declare "discovery".
