Science

ATLAS Results: One Higgs Or Two? 73

Posted by Soulskill
from the one-higgs-two-higgs-red-higgs-blue-higgs dept.
TaeKwonDood writes with news from CERN about more results in the search for the Higgs boson, this time from the ATLAS experiment. Researchers report peaks in the data in accordance with what they'd expect from the Higgs. The curiosity is that the peaks are a few GeV away from each other. "The ATLAS analyses in these channels return the best fit Higgs masses that differ by more than 3 GeV: 123.5 GeV for ZZ and 126.6 GeV for gamma-gamma, which is much more than the estimated resolution of about 1 GeV. The tension between these 2 results is estimated to be 2.7 sigma. Apparently, ATLAS used this last month to search for the systematic errors that might be responsible for the discrepancy but, having found nothing, they decided to go public." Scientific American has a more layman-friendly explanation available. As this work undergoes review, physicists hope more eyes and more data will shed some light on this incongruity. Tommaso Dorigo, a particle physicist working at the CMS experiment at CERN, writes, "Another idea is that the gamma-gamma signal contains some unexpected background which somehow shifts the best-fit mass to higher values, also contributing to the anomalously high signal rate. However, this also does not hold much water — if you look at the various mass histograms produced by ATLAS (there is a bunch here) you do not see anything striking as suspicious in the background distributions. Then there is the possibility of a statistical fluctuation. I think this is the most likely explanation." Matt Strassler provides a broader update to the work proceeding on nailing down the Higgs boson.
This discussion has been archived. No new comments can be posted.

ATLAS Results: One Higgs Or Two?

Comments Filter:
  • by Sponge Bath (413667) on Friday December 14, 2012 @06:25PM (#42296159)

    One is the Happy Higgs, the other an Angry Higgs. Being angry adds 3GeV.

  • by organgtool (966989) on Friday December 14, 2012 @06:30PM (#42296257)

    Apparently, ATLAS used this last month to search for the systematic errors that might be responsible for the discrepancy but, having found nothing, they decided to go public.

    It looks like ATLAS...
    puts on sunglasses
    shrugged
    YEEEEAAAAHHH!!!!!!

  • Obesity (Score:4, Funny)

    by Hardhead_7 (987030) on Friday December 14, 2012 @06:36PM (#42296375)
    After they started weighing the Higgs, it went on a diet.
  • by Anonymous Coward on Friday December 14, 2012 @06:39PM (#42296439)

    So the polytheists were right? :P

    • Gods' particles (Score:2, Informative)

      by Anonymous Coward

      I fixed the title for you.

      • Re: (Score:2, Funny)

        by Anonymous Coward

        As in, the left particle is a bit larger and lower?

  • by overshoot (39700) on Friday December 14, 2012 @06:42PM (#42296493)

    is not, "Eureka!"

    It's "What the fuck?"

    • by Anonymous Coward on Friday December 14, 2012 @07:07PM (#42296809)

      I agree. I was actually annoyed that they found the Higgs where they predicted it to be. That meant that the Standard Model was all wrapped up, but without any sort of loose threads which we could pull to work out wtf was up with quantum gravity, or dark matter and dark energy. (Like the way the photoelectric effect and the ultraviolet paradox were the loose threads that led to quantum mechanics.)

      It would be really exciting if this result were real, and there's something funky about the Higgs which means the different experiments are detecting it at different masses. It might give us the kick in the pants we need to better understand the universe.

      That said, it's probably going to be something banal like a miscalibrated detector shim, or an unaccounted-for term in one of the equations ... but a geek can hope.

      • by dreamchaser (49529) on Friday December 14, 2012 @08:42PM (#42297685) Homepage Journal

        It didn't mean the Standard Model was all wrapped up. It meant that what we currently understand of what we call the Standard Model was wrapped up. That never precluded solving any of the other 'wtf' problems you mention with new physics down the road. Models can fit within new models.

        • The problem is that if there is no New Physics in the Higgs, the next stage where we predict New Physics is the strong force unification scale that would require an accelerator a trillion times more powerful than the LHC to explore. We're talking a particle accelerator the diameter of the asteroid belt.

          But there's still hope in the TeV-per-parton scale the upgraded LHC will be able to reach in terms of finding what keeps the Top mass in check. Plus, dark matter's got to be made of something goddamnit, and it's hopefully not just cold neutrinos.

          And of course, there may be new shit that we haven't even considered yet!
          • We're talking a particle accelerator the diameter of the asteroid belt.

            Shouldn't be too hard; put the beam deflection electromagnets in orbit and just shoot the beam through the vacuum of space. Not quite a ringworld, but close enough for me.

          • The problem is that if there is no New Physics in the Higgs, the next stage where we predict New Physics is the strong force unification scale that would require an accelerator a trillion times more powerful than the LHC to explore. We're talking a particle accelerator the diameter of the asteroid belt.

            But there's still hope in the TeV-per-parton scale the upgraded LHC will be able to reach in terms of finding what keeps the Top mass in check. Plus, dark matter's got to be made of something goddamnit, and it's hopefully not just cold neutrinos.

            And of course, there may be new shit that we haven't even considered yet!

            I never said the Higgs had anything to do with new physics. I simply said that the existence of the Higgs as predicted didn't preclude new physics. Said new stuff may not even be testable in our lifetimes, partially for reasons you point out. That doesn't mean it isn't out there.

        ...Like the way the photoelectric effect and the ultraviolet paradox were the loose threads that led to quantum mechanics...

        Apparently, quantum mechanics was applied to solve the ultraviolet "catastrophe" only some time after being generally accepted, rather than the other way round, while the photoelectric effect played a prime role, as you say (Einstein got his Nobel Prize for it).

        • Re: (Score:3, Interesting)

          by maxwell demon (590494)

          I think with "ultraviolet paradox" he meant the problem Max Planck solved, that with classical physics you'd calculate the intensity of thermal (black body) radiation to always grow with growing frequency, giving rise to infinite total thermal radiation. Planck solved that problem by introducing the quantum hypothesis, that radiation energy can only be emitted in fixed portions proportional to the frequency.
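          In symbols (standard textbook formulas, nothing specific to this thread): the classical Rayleigh-Jeans law grows without bound in frequency, so the total radiated energy diverges, while Planck's quantized version suppresses the high-frequency tail:

```latex
% Rayleigh-Jeans: spectral energy density grows as \nu^2, so integrating
% over all frequencies gives infinite total radiation (the "catastrophe"):
u_{\mathrm{RJ}}(\nu, T) = \frac{8\pi \nu^2}{c^3}\, k_B T
% Planck's quantum hypothesis (energy emitted in portions E = h\nu)
% introduces an exponential cutoff at high frequency:
u(\nu, T) = \frac{8\pi h \nu^3}{c^3}\, \frac{1}{e^{h\nu / k_B T} - 1}
```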

          The problem I think you are referring to is a problem in quantum field theory where certain integrals

      • by mikael (484)

        Maybe there is a different gravitational field due to different terrain. CERN had a problem where their guide beams would go off target depending on the time of day. Turned out the moon was actually causing distortion to the surrounding land by a few metres, just like water tides. That was enough to change the shape of the collider ring.

      • by ceoyoyo (59147)

        Don't worry, the standard model still has LOTS of loose threads:

        http://en.wikipedia.org/wiki/Standard_model#Challenges [wikipedia.org]

        The Higgs itself exacerbates one.

  • Or was he a cat?
  • by Anonymous Coward

    "if we had to get excited at every slight disagreement between our measurements and our expectations, we'd be sick with Priapism (sorry ladies for this gender-specific pun)."

    OK, time to stop the Viagra at CERN?

  • Probably chance (Score:5, Informative)

    by Carnildo (712617) on Friday December 14, 2012 @07:42PM (#42297165) Homepage Journal

    2.7 sigma isn't actually that much: assuming a Gaussian distribution of the data, a fluctuation that large or larger shows up by chance in roughly one experiment in a hundred even when there's no real difference. For comparison, the standard for announcing a new particle is 5 sigma (about a 1 in 1.7 million chance of a false positive).
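    The sigma-to-chance conversion above is just the two-sided Gaussian tail, which you can check with a couple of lines of Python (nothing ATLAS-specific here):

```python
from math import erfc, sqrt

def two_sided_p(sigma):
    # Probability that a Gaussian variable lands at least `sigma`
    # standard deviations from the mean, in either direction.
    return erfc(sigma / sqrt(2))

print(1 / two_sided_p(2.7))  # ~1 in 144, i.e. "about one in a hundred"
print(1 / two_sided_p(5.0))  # ~1 in 1.7 million
```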

    • Re:Probably chance (Score:4, Insightful)

      by Chuckstar (799005) on Friday December 14, 2012 @09:49PM (#42298129)

      Yes. But 1:100 still means it is probably more likely to be a systematic error. I think about it by asking the following question: "Is the probability of a systematic error of this magnitude in ATLAS or CMS greater or less than 1:100?" Considering the complexity of the systems, I would tend to think the probability of that kind of systematic error is greater than 1%.

      In other words, in order for random error to be more likely, ATLAS and CMS both need to be accurate at measuring collision results in an energy range that we've never measured before, to an accuracy of better than 99%... and that accuracy range needs to include all of the numerical analyses and modeling assumptions that are used to build up from the experimental results to the final conclusion. That seems like a pretty high bar to me. (Neither experiment just dumps out a number that is the mass of the Higgs boson. Both require interpretation of experimental results to get to the mass. Errors in the interpretation process need to be part of that 99% number.)

  • by noobermin (1950642) on Friday December 14, 2012 @08:09PM (#42297393) Journal

    So, I'm working on a cutesy undergraduate project with a physicist who works at the LHC (he is involved in a group searching for supersymmetry, so his primary work isn't the Higgs, but yeah). The project is a prototype for a new photo-detector.

    In any case, I finally got the data acquisition working just a few days ago (it uses a Maple, a sort of faster Arduino; I just use the USB serial thingy, and 'cat' and file redirection, lol), so I ran and took some data counting hits from cosmic rays, and with some easy Python scripting to parse the output, I had a nice rough estimate of the cosmic ray flux over my detector. I did a quick wiki search and found a rate that was on the same order as my result, so I typed up a mini report and emailed him my quick and dirty results while noting they were just that, quick and dirty. I was actually kind of proud of myself.
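    For the curious, the flux estimate he describes can be sketched in a few lines of Python. The "HIT <timestamp>" log format, detector area, and run length below are made up for illustration; a real Maple firmware's output would differ:

```python
# Hypothetical sketch: estimate cosmic-ray flux from a serial log captured
# with something like `cat /dev/ttyACM0 > hits.log`. Assumes one line of
# the form "HIT <ms-timestamp>" per detector event (illustrative only).

def flux_from_log(lines, area_cm2, duration_s):
    # Count hit lines, then normalize by detector area and run time.
    hits = sum(1 for line in lines if line.startswith("HIT"))
    return hits / (area_cm2 * duration_s)  # hits per cm^2 per second

# Fake 10-minute run over a 100 cm^2 detector, ~1 hit per second:
fake_log = ["HIT %d" % (t * 1000) for t in range(600)]
print(flux_from_log(fake_log, area_cm2=100.0, duration_s=600.0))  # 0.01
```

    A result around 0.01 hits/cm^2/s is the right ballpark for sea-level muon flux, which is the kind of sanity check against Wikipedia he describes.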

    But, then he sent back this email, if I may quote him:

    Thanks for the update. The approach of checking whether the coincident rate makes sense is a good step, but you need more information. Imagine, for example, that no one had ever measured this before. Then, instead of checking if your answer is compatible with Wikipedia, we would be preparing to publish the measurement, and staking our reputation on its validity. In that case, we'd want to do a variety of things to be convinced that it is correct. I can imagine a couple of things to do[...]

    And he listed a number of things to try so I can be really sure of my measurements.

    Think about this: the ATLAS guys could have announced the possibility of two peaks in their data and blown our frickin heads off into outer space after having already blown them off our shoulders with the Higgs, but they didn't, because it wasn't a sure bet. As TFAs say, it could be background, it could be statistical fluctuations... In any case, there is something very wise about physicists and scientists in general, who are often very cautious and untrusting about their measurements and are more than willing for you to double-check their measurements and prove them wrong. Well, it could just be for reputation's sake. Whatever the motivation, I think this is one of the strengths of the scientific method and thus one of the reasons for its success: we aren't quick to publish until it is just right, and therefore it is perhaps the best approximation of the "truth" we can muster.

    This is not to take away from the times when certain people forget to tighten their fiber optic cables... but actually, that works wonderfully for my point. I sometimes feel impatient when I hear "we're not quite sure yet" or "it's only preliminary" from some of these reports, and I imagine some of you might too. Nonetheless, science isn't really Star Trek, where you make a discovery, get locked into a phaser fight with it, and make peace in an hour's time. It is a slow, careful process that, at the end, as we see, yields good results in technology and the advances we have today. Therefore, it's worth the wait. So have some patience; my reputation is on the line.

  • Whatever the motivation, I think this is one of the strengths of the scientific method and thus one of the reasons for its success: we aren't quick to publish until it is just right, and therefore it is perhaps the best approximation of the "truth" we can muster.

      On the other hand: What if you did publish your work early and often, not as concerned with slowly and deliberately ensuring everything is just right before spreading the information -- Not keeping quiet just so that you can be the one with the badge of "1st"? Why, then worldwide cooperation could kick in. Perhaps other interested parties would help you prove or disprove the results much more quickly. Thus, accelerating the speed of scientific progress. Now, don't get me wrong. I'm not saying it's wrong to not publish things that you aren't absolutely sure about, just that what you're doing seems really strange to me. Have you questioned your information dissemination methods? A good scientist would... Why, there could have been times that you were wrong about being wrong, i.e. made a discovery but never known it too... Perhaps it's time to re-think the system of publishing altogether?

      • by scheme (19778) on Friday December 14, 2012 @10:41PM (#42298431)

        Whatever the motivation, I think this is one of the strengths of the scientific method and thus one of the reasons for its success: we aren't quick to publish until it is just right, and therefore it is perhaps the best approximation of the "truth" we can muster.

        On the other hand: What if you did publish your work early and often, not as concerned with slowly and deliberately ensuring everything is just right before spreading the information -- Not keeping quiet just so that you can be the one with the badge of "1st"? Why, then worldwide cooperation could kick in. Perhaps other interested parties would help you prove or disprove the results much more quickly. Thus, accelerating the speed of scientific progress. Now, don't get me wrong. I'm not saying it's wrong to not publish things that you aren't absolutely sure about, just that what you're doing seems really strange to me. Have you questioned your information dissemination methods? A good scientist would... Why, there could have been times that you were wrong about being wrong, i.e. made a discovery but never known it too... Perhaps it's time to re-think the system of publishing altogether?

        The problem is that most early results are incorrect; after some checking, it turns out they were systematic errors or mistakes or something similar. If everyone published early and often, you'd get so many results (most of them incorrect) that no one could keep track of them to figure out which ones were interesting and worth investing the time and effort to work on. Duplicating someone else's work takes a lot of time and effort and may involve building a lot of equipment or flying to another lab to learn new techniques. Unless you're really sure that the results are likely to pan out, why would anyone spend tens of thousands of dollars or a few months at another lab learning a technique?

        The costs and startup efforts are much higher for most sciences than for code, so open source techniques won't work as effectively. It's effectively like having to reimplement a good portion of a piece of software before you can start contributing.

      • by LourensV (856614) on Saturday December 15, 2012 @06:22AM (#42300357)

        Whatever the motivation, I think this is one of the strengths of the scientific method and thus one of the reasons for its success: we aren't quick to publish until it is just right, and therefore it is perhaps the best approximation of the "truth" we can muster.

        On the other hand: What if you did publish your work early and often, not as concerned with slowly and deliberately ensuring everything is just right before spreading the information -- Not keeping quiet just so that you can be the one with the badge of "1st"? Why, then worldwide cooperation could kick in. Perhaps other interested parties would help you prove or disprove the results much more quickly. Thus, accelerating the speed of scientific progress.

        You are assuming that scientists only ever communicate by publishing papers in journals. That's incorrect; there is a lot (and I mean a lot) of informal communication and collaboration, by email and phone and through presentations and posters at conferences. Our knowledge is now so vast, and much of the research being done so multidisciplinary, that it's nearly impossible for any single person to know enough to really cover all the aspects of a particular investigation. In my field, you'd need to be a good programmer, an expert statistician, an experienced and knowledgeable (field and theoretical) biologist, and a good systems analyst/modeller. Such people don't exist, so work is done in teams, with each member contributing their specific expertise. When you get a weird result, you go and talk to your colleagues about it, and try to figure it out together, and you keep going until you feel that you really understand what's going on. And then you write the paper, it gets published, and hopefully it won't turn out to have been a fluke or a mistake or unrepresentative of the wider area of research. If the whole team can't figure it out, you might publish a "Hey, that's weird!" paper, as was done here.

        If people published everything they did immediately, we'd get so many publications that it would be impossible to keep up with all of them. The situation would be similar to the "Linus doesn't scale" problem in Linux kernel development a few years ago, where Linus Torvalds was inundated with patches and couldn't keep up. They solved that by appointing lieutenants, who filter and aggregate contributions. Publishing papers works the same way: you solve the smaller problems locally and publish bigger, better-vetted results, so that everyone else doesn't waste their precious time on other people's small problems and consequently invalid results. Also, people wouldn't waste their precious time writing up all those small problems, and peer-reviewing them, and so on. Writing a paper is not like writing a post on the Internet (something some climate change deniers conveniently forget); it takes serious time and effort by a group of people to make sure that the results are really of good enough quality. You don't want to waste that effort on trivial things.

  • Turns out, the Higgs was never the God particle, it was the Rabbit particle.

    True. If you didn't need to know that, why ask?

  • You'd expect to see differences in GeV over multiple experiments, varying muscle tension in sphincter muscles often exceeds 2.7sigma. When doing the Child or Folded Leaf pose the sigmoid is under compression. Here is a helpful video [youtube.com] that might dispel the mystery.

    CERN does deserve kudos for full disclosure. They could have blamed it on the dog.

  • It's pretty obvious that the heavier one is the evil Higgs. From this we can deduce that the mass of a subatomic goatee travelling almost at lightspeed is at least 3GeV.
  • 2.7 sigma is nothing to base any conclusions upon, not in science. C'mon. News at 11.
    • by ceoyoyo (59147)

      I hate to tell you, but most scientific fields put the threshold at 1.96 sigma. Actually, it's often lower because people are frequently sloppy about calculating their degrees of freedom.

      The standard is usually higher in particle physics partly because they do statistically naughty things like analyzing their data while they're collecting it, and partly because particle physicists can fairly easily just keep collecting data longer.
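      For reference, 1.96 sigma is exactly the familiar p < 0.05 threshold; one line of Python (the same two-sided Gaussian tail) confirms it:

```python
from math import erfc, sqrt

# Two-sided Gaussian p-value at 1.96 sigma: the textbook p = 0.05 cutoff
# used in most fields, versus the 5 sigma particle-physics convention.
p = erfc(1.96 / sqrt(2))
print(round(p, 3))  # 0.05
```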
