Science

ATLAS Results: One Higgs Or Two?

Posted by Soulskill
from the one-higgs-two-higgs-red-higgs-blue-higgs dept.
TaeKwonDood writes with news from CERN about more results in the search for the Higgs boson, this time from the ATLAS experiment. Researchers report peaks in the data in accordance with what they'd expect from the Higgs. The curiosity is that the peaks are a few GeV apart. "The ATLAS analyses in these channels return best-fit Higgs masses that differ by more than 3 GeV: 123.5 GeV for ZZ and 126.6 GeV for gamma-gamma, which is much more than the estimated resolution of about 1 GeV. The tension between these two results is estimated to be 2.7 sigma. Apparently, ATLAS used this last month to search for the systematic errors that might be responsible for the discrepancy but, having found nothing, they decided to go public." Scientific American has a more layman-friendly explanation available. As this work undergoes review, physicists hope more eyes and more data will shed some light on this incongruity. Tommaso Dorigo, a particle physicist working at the CMS experiment at CERN, writes, "Another idea is that the gamma-gamma signal contains some unexpected background which somehow shifts the best-fit mass to higher values, also contributing to the anomalously high signal rate. However, this also does not hold much water — if you look at the various mass histograms produced by ATLAS (there is a bunch here) you do not see anything striking as suspicious in the background distributions. Then there is the possibility of a statistical fluctuation. I think this is the most likely explanation." Matt Strassler provides a broader update to the work proceeding on nailing down the Higgs boson.
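For a rough sense of where the quoted 2.7 sigma comes from, here is a back-of-the-envelope sketch. The per-channel uncertainties below are illustrative assumptions (ATLAS does not publish a single per-channel number in the summary above); they are chosen so the combined resolution lands near the ~1 GeV scale mentioned.

```python
import math

# Best-fit Higgs masses quoted in the summary (GeV)
m_zz = 123.5   # ZZ channel
m_gg = 126.6   # gamma-gamma channel

# Illustrative per-channel mass uncertainties (GeV) -- assumed values,
# not ATLAS's published figures.
sigma_zz = 0.8
sigma_gg = 0.8

diff = m_gg - m_zz                           # 3.1 GeV
sigma_comb = math.hypot(sigma_zz, sigma_gg)  # add uncertainties in quadrature
tension = diff / sigma_comb

print(f"mass difference: {diff:.1f} GeV")
print(f"tension: {tension:.1f} sigma")
```

With these assumed uncertainties the sketch reproduces a tension of roughly 2.7 sigma, which is the figure quoted above; different assumed per-channel errors would shift that number accordingly.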
This discussion has been archived. No new comments can be posted.

  • by overshoot (39700) on Friday December 14, 2012 @06:42PM (#42296493)

    The most exciting phrase in science is not "Eureka!"

    It's "What the fuck?"

  • by dreamchaser (49529) on Friday December 14, 2012 @08:42PM (#42297685) Homepage Journal

    It didn't mean the Standard Model was all wrapped up. It meant what we currently understand of what we call the Standard Model was wrapped up. That never precluded solving any of the other 'wtf' problems you mention with new physics down the road. Models can fit within new models.

  • Re:Probably chance (Score:4, Insightful)

    by Chuckstar (799005) on Friday December 14, 2012 @09:49PM (#42298129)

    Yes. But 1:100 still means that a systematic error is the more likely explanation. I think about it by asking the following question: "Is the probability that there is a systematic error of this magnitude in ATLAS or CMS greater or less than 1:100?" Considering the complexity of the systems, I would tend to think the probability of that kind of systematic error is greater than 1%.

    In other words, in order for random error to be more likely, ATLAS and CMS both need to be accurate at measuring collision results in an energy range that we've never measured before, to an accuracy of better than 99%... and that accuracy range needs to include all of the numerical analyses and modeling assumptions that are used to build up from the experimental results to the final conclusion. That seems like a pretty high bar to me. (Neither experiment just dumps out a number that is the mass of the Higgs boson. Both require interpretation of experimental results to get to the mass. Errors in the interpretation process need to be part of that 99% number.)
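To put the story's 2.7 sigma into the same "1:100" terms used in the comment above, here is a quick sketch. It assumes the fluctuation is Gaussian; whether the one-sided or two-sided tail is the right comparison is a judgment call.

```python
import math

sigma = 2.7  # tension quoted in the story

# Two-sided Gaussian tail probability: chance of a fluctuation
# at least this large in either direction.
p_two_sided = math.erfc(sigma / math.sqrt(2))

# One-sided tail, if only fluctuations in one direction count.
p_one_sided = p_two_sided / 2

print(f"two-sided: p = {p_two_sided:.4f} (~1 in {1 / p_two_sided:.0f})")
print(f"one-sided: p = {p_one_sided:.4f} (~1 in {1 / p_one_sided:.0f})")
```

The two-sided figure works out to roughly 1 in 140, so the "1:100" used in the argument above is the right order of magnitude for a 2.7 sigma fluctuation.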

  • by scheme (19778) on Friday December 14, 2012 @10:41PM (#42298431)

    Whatever the motivation, I think this is one of the strengths of the scientific method and thus, one of the reasons for its success: we aren't quick to publish until it is just right, and therefore it is perhaps the best approximation of the "truth" we can muster.

    On the other hand: What if you did publish your work early and often, not as concerned with slowly and deliberately ensuring everything is just right before spreading the information, not keeping quiet just so that you can be the one with the badge of "first"? Why, then worldwide cooperation could kick in. Perhaps other interested parties would help you prove or disprove the results much more quickly, thus accelerating the speed of scientific progress. Now, don't get me wrong. I'm not saying it's wrong to not publish things that you aren't absolutely sure about, just that what you're doing seems really strange to me. Have you questioned your information dissemination methods? A good scientist would... Why, there could have been times that you were wrong about being wrong, i.e. made a discovery but never knew it. Perhaps it's time to re-think the system of publishing altogether?

    The problem is that most early results are incorrect, and after some checking it turns out they were systematic errors or mistakes or something similar. If everyone published early and often, you'd get so many results (most of them incorrect) that no one could track them all to figure out which were interesting and worth investing the time and effort to work on. Duplicating someone else's work takes a lot of time and effort and may involve building a lot of stuff or flying to another lab to learn new techniques. Unless you're really sure that the results are likely to pan out, why would anyone spend tens of thousands of dollars or a few months at another lab learning a technique?

    The costs and startup effort are much higher in most sciences than in code, so open source techniques won't work as effectively. It's effectively like having to reimplement a good portion of a piece of software before you can start contributing.

  • by Roger W Moore (538166) on Friday December 14, 2012 @11:04PM (#42298593) Journal

    Using bigger and bigger colliders, we can virtually create any particle with any property that fits the equations.

    I think you are ascribing far too much power to us particle physicists! We don't get to create whatever particle we want; we can only create ones that can exist. What is remarkable is that the ones we think exist to solve inconsistencies actually turn out to be there. This means that our extrapolations from existing physics are extremely good at predicting new physics. In fact there are already theoretical models, such as supersymmetry (SUSY), which predict five Higgs bosons, two of which are charged...

  • by wonkey_monkey (2592601) on Saturday December 15, 2012 @06:00AM (#42300287) Homepage

    Turned out the moon was actually causing distortion to the surrounding land by a few metres

    Holy crap. We had to build the LHC to notice this?

    Are you sure it wasn't more like millimetres?

  • by LourensV (856614) on Saturday December 15, 2012 @06:22AM (#42300357)

    Whatever the motivation, I think this is one of the strengths of the scientific method and thus, one of the reasons for its success: we aren't quick to publish until it is just right, and therefore it is perhaps the best approximation of the "truth" we can muster.

    On the other hand: What if you did publish your work early and often, not as concerned with slowly and deliberately ensuring everything is just right before spreading the information, not keeping quiet just so that you can be the one with the badge of "first"? Why, then worldwide cooperation could kick in. Perhaps other interested parties would help you prove or disprove the results much more quickly, thus accelerating the speed of scientific progress.

    You are assuming that scientists only ever communicate by publishing papers in journals. That's incorrect: there is a lot (and I mean a lot) of informal communication and collaboration, by email and phone and through presentations and posters at conferences. Our knowledge is now so vast, and so much of the research being done is so multidisciplinary, that it's nearly impossible for any single person to know enough to really cover all the aspects of a particular investigation. In my field, you'd need to be a good programmer, an expert statistician, an experienced and knowledgeable (field and theoretical) biologist, and a good systems analyst/modeller. Such people don't exist, so work is done in teams with each member contributing their specific expertise. When you get a weird result, you go and talk to your colleague about it, and try to figure it out together, and you keep going together until you feel that you really understand what's going on. And then you write the paper, it gets published, and then hopefully it won't turn out to have been a fluke or a mistake or not representative of the wider area of research. If the whole team can't figure it out, you might publish a "Hey, that's weird" paper, as was done here.

    If people published everything they did immediately, we'd get so many publications that it would be impossible to keep up with all of it. The whole situation would be similar to the "Linus doesn't scale" problem in Linux kernel development a few years ago, where Linus Torvalds was inundated with patches and couldn't keep up. They solved that by appointing lieutenants, who filter and aggregate contributions. Publishing papers works the same way: you solve the smaller problems locally, and publish bigger and better-vetted results, so that everyone else doesn't waste their precious time on other people's small problems and the invalid results they produce. Also, people wouldn't waste their precious time on writing up all those small problems, and peer-reviewing them, and so on. Writing a paper is not like writing a post on the Internet (something that some climate change deniers often conveniently forget); it takes serious time and effort by a group of people to make sure that the results are really of good enough quality. You don't want to waste that effort on trivial things.
