Fermi Lab's New Particle Discovery in Question

"Back in April physicists at Fermilab speculated that they may have discovered a new force or particle. But now another team has analyzed data from the collider and come to the exact opposite conclusion. From the article: 'But now, a rival team performing an independent analysis of Tevatron data has turned up no sign of the bump. It is using the same amount of data as CDF reported in April, but this data was collected at a different detector at the collider called DZero. "Nope, nothing here – sorry," says Dmitri Denisov, a spokesman for DZero.'"
  • by Anonymous Coward

    Maybe the new particle is there and not there at the same time.

  • Data sharing (Score:5, Insightful)

    by symes ( 835608 ) on Monday June 13, 2011 @08:22AM (#36423962) Journal

    I think more than anything, this demonstrates why sharing data openly is such a good thing. Sure, it's not great news for those at Fermilab, but if scientists generally (especially those in the behavioural sciences...) were encouraged (or forced?) to allow others free access to their data, I'm sure a few surprising claims would be revised and a few interesting blips that would otherwise be missed might be found.

    • by Anonymous Coward

      Not great news? All results are great news. That's the difference between science and self-fulfilling prophecy delusions.

      • by johanw ( 1001493 )
        But still, some results are greater news than others. The finding of a new particle is great news; determining another decimal place of some physical constant will only interest a few specialists.
    • Re:Data sharing (Score:5, Interesting)

      by The_Wilschon ( 782534 ) on Monday June 13, 2011 @09:08AM (#36424262) Homepage
      These experiments do not share their data openly (while the experiment is still taking data) because if they did, there would not be any data. The only way to get enough physicists working on the experiment to make it run well enough to get any data is to restrict data access to those who do service work on the experiment. After the end of data taking, the data may be released, but I don't know the timetable on which that typically occurs.
      • Imagine how much more physics would get done if somebody worked out an 'open source' model for physics. Computer people used to do clever things and then publish their results in journals, sometimes with source.

        I won't pretend to be clever enough to know what that model is, but it probably exists.

      • Re: (Score:2, Insightful)

        by mcelrath ( 8027 )

        Bullshit. Look to the astronomy community for counter-examples: WMAP, SDSS, etc.

        The only reason particle physics keeps its data closed is history and turf-protection by its members. Astronomy has a longer history, and realized the benefits of sharing star catalogs hundreds of years ago.

        • by delt0r ( 999393 )
          Lots of astronomy data is held by the "collection group" for a year or so to allow dibs on publication; all Hubble data is like this, as are all the big telescopes (VLT, etc.). Particle physics is not that different: there is an embargo period, IIRC, and then it's opened up.

          It is hard to get funding if you don't get papers out, and you won't get the papers out if you spend all your effort making the data available rather than analyzing it.
          • by mcelrath ( 8027 )

            There is absolutely no requirement to share data in particle physics. Most of the data from early colliders is irretrievably lost. There's nothing wrong with time embargoes, but that's not what's going on here.

            • by delt0r ( 999393 )
              There is no requirement in astronomy either, as far as I know. We didn't publish a lot of our data, but it was available on request. But, as in particle physics, colleagues expect it. A lot of data does not make it into the "public domain", but that is often more an issue of the time and money it costs to do so. This is true for every field, and it was a lot more expensive 50 years ago, which is why it is something that has only really started to happen (let's face it, you didn't make an entire copy of plates from y…
              • by mcelrath ( 8027 )

                The data volume issue is brought up occasionally, but is a red herring. The SDSS dataset is comparable in size to that of Fermilab and CERN and is available to the public [sdss.org] (hundreds of TB). No one said anyone should publish raw data either. A processed, manageable form is preferable. If the datasets are that large, then we should be working on publicly-funded data warehouses, just as we once built libraries across the country.

                Astronomy has a history of sharing data, unlike particle physics. Furthermore…

                • by delt0r ( 999393 )
                  I know this is old, but anyway: 100 TB is not even going to cover one week of the LHC, with an estimated 15 petabytes per year [wikipedia.org]. The ATLAS detector alone can create over 23 petabytes per second [wikipedia.org] of raw data. Clearly, even with today's HDDs and computers, you must do some on-the-fly processing to reduce that to manageable levels. But it is still in a class of its own in terms of data, by orders of magnitude. The American universities that are involved with processing the data have dedicated links, IIRC, since a standard internet…
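                  As a rough sanity check on those two quoted figures (both taken at face value from the links above, and ignoring that the machine doesn't actually run all year), a few lines of Python show the scale of on-the-fly reduction they imply:

                      # Figures quoted above, taken at face value.
                      RAW_PB_PER_S = 23            # ATLAS raw output, petabytes/second
                      STORED_PB_PER_YEAR = 15      # data actually kept, petabytes/year

                      SECONDS_PER_YEAR = 365 * 24 * 3600
                      raw_pb_per_year = RAW_PB_PER_S * SECONDS_PER_YEAR   # ~7e8 PB

                      print(raw_pb_per_year / STORED_PB_PER_YEAR)         # ~5e7

                  So only about one part in fifty million of the raw signal can ever be kept; everything else has to be thrown away in real time.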
                  • by mcelrath ( 8027 )

                    Clearly, one should not distribute raw data, but rather processed data after the experimentalists have taken into account detector resolution, triggering, etc. (In itself a hard problem, I know). No one does actual physics analyses on the raw data anyway. Everyone uses skims or otherwise reduced data.

          • A friend of mine does research on flare stars. For this you need to look at stars for long periods.

            If you think about it, there's a major experiment underway that's already looking at lots of stars for long periods: The search for extrasolar planets. And even better, when one of these experiments finds a flare star, that data isn't especially interesting because they're looking for planets that can sustain life, and frequent solar flares are ... unhelpful in that regard. As long as the data is properly a…

        • Re:Data sharing (Score:4, Informative)

          by Anonymous Coward on Monday June 13, 2011 @10:16AM (#36424762)

          Ex-particles guy writing here --- the reason that data isn't immediately shared is that data acquisition and first-pass analysis have to be done before you even *think* about looking for new physics. Moreover, the detector systems are complex enough that it is really hard to be sure the analysis works correctly even when you were the one who built the bleedin' thing. Then there's the other half: almost no detector has complete coverage -- certainly none of the detectors at FNAL or CERN do -- so you are at the mercy of Monte Carlo simulations to work out the corrections. So you have to do the experiment twice: once in the physical world and once in a virtual world. Mismatches between the worlds can easily lead to spurious signals. Not saying that astronomy is any easier -- at least as it's practiced nowadays. And WMAP, for example, doesn't seem to be giving away the raw data. There is some turf protection -- "we invested blood, sweat, and tears, as well as years of our lives, to build the detector -- we get first crack at the data" -- but I don't think that's a bad thing.

          The particle physics community does have the equivalent of a star map: it's the Review of Particle Properties (RPP).
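          To make the "experiment twice" point concrete, here is a deliberately toy sketch in Python (every number in it is invented; a real correction comes from a full detector simulation): simulate known physics through a model of the detector, measure the fraction that survives, and use that fraction to correct what the real detector counted.

              import random

              random.seed(42)

              def detector_accepts(energy_gev):
                  # Toy detector: efficiency rises with energy, capped at 90%.
                  return random.random() < min(0.9, energy_gev / 100.0)

              # Virtual experiment: push simulated events through the toy detector.
              n_true = 100_000
              energies = [random.expovariate(1 / 50.0) for _ in range(n_true)]
              acceptance = sum(detector_accepts(e) for e in energies) / n_true

              # Physical experiment: correct the observed count by the acceptance.
              n_observed = 4200          # hypothetical count from real data
              print(n_observed / acceptance)

          If the simulated detector doesn't match the real one, that division quietly injects a bias -- which is exactly how mismatched worlds produce spurious signals.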

        • Re:Data sharing (Score:5, Insightful)

          by chissg ( 948332 ) on Monday June 13, 2011 @10:48AM (#36425040)
          Star catalogs aren't data: they're the results of decades of observations, corroborations, corrections, and debates over just exactly what that particular black spot on the white plate was. You want the raw telemetry from every telescope that isn't read out with a Mark I eyeball, and every plate ever taken, and the scientist's observation notes from those that were? You want all the calibration data from WMAP, and all the histograms that were plotted to analyze them and turn them into corrections for the main data so they actually *mean* something?

          In particle physics, "data" is the 1s and 0s from every piece of sensory equipment in the detector hall, the beam area, and points between: often millions of readout channels, each of which means something and has its own quirks and problems that need to be measured and understood with more and different types of data (calibration, cosmic rays, etc.). And these readings are taken at frequencies between thousands and millions of times per second. We often have to analyze the data to a preliminary level just to decide whether they're worth keeping to analyze properly later, because there's neither the bandwidth nor the storage space nor the computing power -- even now -- to keep them all. The LHC experiments store petabytes of data per month, and storage, access, and transfer costs are significant: you pay for access to those data by contributing to the experiment.

          OK, now let's assume you get the raw data. Now what? Good luck with that. There's a reason groups of scientists and expert contractors spend years and sometimes decades writing the reconstruction and analysis software for particle physics experiments: teasing useful results from the data is hard. If we were to spend our time pointing out the rookie mistakes of every schmo who fiddled with the data for a while and thought he'd found something new, the work would never be done. "Heisenberg's Uncertainty Principle Untenable [icaap.org]," anyone?
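          For anyone who hasn't seen it, the "analyze just enough to decide whether to keep it" step looks roughly like this toy Python sketch (the reconstruction and the threshold are both invented; real triggers are tiered hardware and software systems):

              import random

              random.seed(1)

              def fast_reconstruct(event):
                  # Cheap first-pass quantity, e.g. a toy "summed energy".
                  return sum(event)

              def trigger_keeps(event, threshold=120.0):
                  return fast_reconstruct(event) > threshold

              events = [[random.expovariate(1 / 20.0) for _ in range(4)]
                        for _ in range(100_000)]
              kept = [e for e in events if trigger_keeps(e)]
              print(len(kept), "of", len(events), "events kept")   # ~15%

          Everything that fails the cut is gone for good, which is one more reason "just release the raw data" doesn't mean what people think it means.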
          • by mangu ( 126918 )

            I agree with everything you said about raw data, and I must add that there's one more reason why data must be culled: test runs. Many times one runs an experiment several times with slightly different parameters, then chooses the best configuration and ignores the others.

            I have many sets of data that I may use later as a basis for further research, but for the moment they stand alone because I didn't follow them with more measurements under the same configuration. These are perfectly valid results…

      • These experiments do not share their data openly (while the experiment is still taking data) because if they did, there would not be any data. The only way to get enough physicists to work on the experiment to make it run well enough to get any data is to restrict data access to those who do service work on the experiment. After the end of data taking, the data may be released, but I don't know the time table on which that typically occurs.

        And how exactly do you release raw data? You know, this isn't a well-tagged HTML page we're talking about. It is raw binary data that means nothing unless you have all the programs to read and analyze it. Data gets released after it is munged into a state in which it can be shared, which means a ton of cleaning up and indexing. It has NOTHING to do with any conspiracy by physicists to keep their data secret for any length of time.

    • I understand keeping the data to yourself or your group while you analyze and perhaps publish. I understand not just throwing out huge data sets for everything, but providing them if asked. Just as long as the data is not made 'confidential' forever, though I'm sure some (many?) may be classified as 'secret' due to government involvement.

    • I agree that sharing data is a good thing in general, but in this case it wasn't a reanalysis of the same data; it was a second set of data from a different detector at the Tevatron, DZero, with the original data coming from the CDF detector. The contradiction between the data sets isn't resolved, but presumably there was some systematic error in the data from CDF.


  • Turns out some colleagues were in the next room turning hair dryers on and off during the tests.

  • old news? (Score:2, Informative)

    by Anonymous Coward

    Wasn't there a story on slashdot just last week about the people who released the data saying the same thing?

    http://science.slashdot.org/story/11/06/10/1455240/Data-Review-Brings-Major-Setback-In-Higgs-Boson-Hunt

    oh, I guess there was.

    • If you read the article posted first, it links to the article about this 'discovery',

      Physicists have ruled out that the particle could be the standard model Higgs boson, but theorize that it could be some new and unexpected version of the Higgs.

      They knew from the beginning that it wasn't the Higgs everyone is looking for (which is what the article you posted is about).

      What you are thinking of is the LHC in Europe, whereas this story is about the Tevatron in the United States. As a result of this, both facilities have now had a review turn up this type of result for their data.

  • Already known? (Score:5, Informative)

    by MurukeshM ( 1901690 ) on Monday June 13, 2011 @08:31AM (#36424016)
    What about this comment [slashdot.org] on the original /. post: D0 has done this same sort of analysis, and they do not see this bump. But, their background modeling procedure involves reweighting the expected distributions (from Monte Carlo) in delta R between the jets (sort of an angular separation between the jets), which is a variable that is strongly correlated with the dijet mass. That is, their background model would be expected to have a strong tendency to fill in a bump like this. Now, which model is more correct is open to question, but it is certainly true that whether or not this bump turns out to be from real new physics (unlikely, in my professional opinion), their procedure is almost guaranteed not to find it.
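    For readers who don't know the jargon: delta R is the standard angular separation between two jets, dR = sqrt((eta1 - eta2)^2 + (phi1 - phi2)^2). A minimal Python sketch (the jet kinematics below are made up, and the real D0 reweighting procedure is far more involved than this):

        import math

        def delta_r(eta1, phi1, eta2, phi2):
            # Angular separation between two jets, with the phi difference
            # wrapped into (-pi, pi].
            d_eta = eta1 - eta2
            d_phi = (phi1 - phi2 + math.pi) % (2 * math.pi) - math.pi
            return math.hypot(d_eta, d_phi)

        # Two roughly back-to-back jets (invented values): small eta gap,
        # nearly opposite in phi.
        print(delta_r(0.4, 0.1, -0.3, 0.1 + math.pi))   # ~3.22

    Because dR is strongly correlated with the dijet mass, forcing the simulated dR distribution to match data bin by bin can also reshape the simulated dijet-mass spectrum -- which is exactly the concern raised above.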
    • Re:Already known? (Score:5, Informative)

      by The_Wilschon ( 782534 ) on Monday June 13, 2011 @09:18AM (#36424330) Homepage
      That was me. In the analysis released on Friday, D0 does not perform the delta R reweighting (this was a specific criticism that they sought to address). In spite of no delta R reweighting, they still do not see the bump. There are some systematic errors that they handle differently from CDF which are quite likely to explain the result. Some of my colleagues at CDF are investigating (and were investigating before this D0 release, because of a suggestion by a D0 physicist at the release of the original bump paper) these systematics and their effect on our ability to model the data well. I can't really comment further until results are released, however.
  • True science in action showing how important repeatability is. Kudos to both teams.

    • by gclef ( 96311 )

      Except this isn't really repeatability...they're both analysing the same data from the same experiment, just in different ways with different weighting. Repeatability will come with LHC data.

      • by drerwk ( 695572 )
        Pretty sure D0 and CDF are each using their own data; different detectors mean different data. Maybe the same beam runs, though, which would give the same collision energies.
  • the Imagination!
  • Just say it's settled science. Tell your funding sources that there is a consensus among most scientists and call it a new particle. Grats on your discovery.
  • I hate having to be pedantic, but please at least do enough fact-checking to get the name of one of our country's premier scientific institutions right! It's the Fermi National Accelerator Laboratory (FNAL) or Fermilab. There is no such thing as Fermi Labs.

  • How do we know that this other detector is working properly?

    • There are a multitude of very common decays, plus common cosmic-ray events, that the detector picks up. These are filtered out when searching the data for significant events, but they do allow calibration and verification.
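      The idea, in toy Python form (all numbers here are invented; a real calibration fits full spectra channel by channel): reconstruct a particle whose mass is already well known, such as the Z boson at roughly 91.19 GeV, and compare the measured peak to the book value.

          import random, statistics

          random.seed(7)

          Z_MASS_GEV = 91.19      # well-known reference mass (approximate)
          SCALE_ERROR = 1.02      # pretend the energy scale reads 2% high

          # Simulated measurements of a known peak with a miscalibrated detector.
          measured = [random.gauss(Z_MASS_GEV * SCALE_ERROR, 2.5)
                      for _ in range(10_000)]

          peak = statistics.median(measured)
          print("correction factor:", Z_MASS_GEV / peak)   # ~0.98

      If the correction factor drifts over time, you know the detector, not the physics, has changed.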
  • In the spirit of Schrodinger the answer is simple. They are both correct!

    One team has observed that the particle/force/energy exists! Therefore it exists... The other team has observed that the particle/force/energy does not exist! Therefore it does not exist!

    All we need to do is get both parties to agree with each other, and that will become the final state of the particle/force/energy!

    If you don't get the joke then you have a life, go, leave Slashdot, and enjoy it!

  • Was it being observed? I like the fact that the results change with every observation of the data. Does the data sense new observations and change to suit? The implications are fascinating!
