Science

Scientists Confirm Nuclear Decay Rate Constancy

As_I_Please writes "Scientists at the US National Institute of Standards and Technology and Purdue University have ruled out neutrino flux as a cause of previously observed fluctuations in nuclear decay rates. From the article: 'Researchers ... tested this by comparing radioactive gold-198 in two shapes, spheres and thin foils, with the same mass and activity. Gold-198 releases neutrinos as it decays. The team reasoned that if neutrinos are affecting the decay rate, the atoms in the spheres should decay more slowly than the atoms in the foil because the neutrinos emitted by the atoms in the spheres would have a greater chance of interacting with their neighboring atoms. The maximum neutrino flux in the sample in their experiments was several times greater than the flux of neutrinos from the sun. The researchers followed the gamma-ray emission rate of each source for several weeks and found no difference between the decay rate of the spheres and the corresponding foils.' The paper can be found here on arXiv. Slashdot has previously covered the original announcement and followed up with the skepticism of other scientists."
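For a sense of the statistics behind "found no difference": below is a toy Python sketch of how two gamma count rates might be compared under Poisson counting statistics. All numbers are made up for illustration; this is not the paper's actual analysis.

    # Toy sketch (not the paper's analysis): compare gamma count rates
    # from two equal-activity sources under Poisson counting statistics.
    # All numbers are made up for illustration.
    import math

    def compare_rates(counts_a, counts_b, live_time_s):
        """Two-sample z-statistic for a difference in Poisson count rates."""
        rate_a = counts_a / live_time_s
        rate_b = counts_b / live_time_s
        # Standard error of the rate difference: sqrt(Na + Nb) / t
        se = math.sqrt(counts_a + counts_b) / live_time_s
        return rate_a, rate_b, (rate_a - rate_b) / se

    # Hypothetical counts from a sphere and a foil over one hour
    ra, rb, z = compare_rates(1_000_000, 1_000_500, 3600)
    print(f"sphere: {ra:.2f}/s  foil: {rb:.2f}/s  z = {z:.2f}")
    # |z| ~ 0.35 here: no significant difference between the two rates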
  • Semantism (Score:5, Insightful)

    by Kilrah_il ( 1692978 ) on Saturday September 25, 2010 @05:25AM (#33695660)

    I think the proper phrasing should be "No evidence for inconsistency of nuclear decay found". It seems pedantic, but proper scientific methodology works this way. There
    can still be inconsistency in nuclear decay, just not in this test scenario. You cannot prove consistency; you can only be very, very sure this is how nuclear decay works because many studies have failed to show otherwise. (Not that I dispute their findings.)

  • How big? (Score:4, Insightful)

    by srussia ( 884021 ) on Saturday September 25, 2010 @05:39AM (#33695690)
    From TFA: "There are always more unknowns in your measurements than you can think of," Lindstrom says.

    How big were the foil and spherical samples? Neutrinos interact very weakly, so much so that neutrino detectors need to be on the order of 1 km^3.

    Heck, if I had that much gold (whatever isotope) I'd have better ways to spend my time.
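
    To put "weakly" in perspective, here is a back-of-envelope Python estimate of the odds that a single MeV-scale neutrino interacts while crossing a centimeter of gold. The cross-section is a rough assumed figure, not taken from TFA.

        # Back-of-envelope estimate; the cross-section is an assumed
        # rough figure for MeV-scale neutrinos, not a value from TFA.
        AVOGADRO = 6.022e23        # atoms/mol
        DENSITY_AU = 19.3          # g/cm^3
        MOLAR_MASS_AU = 197.0      # g/mol
        SIGMA = 1e-44              # cm^2, assumed neutrino cross-section

        n = DENSITY_AU / MOLAR_MASS_AU * AVOGADRO  # atoms per cm^3
        path_cm = 1.0
        p_interact = n * SIGMA * path_cm           # thin-target approximation
        print(f"number density: {n:.2e} atoms/cm^3")
        print(f"interaction probability over {path_cm} cm: {p_interact:.1e}")
        # ~6e-22: essentially every neutrino escapes, so any neutrino
        # effect on decay rates would have to be something exotic.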
  • by Frequency Domain ( 601421 ) on Saturday September 25, 2010 @05:51AM (#33695720)
    Yup, that's the nature of statistics: to "call" the result with a reasonable level of certainty, the required sample size grows as the signal-to-noise ratio shrinks.
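    A quick numerical illustration of that point (hypothetical effect sizes, standard two-sided z-test at 5% significance and 80% power):

        # Hypothetical illustration: sample size needed to detect a mean
        # shift of `effect` against noise `sigma` (two-sided z-test,
        # alpha = 0.05, 80% power).
        from statistics import NormalDist

        def required_n(effect, sigma, alpha=0.05, power=0.80):
            nd = NormalDist()
            z_a = nd.inv_cdf(1 - alpha / 2)
            z_b = nd.inv_cdf(power)
            return ((z_a + z_b) * sigma / effect) ** 2

        for snr in (1.0, 0.1, 0.01):
            print(f"signal/noise = {snr}: n = {required_n(snr, 1.0):.0f}")
        # n grows like 1/SNR^2: a 10x smaller signal needs 100x the data.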
  • by Israfels ( 730298 ) on Saturday September 25, 2010 @06:52AM (#33695816)
    This, of course, is only true under the assumption that it's the neutrinos that are really causing the increase in radioactive decay. The article does mention that there were many unknowns in the measurements. It may be something else that causes this increase, or even a combination of the two. It may also be that the number of neutrinos, the rate at which they're emitted, or other interacting fields alter the effect.
  • by Kilrah_il ( 1692978 ) on Saturday September 25, 2010 @06:55AM (#33695826)

    You are right, they are purposely avoiding using the same isotopes to avoid observing the phenomenon that caused them to perform this research. "Truly, you have a dizzying intellect."
    Call me naive, but maybe they had better reasons not to use the same material. I am not a physicist, so I don't know if it's correct, but here are some reasons I thought of, off the top of my head:
    1) Gold may have more neutrino activity, so there was a better chance to observe said phenomenon.
    2) The scientists involved have more experience working with gold, so they preferred using a material they are experienced with.
    3) Gold may be easier to work with, thus making it easier to construct thin foils.
    4) They had a pile of unused gold and didn't know what to do with it :)

    Again, I don't know if these are valid/correct reasons, but I'm somehow convinced there is a better reason than the one you stated.

  • Re:Semantism (Score:2, Insightful)

    by maxwell demon ( 590494 ) on Saturday September 25, 2010 @08:25AM (#33696042) Journal

    They did confirm nuclear decay rate constancy. A confirmation is not a proof. It's just what the word says: A strengthening of the claim. It makes you more confident that the claim is true.

  • by axedog ( 991609 ) on Saturday September 25, 2010 @08:52AM (#33696174)
    It is of huge significance in radiometric dating. If we can show that the half-life of a radioisotope is constant, then it increases confidence in these dating methods. Conversely, if it can be shown that decay rates vary significantly, then accurate dating becomes more difficult and merits further research.
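    As a concrete (textbook, not from TFA) sketch: the age inferred from a parent/daughter isotope ratio scales directly with the assumed half-life, so any drift in the decay rate feeds straight into the computed date. The ratio below is hypothetical and the Rb-87 half-life approximate.

        # Textbook sketch: radiometric age from a parent/daughter ratio.
        # Hypothetical ratio; Rb-87 half-life is approximate.
        import math

        def age_years(daughter_per_parent, half_life_years):
            lam = math.log(2) / half_life_years  # decay constant, 1/yr
            return math.log(1 + daughter_per_parent) / lam

        HALF_LIFE_RB87 = 4.9e10   # years, approximate
        ratio = 0.065             # hypothetical Sr-87/Rb-87 ratio
        print(f"age: {age_years(ratio, HALF_LIFE_RB87):.2e} yr")
        # A 1% faster decay rate shifts the inferred age by ~1% as well:
        print(f"with 1% faster decay: {age_years(ratio, HALF_LIFE_RB87 * 0.99):.2e} yr")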
  • by Anonymous Coward on Saturday September 25, 2010 @09:13AM (#33696250)

    Yes, they're not replicating the original experiment, but they are testing the proposed, more general, mechanism -- i.e. that seasonal variations in neutrino flux from the Sun cause variation in decay rates. That's what the original experimenters were wondering about -- does this affect everything? I suppose followup studies could do the same experiment for every element in the periodic table and every radioactive isotope, including the ones used in the prior experiment, but the fact remains that the new experiment with gold has narrowed the possibilities for what might be going on, and they used a rather elegant experiment to do it. Is there an effect on all isotopes? Well, apparently not within measurement uncertainties for gold-198, although a smaller variation remains possible.

    You can also turn the argument around: if the first experimenters found a real effect for Mn-54, Si-32, and Ra-226, then so what? That doesn't mean there's any effect at all on the decay rates for any other isotope, including the ones used for radiometric dating and other techniques where the observed variation (if taken at face value) would have a small effect. Maybe the process doesn't work for potassium, uranium, or rubidium, and until people test the specific isotopes in those methods, who cares if they find something odd in Mn, Si, or Ra?

    In other words, you can't fault the followup study for not testing the same isotopes if you are simultaneously claiming that the original experiment has grand implications for the decay rates of all isotopes. Well, you can, but it wouldn't be particularly consistent.

  • Re:Not enough time (Score:4, Insightful)

    by ceoyoyo ( 59147 ) on Saturday September 25, 2010 @11:10AM (#33696812)

    They're investigating the hypothesis that it's neutrinos that cause the variation. Did you even read the summary, or just the sensationalistic headline?

  • Re:Untrue (Score:3, Insightful)

    by ceoyoyo ( 59147 ) on Saturday September 25, 2010 @11:46AM (#33697000)

    I think I have a pretty good grasp of statistics, thanks.

    As I explicitly mentioned in my post, you're correct, there are different standards for "statistically significant" in different fields. Contrary to what you think, they're not particularly precise. They're basically rules of thumb and differ between fields, and even within fields, due to tradition, history, and sometimes experience. Note also that I was talking about the Slashdot headline and summary. In fact, I quoted the former. Sorry, I thought the quote made it obvious.

    The headline claims "Scientists Confirm Nuclear Decay Rate Constancy." In fact, they did no such thing. They found (weak) evidence of change. In order to confirm constancy, they would have to find significant evidence of no change, i.e. pass an equivalence test (a toy sketch follows below).

    Why don't you go grab a cup of coffee, or maybe a nap? You're awfully cranky for a Saturday morning.
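
    For what it's worth, "confirming constancy" in a statistically defensible way would look like an equivalence test (e.g. two one-sided tests, TOST) rather than a failed significance test. A toy sketch with made-up numbers:

        # Toy TOST with made-up numbers: "constancy confirmed" would mean
        # showing the rate difference lies inside (-margin, +margin),
        # not merely failing to detect a difference.
        from statistics import NormalDist

        def tost_p(diff, se, margin):
            nd = NormalDist()
            p_lower = 1 - nd.cdf((diff + margin) / se)  # H0: diff <= -margin
            p_upper = nd.cdf((diff - margin) / se)      # H0: diff >= +margin
            return max(p_lower, p_upper)  # equivalence if below alpha

        # Hypothetical: fractional rate difference 0.01% +/- 0.02%,
        # equivalence margin 0.1%
        p = tost_p(diff=1e-4, se=2e-4, margin=1e-3)
        print(f"TOST p-value: {p:.2e}")  # small p => equivalent within margin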

  • Re:Untrue (Score:3, Insightful)

    by jd ( 1658 ) <imipak@yahoGINSBERGo.com minus poet> on Saturday September 25, 2010 @04:22PM (#33698652) Homepage Journal

    I'm not going to bother RTFAing, but this would depend on whether the test is one-tailed or two-tailed, in terms of how to interpret the p value. Because this kind of statistical test ultimately boils down to comparing observed values against expected ones, it is really only good at testing against H0, the null hypothesis. There are more complex tests that let you measure against multiple variables in a single test (a good approach, since in theory it lets you examine the interactions between variables).

    However, these all make one core assumption: that the variables you're not testing are randomly distributed. In other words, as the sample size goes up, the errors all balance out and the noise approaches zero. That is entirely fair if it is indeed noise, which is why we were taught at university to run an analysis of variance to check that no untested factors were contributing.

    Like I said, I've not RTFA'ed, so I can't say if they did this here, but the experiment seems to have been designed with some measure of care - better than anything I've seen from earlier stories on the subject.
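
    For the curious, the one- vs two-tailed distinction in code form; the z statistic is hypothetical and has nothing to do with TFA's actual analysis:

        # Generic illustration: one- vs two-tailed p-values from a
        # hypothetical z statistic.
        from statistics import NormalDist

        nd = NormalDist()
        z = 1.8  # hypothetical test statistic
        p_one = 1 - nd.cdf(z)             # effect in one direction only
        p_two = 2 * (1 - nd.cdf(abs(z)))  # effect in either direction
        print(f"one-tailed: {p_one:.3f}, two-tailed: {p_two:.3f}")
        # 0.036 vs 0.072: the same data can pass a 0.05 cutoff one-tailed
        # and fail it two-tailed, which is why the choice matters.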

"When the going gets tough, the tough get empirical." -- Jon Carroll

Working...