Science

Standard Model Takes A Dent

Anonymous Coward writes "According to New Scientist, researchers at Brookhaven National Laboratory have put a dent in the standard model of particle physics. Looks like a big deal and just what they've been waiting for - something to get their teeth into. Read the story here"
  • by Christopher Thomas ( 11717 ) on Friday February 09, 2001 @09:19AM (#444649)
    Every model is incomplete. "This suggests that the Standard Model is incomplete" -- Every model is finite and cannot explain complete reality, which is infinitely complex. New and better models will always be invented. Hopefully they will come closer and closer to the limit (as in asymptote) - reality.

    You are making a big assumption here - that reality is infinitely complex.

    The universe may (or may not) be infinite in _size_, but that has no bearing on the complexity of the laws that govern it.

    Behavior of the universe may look complicated, but again, this may very well be just complicated consequences of simple laws.

    I see no reason to believe that the fundamental laws governing the universe wouldn't be very simple. Complexity is usually a sign that we've missed something fundamental going on.
  • Thanks! I'm satisfied, carry on! ;)
  • This description at space.com [space.com] made me wonder how sure you can be of the muon velocity, given its importance to the experiment, as described in point 3 on the linked page. Is this considered a significant area of uncertainty? More specifically, if the muon velocity were incorrect, such that their spin was being affected by the confinement field, would that be "easily" detectable in the results?

    Good luck with your data analysis!

  • It's probably not what you're looking for, but I'm sure there are ways to contribute by writing code for processing the enormous mountains of data that these experiments produce. Some of the scientific data processing and visualization apps are GPL'd or otherwise open, for example.

    Not as much fun as generating TeV energies in your basement, though!

  • While this is interesting, the disagreement is still not far outside the range of experimental error (2 sigmas). I'd be a lot more confident that this really indicated something beyond the standard model if it were 4 or 5 sigmas, or if a second experiment with different techniques were corroborating the measurement. Let's just hope it doesn't turn into another of those cases where they announced prematurely and it turns out there was really nothing there after all.
  • I'd like to suggest that the relation between a theory and a paradigm isn't as rigidly binary as this post suggests. Calling the theory of evolution a paradigm suggests that it is the basis for an entire field of research, setting the criteria for what constitutes acceptable methods and topics of inquiry. Newton's Principia and Opticks count as paradigms, but they are most certainly theories as well. Evolution's merits have nothing to do with its theory- or paradigm-hood.
  • I've always believed that superstring theory is going to be closer to the truth than the standard model. The standard model doesn't account for gravity, and I find that to be a huge failing. I like to think of the standard model and general relativity as good approximations, but I think superstring theory or maybe some other theory will eventually supplant the two as the unified theory that predicts "everything". I guess we'll see.

  • Shouldn't this story make the front page? I mean, I've seen a lot more mundane stuff make it...
    --
  • They didn't use taus because:

    • taus don't live long enough to reach the necessary accuracy
    • we can't make a polarized beam of taus
    • taus have many, many more decay paths than do muons, making the analysis much much more difficult

    I'm sure there are a number of other problems, but these are the real killers.

  • by krlynch ( 158571 ) on Friday February 09, 2001 @10:13AM (#444658) Homepage

    But the calculations of the cloud of virtual particles that surround the muon are insanely difficult. I'm curious if perhaps an error may lie in wait.

    It is possible that an error lies in wait; I know that I make them all the time in these types of calculations :-)

    That said, the calculations themselves are not really that difficult in principle; the procedure is quite rigorous and very well understood. Doing the calculations by hand is tedious and error prone, but most of these calculations today are automated. And they are usually done independently by multiple small groups of theorists, so that the results can be cross checked and critiqued by interested observers. The odds of a major error hiding in the theoretical papers are very small. (There are of course some caveats, but they are technical and uninteresting, and they are, I believe, included in the theoretical error bars.)

    I think it might be a little early to begin the last rites for the standard model.

    This is probably a good statement: the important thing to note about this new result is that it doesn't quite reach the level of scientific "certainty" (much like the noise from a few months ago out of LEP concerning the Higgs boson). This result differs from the Standard Model result by "2.6 sigma", whereas "scientific confidence" requires 3 sigma, and "scientific certainty" requires 5 sigma (which is MUCH MUCH stronger than 3 sigma, not the piddling difference it sounds like it is). What is truly interesting about this result is that, for the first time, we have a reliable result which differs from the SM result by so much. Get the paper and look at the last figure. If the experiment reaches its ultimate goal, and the central value doesn't move towards the SM value by too much, their ultimate result will definitely be greater than 3 sigma, and will probably exceed 5 sigma.

    THAT's when we really rejoice :-)
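
    To put those sigma levels in perspective, here is a minimal sketch (assuming the usual Gaussian convention for quoting significances) that converts an n-sigma discrepancy into the two-sided probability of getting a fluctuation at least that large by chance:

    # Two-sided Gaussian tail probability for an n-sigma deviation.
    from math import erfc, sqrt

    def chance_probability(n_sigma):
        """Probability of a fluctuation of at least n_sigma, in either direction."""
        return erfc(n_sigma / sqrt(2.0))

    for n in (2.6, 3.0, 5.0):
        print(f"{n:.1f} sigma -> {chance_probability(n):.1e}")

    # Roughly: 2.6 sigma -> 9e-3 (about 1 in 100), 3 sigma -> 3e-3 (1 in 370),
    # 5 sigma -> 6e-7 (about 1 in 1.7 million) -- hence "MUCH MUCH stronger".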

    they chose the muon for a reason

    Actually, they chose it for a couple of reasons:

    • the muon is a "lepton", so it doesn't feel the strong force. The strong force at low energies (like this experiment) is "non-perturbative" which makes the calculations much much tougher (for example, we still can't calculate the equivalent number for the proton with any certainty). Thus, if you work with leptons, you minimize the contributions from the strong force to the number you are measuring.
    • it is easy to make polarized muons, which is a technical, but important property for this experiment.
    • muons are almost stable: they live for about 64 microseconds in this experiment. That may not sound like much, but most of the particles we study survive for 10^{-20} seconds or less; relatively speaking, that is like the difference between a day and 10 times the age of the universe, so effectively they live forever. (A quick check of the 64-microsecond figure appears in the sketch after this list.)
    • they can only decay into electrons. This makes parts of the experiment really really easy, since they don't have any uncertainty from identification of the decay products.
    • And finally, the most important reason: no one can figure out how to use anything else to do this measurement :-)
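
    As a rough check of the lifetime figure in the list above, using the textbook rest-frame muon lifetime of about 2.2 microseconds and the ring's Lorentz factor of roughly 29.3 (both approximate):

    # Time dilation stretches the 2.2 microsecond rest-frame lifetime to the
    # ~64 microseconds the muons survive in the storage ring.
    MUON_LIFETIME_REST = 2.197e-6   # seconds, rest frame
    GAMMA_RING = 29.3               # approximate Lorentz factor in the ring

    lab_lifetime = GAMMA_RING * MUON_LIFETIME_REST
    print(f"lab-frame lifetime ~ {lab_lifetime * 1e6:.1f} microseconds")  # ~64.4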
  • I gotta admit, that was my first thought when I read this. I went to university for physics thinking I would go into particle physics and maybe do exactly this...didn't work out that way, but boy does the excitement stay.

    Only a little OT: It's a shame that there isn't something amateur scientists could do that would be truly useful to particle physics. Amateur astronomers can watch for comets, novas, variable stars; birdwatchers can track migration patterns, watch for species far away from home or count local populations. I'd really like to be able to do something similar for particle physics, but while building my own cloud chamber would be really neat, the impression I get is that it would be just that: really neat, and not at all useful to science as a whole -- not when TeV accelerators are needed to really crack barriers...

  • Whoah, hey everyone, thanks for the suggestions. Some *very* interesting thoughts there.
  • But the calculations of the cloud of virtual particles that surround the muon are insanely difficult. I'm curious if perhaps an error may lie in wait. Apparently, their paper was only submitted to Phys. Rev. Letters February 8th.
    The paper is here [bnl.gov]. If you check the references, the theoretical calculation (done by someone else) dates back to 1999. This kind of calculation was first done in the 1950's, so I think it's pretty well understood. They give a range of uncertainty on the theoretical value, and it's not significant compared to the statistical error bars in the experiment.

    Tau would have produced a more measurable result (I assume), but crunching the numbers on it might be a nightmare
    In the paper, they say that the effect scales as the square of the mass, so yes, the tau would have produced a bigger effect. I'd guess the reason they didn't use taus is simply that their accelerator didn't have enough energy to produce taus. I don't see why "crunching the numbers" would be an issue. If you have a computer program set up to calculate the g-2 of the electron or muon, then I think all you should really have to do is change one variable to calculate g-2 of the tau. Anyhow, this is an experimental paper. The relevant calculations have been understood for a long time.
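
    For a sense of what that quadratic scaling buys you, here is a back-of-the-envelope comparison using rounded lepton masses (purely illustrative):

    # Sensitivity of g-2 to heavy virtual particles scales roughly as the
    # square of the lepton mass, so a tau measurement would in principle see
    # a much larger effect -- if taus could be stored and tracked at all.
    M_E, M_MU, M_TAU = 0.511, 105.66, 1776.9   # MeV/c^2, rounded

    print(f"tau vs muon sensitivity:      {(M_TAU / M_MU) ** 2:>7.0f}x")  # ~283x
    print(f"muon vs electron sensitivity: {(M_MU / M_E) ** 2:>7.0f}x")    # ~42,800x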
    The Assayer [theassayer.org] - free-information book reviews

  • It's a shame that there isn't something amateur scientists could do that would be truly useful to particle physics.

    One thing you might look at are lattice QCD calculations. People have figured out a way to do some low-energy strong force calculations (the ones that are traditionally very hard), but they take a lot of computing power, and some very impressive super-computers are being built to deal with them. I have no idea how much work has been put into it, but it might be very useful to produce a distributed system to do these calculations. Volunteers could contribute code and processor time.
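
    A half-serious illustration of the "farm it out" idea: lattice calculations average an observable over many statistically independent gauge configurations, and each configuration can in principle be generated and measured on a separate volunteer machine. Everything below (the function, the toy "observable") is a made-up placeholder, not real lattice QCD code:

    import multiprocessing
    import random

    def generate_and_measure(seed):
        # Stand-in for generating one gauge configuration and measuring an
        # observable on it; the real computation is enormously more expensive.
        rng = random.Random(seed)
        return sum(rng.gauss(0.0, 1.0) for _ in range(1000)) / 1000.0

    if __name__ == "__main__":
        # Each "configuration" is independent, so the work parallelizes
        # trivially; in a volunteer system the pool would be other people's
        # machines rather than local processes.
        with multiprocessing.Pool() as pool:
            results = pool.map(generate_and_measure, range(32))
        print("averaged observable:", sum(results) / len(results))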

  • I couldn't help but notice the degree of finesse required. That's fine, I'm no stranger to statistics. But the calculations of the cloud of virtual particles that surround the muon are insanely difficult. I'm curious if perhaps an error may lie in wait. Apparently, their paper was only submitted to Phys. Rev. Letters February 8th. I think it might be pretty neat to add some other exhibits to the zoo, but I think it might be a little early to begin the last rites for the standard model. Sure, its time is limited, but it's not up.

    But on the other hand, they chose the muon for a reason.... Tau would have produced a more measurable result (I assume), but crunching the numbers on it might be a nightmare.... I guess it all boils down to how concrete the expectations are for the muon's interactions with virtual particles. I guess that makes me curious, anyone know the answer?

  • It's worse than that. None of physics even attempts to explain the subjective perceptual experience. Even if it included gravity in a nice, tight package, they've still got a long way to go.
  • Well, this is close, but not quite right. (I'm a graduate student on the muon g-2 experiment; in fact, I'm in the control room waiting for beam now.)

    First of all, we measure two quantities: the anomalous precession frequency of the spin (that is, how fast the spin rotates with respect to the momentum of the muons) AND the strength of the magnetic field in which the muons are stored. g-2 is proportional to the ratio of these two values. The other factors in the equation are fundamental constants that have also been measured very precisely. There are really very few theoretical assumptions embedded in the measurement itself (some single-particle relativistic electrodynamics of spin motion), so we aren't working backwards.
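
    A rough numerical sketch of that ratio, plugging in approximate published numbers for the BNL ring (a field of about 1.45 tesla and an anomalous precession frequency near 229 kHz; treat both as illustrative rather than official values):

    # The anomaly a = (g-2)/2 follows from the ratio of the measured anomalous
    # precession frequency to the measured magnetic field:
    #   omega_a = a * (e/m) * B   =>   a = omega_a * m / (e * B)
    from math import pi

    E_CHARGE = 1.602176e-19   # C
    M_MU     = 1.883531e-28   # kg
    B_FIELD  = 1.45           # T, approximate storage-ring field
    F_A      = 229.1e3        # Hz, approximate anomalous precession frequency

    a_mu = (2 * pi * F_A) * M_MU / (E_CHARGE * B_FIELD)
    print(f"a_mu ~ {a_mu:.3e}")   # comes out near 1.17e-3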

    Second, the Standard Model consists of more than QED! At the level of statistics that we have, there are significant contributions from hadron loops (the strong interaction) and from W and Z bosons (the weak interaction).

    Our "confidence level" is between 98 and 99 percent, a 2.6 standard deviation discrepancy. At this level, we do not claim that we have made a discovery. We've only found a very suggestive hint that there might be a discrepancy. In fact, although it's been fun, I would say that we've almost gotten an embarassing amount of publicity out of our result.

    We don't test superstring theory at all. However, the more popular supersymmetry models can potentially lead to perturbations at the scale of our measurement.

    Thanks,
    Fred

  • As a participant in this experiment, I agree with your sentiment entirely. (One correction, though: it's actually a 2.6 sigma effect. This makes a rather significant difference in the associated probability.)

    We have the data on tape to reduce the error estimate by about a factor of 2. Analyzing this data is going to be my thesis topic, and I look forward to sharing the result with you in a year or so.

    By the way, one aspect of our analysis that hasn't been made apparent in some of the press reports is that it was done semi-blind. We measure the precession frequency and the magnetic field strength separately. Without both numbers, you can't get the physics result. Different teams analyze the two sets of data independently. Until the very end, hidden offsets are added to the numbers. This way, we can't bias the result by working towards (or away from!) a particular value.
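
    A toy illustration of that blinding scheme; the variable names, numbers, and offsets below are invented for the example, and the real offsets are of course generated and held outside the analysis code:

    import random

    # Hidden offsets, notionally held by someone outside both analysis teams.
    secret = random.Random(444649)
    offset_freq  = secret.uniform(-1.0, 1.0)
    offset_field = secret.uniform(-0.01, 0.01)

    true_freq, true_field = 229.1, 1.45   # made-up "measured" values

    # During the analysis, each team only ever sees its blinded number.
    blinded_freq  = true_freq + offset_freq
    blinded_field = true_field + offset_field

    # Only at the very end are the offsets removed and the physics ratio formed.
    ratio = (blinded_freq - offset_freq) / (blinded_field - offset_field)
    print("unblinded ratio:", ratio)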

  • Good question! Let me quote the point you're referring to:
    So that the muons don't spiral up or down and out of the ring, an electric field is used to confine them. The electric field could also affect the spin, except at a "magic" speed where the electric-field effect vanishes. This interaction of the muon spin and the electric field is a specific consequence of Einstein's special theory of relativity. The experiment is performed with muons at this magic speed, namely 99.94 percent the speed of light.

    The first answer to the question is that the magnetic field of the storage ring is chosen to store particles at the magic momentum. This field is very precisely calibrated and monitored, since it appears in the denominator of the expression by which we calculate the anomalous moment from the precession frequency. Particles that are moving much slower or faster spiral in or out of the ring without being stored.
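
    As a rough cross-check of the "magic" numbers quoted above, taking the muon anomaly a ≈ 0.00116592 as given:

    # At the "magic" Lorentz factor gamma = sqrt(1 + 1/a), the electric-field
    # term in the spin precession cancels, so the focusing quadrupoles do not
    # disturb the measurement.
    from math import sqrt

    a_mu = 0.00116592                      # muon anomaly, approximate
    gamma_magic = sqrt(1.0 + 1.0 / a_mu)
    beta_magic  = sqrt(1.0 - 1.0 / gamma_magic**2)

    print(f"gamma ~ {gamma_magic:.2f}, speed ~ {100 * beta_magic:.2f}% of c")
    # -> gamma ~ 29.30, speed ~ 99.94% of c (momentum around 3.09 GeV/c)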

    We measure the momentum of the muons using what we call the "fast rotation." The muons are injected into our storage ring in a tight bunch. At early times after injection, the bunch structure modulates the time spectrum that we see in the detectors. We can measure the times at which we see the bunch pass by and from this determine the momentum distribution. The central value is correct.

    Finally, the finite momentum spread (about 0.6 percent) of the beam means that some of the particles do have a momentum which differs from the magic momentum. We apply a correction for this; we determine it from computational simulations of the beam dynamics. The magnitude of the correction is about 0.5 parts per million (ppm), with an error estimate of less than 0.1 ppm. (Our result has an error estimate of 1.3 ppm, so the correction is small on this scale.)

    Thanks,
    Fred

  • Every model is incomplete. "This suggests that the Standard Model is incomplete" -- Every model is finite and cannot explain complete reality, which is infinitely complex. New and better models will always be invented. Hopefully they will come closer and closer to the limit (as in asymptote) - reality.
  • by buga ( 314334 ) on Thursday February 08, 2001 @09:03PM (#444669)
    Even with the great experimental success of the standard model, physicists have known from the start that it was not complete. It has at least 19 arbitrary parameters. This is very far from most physicists' dream of a single coupling constant that governs every interaction in the universe. Note that the term "standard model" was used instead of "standard theory" to acknowledge the presence of these numerous free parameters, even though it is really a scientific theory in every sense.

    Of course, even before this so-called "dent", there is the fact that there is insufficient experimental data to confirm observation of the Higgs boson. But the previous success of the standard model leads us to believe that this confirmation will eventually come. And naturally, it was expected that at higher energies the standard model would need to be replaced with something more general.

    - Sim.


  • Don't give up hope yet:

    TABLETOP LASER ACCELERATORS ARE BRIGHTER AND FASTER
    Physics News 510, November 1, 2000
    http://newton.ex.ac.uk/aip/physnews.510.html

    Table-Top Fusion
    Significant Physics on a Small Scale
    Creating fusion used to be best left to suns and high-priced devices. Now scientists have managed it with a mere million-dollar machine.
    http://www.abcnews.go.com/sections/science/DailyNews/tablefusion990324.html

    Yankee Ingenuity: Dartmouth Physicists Convert A Microscope Into A Free-Electron Laser
    http://www.sciencedaily.com/releases/1998/11/981112075829.htm

    Such "tabletop" accelerators are currently in the million dollar range. But given the interest of physicists to make use of such devices, the cost will probably in the 10 to 100 thousand dollar range within 10-20 years.

    Bob Clark
