Single-Ion Clock 100 Times More Accurate Than Atomic Clock

New submitter labnet writes with this excerpt from news.com.au: "University of New South Wales School of Physics professor Victor Flambaum has found a method of timekeeping nearly 100 times more accurate than the best atomic clocks. By using the orbit of a neutron around an atomic nucleus he says the system stays accurate to within 1/20th of a second over billions of years. Although perhaps not for daily use, the technology could prove valuable in science experiments where chronological accuracy is paramount, Prof Flambaum said."
  • yeah but (Score:4, Funny)

    by Anonymous Coward on Tuesday March 13, 2012 @08:32AM (#39337925)

    until it comes with indiglo i don't want it

  • Eventually... (Score:5, Interesting)

    by Anonymous Coward on Tuesday March 13, 2012 @08:35AM (#39337947)

    Eventually you'll be so accurate that walking by the thing will cause enough relativistic distortions that you can no longer claim to have any accuracy at all.

    • by Bowdie ( 11884 ) on Tuesday March 13, 2012 @08:43AM (#39338045) Homepage

      Grr! You changed the clock by observing it!

      Damn kids!

      • by K. S. Kyosuke ( 729550 ) on Tuesday March 13, 2012 @10:07AM (#39338937)
        The summary is misleading at best anyway:

        Although perhaps not for daily use, the technology could prove valuable in science experiments where chronological accuracy is paramount, Prof Flambaum said.

        As the different series of Star Trek have already shown us, the words "chronological accuracy" and "Paramount" do not belong in the same sentence, much less do they deserve to be joined by the copula.

      • Don't forget everyone also has to agree on where the clock should be, as relativistic effects creep in. Put that clock at the equator and compare it to one at one of the poles after a few decades, uh oh!

    • Re:Eventually... (Score:5, Informative)

      by gomiam ( 587421 ) on Tuesday March 13, 2012 @08:44AM (#39338063)
      It's even worse. IIRC, current atomic clocks are now so precise that stacking one on top of the other (say, a 20 cm height difference) is enough to make them start drifting apart due to the difference in gravitational field strength.
      • by Yvan256 ( 722131 ) on Tuesday March 13, 2012 @08:47AM (#39338093) Homepage Journal

        So what you're saying is that by stacking a few dozen alarm clocks on top of each other, I can get one more hour of sleep?

        Cool!

      • by necro81 ( 917438 )
        Stacking any two clocks on top of each other would cause them to drift due to relativistic effects. The only reason atomic clocks are special in this regard is that you can actually measure the effect over the course of something less than a few million years.
        • by msauve ( 701917 )
          "Stacking any two clocks on top of each other would cause them to drift due to relativistic effects."

          It depends on where their timebase is. If you have a clock which receives time/frequency from GPS or WWV*, or the AC power line, for example, it won't matter.

          That might bring up the definition of "clock." I suppose one might argue that a clock must be self-contained, but most people would agree that their clocks are clocks, and many are driven by the AC power line.
        • by dak664 ( 1992350 )

          Stacking two clocks on top of each other would cause them to phase lock due to every known experimental defect.

      • Aren't the rotation of the earth, the shifting of the continental plates, the movement of the earth around the sun, and every other motion throwing off the clock? Actually, how does one define "not moving"? Moving is always relative to something else. If I stand still, I'm not moving relative to the ground, but I am moving relative to the sun, which is moving relative to the galaxy, which is moving relative to all the other galaxies. Is there a scientific definition of "not moving" that doesn't use other objects as a reference?
        • by drerwk ( 695572 )
          See http://en.wikipedia.org/wiki/Cosmic_microwave_background_radiation#CMBR_dipole_anisotropy [wikipedia.org] for my preferred choice of movement relative to the universe. I'll leave it to you to decide if CMB rest frame is good enough.
        • No, at least according to special and general relativity: there is no preferred direction in the universe, and there is no such thing as "absolutely still." There's no way to not move in a universe where space itself is moving as well.

          Movement must always be defined in relative terms, since general relativity is background independent.

          Similarly, when dealing with particles, there's no "absolutely still," since that would be the same as absolute zero, which is an asymptotic physical limit on temperature.

    • Re:Eventually... (Score:5, Informative)

      by huge ( 52607 ) on Tuesday March 13, 2012 @08:53AM (#39338163)
      As the old saying goes: "A man with one clock knows what time it is. A man with two clocks is never sure."
    • I could be wrong, but isn't the definition of one second based on some atomic phenomenon? (All the sloshing water and wind make the rotation of the planet a non-starter...)

      How can a new method be more accurate than the method we use to define time?

      • Because the measurement used to define time drifts slightly.

        The second used to be defined based on the Earth's rotation, but cesium atomic clocks became so much more accurate than the Earth itself that the standard was changed to be based on the behavior of a cesium atom. The standard can always be changed again.

      • by vlm ( 69642 )

        I could be wrong but isn't the definition of one second based on some atomic phenomenon? (All the sloshing water and wind makes the revolution of the planet a non-starter...)

        How can a new method be more accurate than the method we use to define time?

        Jitter phenomena, a.k.a. phase noise. You'd like to think something like a Rb clock watches exactly one atom and counts that single atom, but it's a lot more analog than that.

        Man who has one clock knows what time it is, as you say. Man who has two clocks has no freaking idea what time it is. Man who has at least three clocks and lets ntpd or equivalent do its thing for a couple of days/weeks has an excellent idea what time it is and how accurate each clock is relative to "the group".
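        A minimal Python sketch of the "group" idea, with made-up offsets; real ntpd uses selection and clustering algorithms rather than a plain mean:

          import numpy as np

          # Readings from three clocks at the same instant (seconds; invented values).
          readings = np.array([1000.0021, 999.9987, 1000.0004])

          ensemble_time = readings.mean()     # naive group estimate of "the" time
          offsets = readings - ensemble_time  # each clock's error vs. the group
          print(offsets)                      # shows which clock runs fast or slow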

      • by msauve ( 701917 )
        "How can a new method be more accurate than the method we use to define time?"

        Because the current definition [bipm.org], based on a hyperfine transition of electrons in the Cesium atom, cannot be practically realized. The "definition refers to a caesium atom at rest at a temperature of 0 K." Neither of those conditions can be realized in the real world (there's gravity, and electromagnetic fields, etc.), and corrections are imperfect.

        The new method discussed in the article allows one to realize a better-performing standard.
    • At which point we'll use it for an RNG seed generator.

    • It will still maintain that accuracy, just within its own inertial frame of reference. As long as the scientists using the clock are able to account for this in their experimental setup and analytical models then they should be able to retain that accuracy.
  • by gcnaddict ( 841664 ) on Tuesday March 13, 2012 @08:36AM (#39337967)

    Although perhaps not for daily use, the technology could prove valuable in science experiments

    You kidding me? The prospect of GPS-guided bullets accurate to the millimeter will have the US military pursuing this in next-gen GPS satellites as soon as the technology is viable. Hell, this'll be the most valuable update to military hardware in decades.

    • Although perhaps not for daily use, the technology could prove valuable in science experiments

      You kidding me? The prospect of GPS-guided bullets accurate to the millimeter will have the US military pursuing this in next-gen GPS satellites as soon as the technology is viable. Hell, this'll be the most valuable update to military hardware in decades.

      I really don't think the distance a GPS-guided bullet travels will require the additional accuracy provided by this new clock. If your target is moving so fast that you need more accuracy than an atomic clock provides then you shouldn't be using a bullet.

      • by rossdee ( 243626 )

        I think that a laser guided bullet would be more likely to work given that GPS targeting is not so good at hitting moving targets.

    • Although perhaps not for daily use, ...

      You kidding me? The prospect of GPS-guided bullets accurate to the millimeter ....

      Snipe much?

    • by Pope ( 17780 )

      Have you been watching "Runaway" again?

  • Link to actual paper (Score:5, Informative)

    by foo1752 ( 555890 ) on Tuesday March 13, 2012 @08:42AM (#39338039) Homepage
    • Hmm, it looks like the clock isn't as accurate as claimed...

      It's based on measuring a single atom of Th-229, which has a half-life of 7,340 years. So every so often your fancy new ion clock is going to randomly drop dead (unless you have multiple units and are comparing the outputs). Then you need to isolate a steady supply of ionized Th-229 (which is a decay product of U-233, half-life ~160,000 years) to repair the dead modules.

  • by michelcolman ( 1208008 ) on Tuesday March 13, 2012 @08:43AM (#39338043)

    And here I was, thinking that neutrons were inside the nucleus and electrons were orbiting around it. What's going on here? How can a neutron orbit a nucleus? It's an actual question; I know the atomic models I was once taught are way out of date (by a couple of centuries, probably), but I've never heard of neutrons orbiting nuclei.

  • Why can't I buy a wristwatch with this technology built into it?
  • by Greyfox ( 87712 ) on Tuesday March 13, 2012 @08:47AM (#39338097) Homepage Journal
    It's very easy to fuck it up, as we saw with the FTL neutrino experiment a few months ago. I've seen a lot of business requirements specify that level of precision because they think it would be cool, and it just turns into a nightmare later. Hell, you're lucky to agree within tens of seconds. Take POSIX (PLEASE! Heh.) POSIX specifies that time is measured in seconds from midnight, Jan 1, 1970 UTC. Seems easy enough, right? Well, it turns out UTC specifies accounting for leap seconds, so you should subtract 33 seconds (IIRC) over the course of those 42 years. POSIX also specifies that leap seconds not be accounted for. Brilliant! Then it's not UTC!

    Now here's where it gets fun: the Linux kernel may or may not actually handle leap seconds, depending on how you configure it. And what happens if you're syncing off NTP? Or GPS? It's a problem if you need to convert to TAI or TDT. If you adjust for leap seconds and your system doesn't measure them, you could end up being over 60 seconds wrong versus what time it "really" is. When you're trying to communicate with a satellite going 2000 miles a second, that's a problem. Because you'll be pointing your antenna over there, and the satellite's really over here!

    It'd be nice if some physics professor *cough* could solve those problems before making some shit that can be accurate for a billion years! See what I did there? That was just passive aggressive right there, wasn't it? Too much Portal, lately...
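    For illustration, a toy Python sketch of the POSIX-to-TAI correction described above. The leap-second table is deliberately partial and the helper names are invented; real code should use a maintained table:

      import datetime

      # Partial table: (date a leap second took effect, TAI-UTC afterwards).
      # TAI-UTC was 10 s at the start of 1972; by early 2012 it was 34 s.
      LEAP_TABLE = [
          (datetime.date(1972, 7, 1), 11),
          (datetime.date(2009, 1, 1), 34),
      ]

      def tai_minus_utc(when):
          """TAI-UTC in whole seconds on a given date (partial table)."""
          offset = 10
          for effective, value in LEAP_TABLE:
              if when >= effective:
                  offset = value
          return offset

      # POSIX time pretends leap seconds never happened, so converting a
      # POSIX timestamp to TAI means adding the accumulated offset back in.
      def posix_to_tai(posix_ts, when):
          return posix_ts + tai_minus_utc(when)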

    • It'd be nice if some physics professor *cough* could solve those problems before making some shit that can be accurate for a billion years!

      Why would they? Physics professors typically don't care what time it is. They only care about how long it's been since event "X". There is clear evidence for this in how they always seem to come late to class. Damn those leap seconds.

  • I can't even get my atomic watch to set properly from the time signal that exists now.

    I must be too far from Denver for the signal to get to my watch. Which sucks, since it defeats the whole purpose of having that.

    • by tibit ( 1762298 )

      Your watch is probably doing a rather lame decoding of the signal. Good receivers directly digitize the incoming signal, do filtering and demodulation numerically, and can correlate it with a model signal over minutes or even hours to get a lock. A friend of mine, a real RF nerd, has made such a receiver and it works where you can't even see the damn signal on a spectrum analyzer, with a decent antenna, on the narrowest bandwidth setting (10 or 15Hz IIRC). I think it routinely worked for him when he was sta

      • Your watch is probably doing a rather lame decoding of the signal.

        Yeah, I figured that part out. :-P

        It's a relatively inexpensive Casio, so it's not like I expected a great amount of technology.

        Was just a little bummed that it has rarely (if ever) been able to set from the atomic signal -- that was supposed to be the cool part, and what I could use as a baseline to keep my other watches set correctly.

        • by tibit ( 1762298 )

          I don't know how much help it'd be, but make sure the watch is near a window or, less ideally, the outermost wall, closest to Denver. It should attempt synchronization past midnight CST, when the station has greatest coverage and thus strongest signal. During the day you can pretend the signal is not there, because if that watch can't sync at night, it'll be hopeless during the day.

          • I don't know how much help it'd be, but make sure the watch is near a window or, less ideally, the outermost wall, closest to Denver.

            Yeah, tried standing in the back yard facing mostly west, but not much luck. But, that was during the day.

            Sadly, no window faces the right direction that is helpful here.

            Trying to sync after midnight CST (isn't Denver MST?) might help. It's not the end of the world ... it hasn't worked yet. :-P

    • by fnj ( 64210 )

      Agreed. Sheesh, you'd think they could afford to raise the power of the signal, or add more sites.

  • by Viol8 ( 599362 ) on Tuesday March 13, 2012 @08:52AM (#39338153) Homepage

    If an atomic clock is your most accurate timepiece then how on earth can you tell if something is more accurate?

    Can someone explain?

    Also, given that a second is defined in terms of the caesium atom as used in atomic clocks, then surely anything that deviates from this is by definition LESS accurate (if you see what I mean)?

    • by rossdee ( 243626 )

      If the accuracy is defined as fractions of a second over a billion years, how do they know it's going to last a billion years?

      • by vlm ( 69642 ) on Tuesday March 13, 2012 @09:35AM (#39338583)

        If the accuracy is defined as fractions of a second over a billion years, how do they know it's going to last a billion years?

        Run the reciprocal and test your frequency. You know that saying about how in Europe they think hundreds of miles (er, km) is far away and hundreds of years is recent, but in the US they think hundreds of miles is a daily commute and hundreds of years is ancient? Well, billions of seconds is a long time, but billions of cycles per second is actually medium-to-low frequency in the RF world nowadays, depending I guess on the industry (that would still be considered kind of fast in the PLC/VFD field, but truly ancient great-grandfatherly stuff in the radar world).

        So you've got three atomic clocks (nowadays an eBay-special Rb clock is about $100 surplus) and use them to drive three sets of ham radio microwave experimenters' gear at 10 GHz (which is not cutting edge anymore). Hmm, 10 billion Hz. Suddenly fractional parts per billion become fractional Hz, which a piano tuner has no real problem detecting.

        This isn't exactly how it works, but as a thought experiment: you hook up your 10-gig Ethernet, drive it with this clock, and hack the driver for variable-length packets... If you think you have a better-than-0.1 ppb clock, then you should be able to transmit a billion-bit packet and not fall out of frame sync (which at 10 gigs only takes a tenth of a second). This is not exactly the modulation method used by real 10GigE and not exactly how you test it, but it's within the realm of the general idea.

        Good luck doing modern ham radio stuff like bouncing microwave signals off the moon using the more exotic low-SNR digital modes without at least ppb-level frequency accuracy. Frequency stability is a factor at 10 GHz down to at least 10^-9 for that kind of work... luckily 10^-11 is cheap and off the (eBay) shelf for $200 or so in GPSDOs or old Rb oscillators.
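        The arithmetic behind that thought experiment, as a quick Python sketch (illustrative numbers only):

          f = 10e9      # 10 GHz carrier, in Hz
          frac = 1e-10  # claimed fractional accuracy: 0.1 ppb

          df = f * frac              # absolute frequency error: 1.0 Hz at 10 GHz
          packet_time = 1e9 / 10e9   # a billion bits at 10 Gb/s: 0.1 s
          print(df * packet_time)    # 0.1 cycle of slip over one packet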

    • by tibit ( 1762298 ) on Tuesday March 13, 2012 @09:30AM (#39338539)

      The same way it always was. Think of how you'd do it in any sort of mechanical measurements. You don't need the same level of accuracy to determine that something is more accurate. Most measurements have nice properties that must hold when you repeat the measurements, such as linearity. All you have to do, then, is to use the assumedly more accurate device to characterize the errors of a less accurate one. If you can reproduce your results and various expected properties hold, then there's no other explanation but that your new device is in fact more accurate.

      The deal with the caesium atom is that it only defines a second to a certain accuracy. If you have a better time reference, it's not by definition less accurate; it's just that your standard has accuracy only to so many decimal digits, and when you're past that you must get a better standard. You can use the better reference to characterize the inaccuracies in your standard (say, various drifts, or phase noise in the case of time references). Eventually, you redefine the second using the better standard, and you do it pretty much by appending some arbitrarily chosen digits to a new definition that reproduces the old one.

      They had the second defined somehow; then they measured it using the caesium clock, got a bunch of results, averaged them, and said: that's the new second. A whole bunch of digits of the new definition were pretty arbitrary -- the original definition wasn't able to provide stable digits all the way down. The same thing will happen again: the new clock will be used to measure the cesium one, they'll average things, and the new second will be a few orders of magnitude more cycles of this nuclear clock; it will match the old clock within the old clock's accuracy, but the newly added digits will be entirely arbitrary. This is how it has happened with pretty much all the other measurements (distance, weight, etc.).
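      A crude numerical sketch of that redefinition step (all numbers invented): measure the old standard many times with the new clock, average, and freeze the resulting count as the new definition; the trailing digits are whatever the average happened to give.

        import numpy as np

        rng = np.random.default_rng(0)

        # Pretend the new clock ticks this (invented) number of times per
        # old-standard second, and each comparison is noisy in its last digits.
        true_ticks = 1_234_567_890_123_456
        comparisons = true_ticks + rng.normal(0, 5e3, size=10_000)

        new_definition = round(comparisons.mean())  # adopt this count as "1 s"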

    • 1) You make two and see by how much they differ after a certain time. (For further reading, see Allan variance; a minimal sketch follows below.)
      2) As with all the base units, we must 'define' the second in terms of something physical, which we can measure, so that we can use this abstract idea in the real world. This real-world embodiment is imperfect, and it is an engineering challenge to make something which better approximates the idea. For illustration, consider the kilogram, which is defined by a lump of metal in Paris. In principle,
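      A minimal Python sketch of point 1, computing a non-overlapping Allan deviation from time-error samples of one clock measured against another (invented noise level, illustrative only):

        import numpy as np

        def allan_deviation(x, tau):
            """Non-overlapping Allan deviation from phase (time-error)
            samples x, in seconds, taken every tau seconds."""
            y = np.diff(x) / tau  # average fractional frequency per interval
            return np.sqrt(0.5 * np.mean(np.diff(y) ** 2))

        # Example: white phase noise between two clocks, sampled once a second.
        x = np.random.default_rng(1).normal(0, 1e-12, size=100_000)
        print(allan_deviation(x, tau=1.0))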
    • by Guppy06 ( 410832 )

      The most accurate timekeeper is actually a battery of atomic clocks, with an average taken (after all known relativistic distortions are accounted for), called TAI. If your new clock hews to that average better than the individual atomic clocks used to generate that average, it's more accurate.

      • Wouldn't an average between all those clocks raise the standard deviation of the (granted) extremely precise measurement?

        Wouldn't it be better to just use a single clock?

        • by pavon ( 30274 )

          No, the average of independent measurements has a lower variance [wikipedia.org] than the individual measurements.

          • Wow, amazing. I'd expect the average of all those normal curves to result in a fatter normal curve. I'm still wondering whether a simple shift in the time axis means they are uncorrelated, though.

            I guess I'll have to reread a few books on quality management...
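            A quick simulation of the point (made-up noise level): the ensemble average comes out narrower, not fatter.

              import numpy as np

              rng = np.random.default_rng(2)
              sigma, n_clocks, trials = 1e-9, 20, 100_000  # 1 ns of noise per clock

              single = rng.normal(0, sigma, size=trials)
              ensemble = rng.normal(0, sigma, size=(trials, n_clocks)).mean(axis=1)

              print(single.std())    # ~1e-9
              print(ensemble.std())  # ~1e-9 / sqrt(20): smaller, not fatter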

    • Well, since the second is defined based upon atomic oscillation now:

      sec·ond (sĕk′ənd)
      n.
      1. Abbr. sec.
      a. A unit of time equal to one sixtieth of a minute.
      b. The time needed for a cesium-133 atom to perform 9,192,631,770 complete oscillations. See Table at measurement.

      I can only see this as less accurate!
  • by NEDHead ( 1651195 ) on Tuesday March 13, 2012 @08:52AM (#39338157)

    It's about time

  • Preprint on arXiv (Score:4, Informative)

    by eis2718bob ( 659933 ) on Tuesday March 13, 2012 @09:01AM (#39338261)
    A preprint is available on arXiv at http://arxiv.org/abs/1110.2490 [arxiv.org]

    A nuclear transition in triply-ionized 229Th has been found which is particularly insensitive to external magnetic fields and electron configuration, which gives the potential for a very stable clock, several orders of magnitude better than current clocks if phase comparisons can be made across a scale of days or weeks. The transition energy corresponds to 163 nm (in the ultraviolet). To take advantage of this clock, an extremely stable laser at this wavelength (using current best clocks) will need to be created.
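    A rough sketch of why a 163 nm transition helps, comparing its frequency against the cesium definition of the second:

      c = 299_792_458             # speed of light, m/s
      wavelength = 163e-9         # 163 nm transition

      f_nuclear = c / wavelength  # ~1.84e15 Hz
      f_cesium = 9_192_631_770    # Hz, the current SI second

      print(f_nuclear / f_cesium) # ~2e5: far more "ticks" per second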
  • How can you tell?? (Score:4, Interesting)

    by mooingyak ( 720677 ) on Tuesday March 13, 2012 @09:06AM (#39338319)

    I've always wondered, with regard to the accuracy of clocks like this, how can you actually tell how accurate it is?

    • Essentially, when they say "clock" what they mean is "stable oscillator": they have a source of (in this case) ultraviolet light whose frequency varies hardly at all. Since this is a purely theoretical exercise, they are simply calculating how much stray electric and magnetic fields and other problems would be expected to vary the frequency. To check experimentally, I think they'd need two such sources and then see how the relative phase of the light changes over time (after allowing for relativistic effects).

  • It's an exciting idea, and it's streaks ahead of 'traditional' microwave-transition atomic clocks. These do not represent the state of the art, however, for which one should look at the experimentally demonstrated ~9e-18 accuracy by the Wineland group at NIST http://arxiv.org/abs/0911.4527v2 [arxiv.org] ; http://www.nist.gov/physlab/div847/grp10/ [nist.gov] , or the strontium ion clocks at NPL (Teddington, UK). Essentially, the higher the frequency, the more clicks you get in a certain time, and the more accurate your clock can be.
  • by hcs_$reboot ( 1536101 ) on Tuesday March 13, 2012 @09:32AM (#39338555)
    Can't find one even myself! Sounds like it's no fun anymore :-|
  • by sootman ( 158191 ) on Tuesday March 13, 2012 @09:37AM (#39338627) Homepage Journal

    If Server A has 90% uptime and Server B has 99% uptime, that does not mean that Server B is up 10x more than Server A, even though Server A is down 10x more than Server B. In fact, Server B is only 10% better than Server A. Or, 1/10 as bad.*

    So, while the old clock may drift 100x more than this new one in a certain amount of time, or this new one might last 100x longer before drifting a certain amount (or whatever--the .au article is total puff and I don't care enough to look at the source), it is almost certainly not 100x more accurate. At best, it's 1/100th as inaccurate.

    * The difference between 36 days of downtime per year versus 4 days might be the difference between "useful" and "completely worthless", making Server B 100x better, but that's not what we're measuring here.
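    The server analogy as quick arithmetic:

      year_days = 365.0
      down_a = year_days * (1 - 0.90)  # 36.5 days down at 90% uptime
      down_b = year_days * (1 - 0.99)  # 3.65 days down at 99% uptime

      print(down_a / down_b)  # 10.0: A has 10x the downtime of B
      print(0.99 / 0.90)      # ~1.10: B is only ~10% "more up" than A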

  • It all seems like an unnecessary gain.

    Kind of like choosing a car that can reach 210mph over one that can only do 150mph when the national speed limit is only 70mph.

    Yes, I know the figures don't show 100x, but it just seems that it's pointlessly better than the current best clock, which is already better than most people would ever need.

    • by bkaul01 ( 619795 )

      From TFS:

      Although perhaps not for daily use, the technology could prove valuable in science experiments where chronological accuracy is paramount, Prof Flambaum said."

      This isn't intended for "most people," but for very precise scientific experiments.

  • by Myopic ( 18616 ) * on Tuesday March 13, 2012 @10:24AM (#39339173)

    So, honest question, how do you measure the accuracy of the world's most accurate clock? I mean, what do you measure it against?

    • by guises ( 2423402 )
      You measure it against itself. Use your clock to measure the duration of a repeating event. The event itself is unlikely to be perfectly regular, but if the irregularity of the event is known then this is unimportant; you can compensate for it. Now look at the deviation of your measurements: if the deviation is small, then the clock is accurate.
  • Come on guys, more opportunities for faster-than-light travel! Maybe not yet for mortals, but for the particles shot through a tunnel in the Alps, all it takes is a few bad connections and some inaccurate clocks, and superluminal speed becomes a reality.
