How To Build a Quantum Telescope

KentuckyFC (1144503) writes "The resolving power of telescopes is limited by the diffraction limit, a natural bound on resolution caused by the way light diffracts as it passes through a lens. But in recent years, physicists have worked out how to use quantum techniques to beat the diffraction limit. The trick is to create a pair of entangled photons, use one to illuminate the target and the other to increase the information you have about the first. All this is possible in the lab because physicists can use their own sources of light. Indeed, last month, physicists unveiled the first entanglement-enhanced microscope that beats the diffraction limit. But what about astronomy, where the light comes from distant astrophysical sources? Now one physicist has worked out how to use quantum techniques to beat the diffraction limit in telescopes too. Her idea is to insert a crystalline sheet of excited atoms into the aperture of the telescope. When astrophysical photons hit this sheet, they generate an entangled pair of photons. One of these photons then passes through the telescope to create an image while the other is used to improve the information known about the first and so beat the diffraction limit. Of course, all this depends on improved techniques for increasing the efficiency of the process and removing noise that might otherwise swamp the astrophysical signal. But it's still early days in the world of quantum imaging, and at least astronomers now know they're not going to be excluded from the fun."
  • > excluded form the fun

    from* the fun.

    • by Ken_g6 ( 775014 )

      Also, it's "early days in the world quantum imaging". So either this only works for seeing exoplanets, or the word "of" is missing.

      • by Theovon ( 109752 )

And of course, everyone else on Slashdot waits with bated breath to see you and your ilk post grammar complaints. Surely, nobody just gets over the minor typos and actually concentrates on the article, which (unlike so many other articles) is actually really interesting news for nerds.

        What astounds me is the arrogance of some people who seem to imply by their behavior that they believe that they themselves never make mistakes. I would assert that losing sight of the forest for the trees (what’s mor

    • by Soulskill ( 1459 ) Works for Slashdot

      I've updated to fix. Thanks.

  • Mind blown (Score:4, Interesting)

    by dargaud ( 518470 ) <slashdot2@nOSpaM.gdargaud.net> on Monday April 07, 2014 @08:23AM (#46682847) Homepage
I already had my mind blown when active mirrors managed to get rid of turbulence. But this is another thing entirely, getting rid of diffraction!?
  • by Lumpy ( 12016 ) on Monday April 07, 2014 @08:24AM (#46682857) Homepage

    Please Please, call the control system "ziggy".

  • by Anonymous Coward on Monday April 07, 2014 @08:42AM (#46683009)

The field being produced by the telescope optics is still the same, as the same primary mirror, secondary, etc. is being used to form the image. Yes, you can use multiphoton processes, even ones that are promoted by entangled photons, to produce apparently narrowed two-photon wavefunctions. However, this two-photon wavefunction is still derived from the ordinary-resolution field created by the telescope optics. Therefore it seems to me that little is to be gained by using entangled-photon detection to augment a process of image formation that is still fundamentally limited by the telescope aperture size.

Similar arguments have been used for other forms of imaging (e.g. microscopy and optical coherence tomography), and they all have this issue, as the image formation process is still essentially a linear scattering process. There was some excitement around quantum lithography; however, even that has the problem that the probability of two-photon processes can be quite small even with entangled photons. For inherently multiphoton processes, such as two-photon absorption, stimulated Raman scattering, etc., there may be an advantage in increasing resolution and lowering dose, but I don't see much of a benefit to improving an instrument where the image formation process is a linear imaging process.

    • by dak664 ( 1992350 ) on Monday April 07, 2014 @09:44AM (#46683631) Journal

Yes, and what's more, diffraction causes no fundamental limit to resolution; it just happens to be the distance between the first zeroes of an interference function. For two point sources of equal intensity that leads to an easily seen contrast difference of around 25%, but trained observers can detect 5%. On electronic displays the contrast can be cranked up arbitrarily.

      The fundamental limit to resolution is signal-to-nose.
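
That ~25% figure is easy to check numerically. For two equal, incoherent point sources separated by the Rayleigh distance, the intensity at the midpoint dips roughly 26% below the peaks. A minimal sketch in Python using scipy's Bessel function (the constant 3.8317 is the first zero of J1):

```python
# Contrast dip between two equal point sources at the Rayleigh
# separation, modeled as incoherent (intensity-additive) Airy patterns.
import numpy as np
from scipy.special import j1

def airy(x):
    """Normalized Airy intensity pattern (2*J1(x)/x)^2, with airy(0) = 1."""
    x = np.asarray(x, dtype=float)
    out = np.ones_like(x)
    nz = x != 0
    out[nz] = (2 * j1(x[nz]) / x[nz]) ** 2
    return out

rayleigh = 3.8317  # first zero of J1: Rayleigh separation (dimensionless units)

# Two sources centered at +/- rayleigh/2.
peak = airy(np.array([0.0])) + airy(np.array([rayleigh]))  # intensity at a source
dip = 2 * airy(np.array([rayleigh / 2]))                   # intensity at midpoint

print(f"peak = {peak[0]:.3f}, dip = {dip[0]:.3f}")
print(f"contrast dip = {100 * (1 - dip[0] / peak[0]):.1f}%")  # ~26%
```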

      • by Arkh89 ( 2870391 )

No, there is a limit to resolution: your PSF (Point Spread Function, the image of a quasi-perfect point source) is not a point but a pattern of finite size. This means the OTF/MTF pair has limited support, and your system CANNOT deliver any spatial frequencies beyond a certain point (1.0/(Lambda F), where F is the F-number of your system).

        • by dak664 ( 1992350 )

          That's the infinite plane wave approximation for lattices of infinite extent. Scattered spherical waves from finite objects will result in some energy passing through the aperture for every spatial frequency. Although it could be difficult to sort out which frequencies are contributing (aliasing). Analysis of the through focal series can do that, also changing the convergence of incident illumination.

          But if the source is known to be two points, accurate measurement of the spacing between the resulting PSF

      • You're confusing two different problems, namely locating a point source on a dark background, and wide-field imaging. The former can be improved beyond the diffraction limit, while the latter can't.

        • by amaurea ( 2900163 ) on Monday April 07, 2014 @04:44PM (#46688205) Homepage

The effect of a telescope's point spread function [wikipedia.org] is to convolve [wikipedia.org] the image. A raw image f(x) is turned into f'(x) = int dy f(x-y) g(y), where g(y) is the point spread function. By the convolution theorem, a convolution is simply a product in Fourier space [wikipedia.org], so F'(k) = F(k)*G(k), where uppercase functions are Fourier-transformed versions of lowercase ones, and k is the wavenumber [wikipedia.org]. From this you can see that recovering the raw, unblurred image (i.e. overcoming the diffraction limit) is just a question of computing F(k) = F'(k)/G(k), i.e. dividing by the point spread function in Fourier space, or deconvolving in normal coordinates.

What limits our ability to do so is the presence of noise in the final image. So a more realistic data model is F'(k) = F(k)*G(k) + N(k), where N is the noise, and when we now try to remove the point spread function, we get F_estimate(k) = F'(k)/G(k) = F(k) + N(k)/G(k). So we get back our unblurred original image, but with an extra noise term. And since G(k) usually falls exponentially with k [wikipedia.org] (meaning that high wavenumbers = small scales in the image are progressively blurred), N(k)/G(k) will grow exponentially with k, making the noise ever greater as one tries to recover smaller and smaller scales. This puts a limit on how small a scale one can recover. But as you can see, it is not given just by G(k), i.e. not just by the diffraction limit. It is given by a combination of the telescope's noise level and the point spread function. In the absence of noise we can fully recover all the small scales. And even with noise one can push down a bit below the diffraction limit with enough data. But due to that exponential rise of the noise with wavenumber, it's an extreme case of diminishing returns. It is much cheaper to make the telescope bigger.
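
The whole argument above fits in a few lines of numpy. The sketch below blurs a toy 1-D "image" with a Gaussian transfer function, adds noise, and compares naive division by G(k) against a Wiener-style regularized version. The Wiener filter is an addition here, not something from the post, but it's the standard way to keep the N(k)/G(k) term from exploding:

```python
# Sketch of the deconvolution argument: blur, add noise, then try to
# undo the blur in Fourier space.
import numpy as np

rng = np.random.default_rng(0)
n = 256
k = np.fft.fftfreq(n)                      # wavenumbers

f = rng.standard_normal(n)                 # a stand-in "raw image" f(x)
G = np.exp(-(k / 0.05) ** 2)               # Gaussian transfer function G(k)

F_blurred = np.fft.fft(f) * G              # F'(k) = F(k) * G(k)
noise = 1e-3 * np.fft.fft(rng.standard_normal(n))
F_observed = F_blurred + noise             # F'(k) = F(k) G(k) + N(k)

# Naive deconvolution: divide by G(k). The noise term N(k)/G(k)
# explodes wherever G(k) is tiny (high k = small scales).
F_naive = F_observed / G

# Wiener-style regularization: damp frequencies where G is small
# relative to the noise level, trading resolution for stability.
eps = 1e-6
F_wiener = F_observed * np.conj(G) / (np.abs(G) ** 2 + eps)

print("naive  rms error:", np.std(np.fft.ifft(F_naive).real - f))
print("wiener rms error:", np.std(np.fft.ifft(F_wiener).real - f))
```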

          • I didn't make this explicit, but nowhere does one need to assume that one is looking at a point source (though that helps). I use this for looking at the cosmic microwave background, which is as wide-field as it gets.

            • Then that particular frequency cannot be recovered. But this usually only happens at zero crossings, which make up a vanishingly small fraction of the frequencies involved. Of course when noise is also present, then it's enough for G to be very small rather than exactly zero, and that would kill all the higher frequencies.

              • by Arkh89 ( 2870391 )

Look closely at the first graph of the MTF in my previous link (on the EN page, top of the page). It represents your G function for a perfect system (no aberration, only the diffraction introduced by the finite size of the aperture). What you can see is that after the 1.0/(Lambda F) cut-off (500 mm^-1 in that particular graph), everything is equal to 0; the optical system is not transmitting any spatial frequencies higher than 500 mm^-1.

                This is not just "a couple of points", this is a hard physical
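
For concreteness, the cut-off quoted above follows from nothing more than nu_c = 1/(lambda*F). The wavelength and f-number below are assumptions chosen to reproduce the 500 mm^-1 figure; they are not stated in the thread:

```python
# Incoherent diffraction cut-off for a perfect (aberration-free) system:
# nu_c = 1 / (lambda * F). Parameter values are illustrative.
wavelength_mm = 0.5e-3   # 0.5 micrometers (green light), expressed in mm
f_number = 4.0           # hypothetical F-number

nu_cutoff = 1.0 / (wavelength_mm * f_number)
print(f"MTF cut-off: {nu_cutoff:.0f} cycles/mm")   # -> 500 cycles/mm
```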

I think I found the figure you're referring to. It's this one [wikimedia.org], right? I don't think that figure lets you distinguish small from zero due to its very poor dynamic range. A logarithmic second axis would be much more informative.

Here is an example of an MTF [folk.uio.no] from an experiment I've worked on. It looks quite similar to the figure on the Wikipedia page, and by eye one might think that it's reached zero by 18000 or so. But consider the logarithmic version of the same graph [folk.uio.no]. As you can see, the graph had only

                  • by Arkh89 ( 2870391 )

OK, let's go to the maths then: the OTF gives you how spatial frequencies are transferred through your optical system. You understand that it is equivalent to an auto-correlation of the aperture, right?
Well, if the aperture is circular (a disc function, for a perfect system), the auto-correlation is equal to the area of the intersection between two shifted discs (of equal radius). This shift represents the spatial frequency: at 0 spatial frequency (the DC component), the discs are aligning perfectly and y
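
The overlap-area argument has a well-known closed form for a perfect circular aperture: MTF(nu) = (2/pi)*(arccos(x) - x*sqrt(1 - x^2)) with x = nu/nu_c, which is exactly zero at and beyond the cut-off. A quick sketch in numpy, reusing the 500 cycles/mm cut-off quoted earlier in the thread:

```python
# Analytic MTF of a perfect circular aperture: the normalized area of
# overlap of two shifted discs, as described above.
import numpy as np

def circular_mtf(nu, nu_c):
    """Diffraction-limited MTF; zero at and beyond the cut-off nu_c."""
    x = np.clip(np.asarray(nu, dtype=float) / nu_c, 0.0, 1.0)
    return (2.0 / np.pi) * (np.arccos(x) - x * np.sqrt(1.0 - x ** 2))

nu_c = 500.0                                  # cycles/mm, from the thread
for nu in (0, 125, 250, 375, 500, 600):
    print(f"{nu:4d} cycles/mm -> MTF = {circular_mtf(nu, nu_c):.3f}")
```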

The thing that's the self-convolution of the pupil function is the point spread function (g(theta) in my example from a few posts back). For the case of an ideal, top-hat shaped pupil function, the point spread function will fall to zero, and stay zero, at theta = 2*theta_pupil. But the optical transfer function (G(k) in my post) is the Fourier transform of the point spread function. And the Fourier transform of a function with limited support has unlimited support in Fourier space [wikipedia.org]. Hence, while the optical

                    • by Arkh89 ( 2870391 )

                      The thing that's the self-convolution of the pupil function is the point spread function (g(theta) in my example from a few posts back).

                      Wrong. The Point Spread Function (in the case of incoherent imaging) is the magnitude squared of the Fourier Transform of the pupil function.

You can read: Born & Wolf, Principles of Optics, Chapter 8, Section 5: "Fraunhofer Diffraction at apertures of various forms" (see here, starting p436 [google.com]).

                      For the case of an ideal, top-hat shaped pupil function, the point spread function will fall to zero, and stay zero, at theta = 2*theta_pupil.

                      Wrong. In that case, the PSF will be similar to the Airy pattern [wikipedia.org] (circular aperture) or a Cardinal Sine function (square/rectangular aperture). Both have unlimited support (see previous book reference).

                      And the Fourier transform of a function with limited support has unlimited support in Fourier space [wikipedia.org].

                      Right. And t
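
The relationships being argued over here are easy to verify numerically. In the 1-D toy model below (a construction for illustration, not from either poster), the incoherent PSF is |FT(pupil)|^2 and the OTF comes out as the autocorrelation of the pupil: a triangle with a hard cut-off, even though the PSF itself (a sinc^2) has unlimited support:

```python
# Numerical check of:  PSF = |FT(pupil)|^2  (incoherent imaging)
#                      OTF = FT(PSF) = autocorrelation of the pupil
import numpy as np

n = 1024
pupil = np.zeros(n)
pupil[:32] = 1.0                         # top-hat pupil, width 32 samples

psf = np.abs(np.fft.fft(pupil)) ** 2     # PSF = |FT(pupil)|^2 (a sinc^2)
otf = np.fft.ifft(psf).real              # OTF = inverse FT of the PSF
otf /= otf[0]                            # normalize the DC component to 1

# The autocorrelation of a width-32 top-hat is a triangle that hits
# zero at lag 32 and stays zero beyond it: a hard frequency cut-off.
print("OTF at lag 16:", round(otf[16], 3))    # ~0.5 (triangle midpoint)
print("OTF at lag 32:", abs(otf[32]) < 1e-9)  # True: hard cut-off
print("OTF at lag 64:", abs(otf[64]) < 1e-9)  # True: stays zero
```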

                    • Thanks for the book reference (though the missing pages were very annoying). You have convinced me. I guess it's good my job isn't designing telescope optics.

To answer your question about the telescope: its f-number is 2.5 at Gregorian focus, and we observe at 148 GHz. The optical transfer function plots I showed had multipole number on the horizontal axis.

                    • by Arkh89 ( 2870391 )

                      No problem. I know how difficult it is to "bridge" the multiple scientific domains required for today's projects.

      • by Raenex ( 947668 )

        The fundamental limit to resolution is signal-to-nose.

        Otherwise known as the Pinocchio ratio.

I'm not an expert on optics, but from what I understand aperture size determines two things: the number of photons collected, and the angular resolution. And provided you've got plenty of photons to work with, it's really only the second that matters. The implication is then that if you want more detail you need a larger aperture. That's fine for handheld telescopes, but once your aperture starts being multiple meters across you start introducing serious practical (including financial) difficulties, both
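
The angular-resolution half of that is just the Rayleigh criterion, theta ~ 1.22*lambda/D. A quick back-of-the-envelope in Python (the aperture and wavelength below are illustrative, not from the comment):

```python
# Rayleigh criterion for the angular resolution of a circular aperture.
import math

wavelength = 550e-9   # visible light, meters
diameter = 2.4        # aperture diameter in meters (Hubble-sized, for scale)

theta_rad = 1.22 * wavelength / diameter
theta_arcsec = math.degrees(theta_rad) * 3600
print(f"diffraction limit: {theta_arcsec:.3f} arcsec")  # ~0.058 arcsec
```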

  • Just wait until we deal with the giant starfish, sentient lobsters and mirror girls.

I remember reading somewhere (and I've spent nearly two minutes searching on Google, so you know it's somewhat obscure) that some people were concerned that our images of distant galaxies were TOO CLEAR. Their reasoning was that any given photon should take a (likely highly biased) random walk through quantum foam, meaning that the observed clarity actually helps disprove quantum foam theories (some information here: http://www.scientificamerican.com/article/is-space-digital/).

    I realize I'm light on details, and that's du

  • by interkin3tic ( 1469267 ) on Monday April 07, 2014 @10:27AM (#46684125)
I must have been actually working last month, because I haven't heard about the entanglement-enhanced microscope. Does it do fluorescence microscopy? There was no mention of fluorescence in the article. It sounds like this is just better DIC imaging, which is of limited use in biology. Electron microscopy has (literally) been around since before the internet and has better resolution than anything you're going to get with light. Light microscopy seems to be primarily important today for basic stuff like whole-tissue imaging (generally not requiring the resolution described here) or fluorescence microscopy, which it doesn't sound like this microscope can do. Fluorescence is useful because with most applications, you're trying to visualize a small thing in a much much much bigger volume of stuff. Like you're trying to see a protein within a cell within a tissue. Looking at cells with light for a small thing doesn't tell you much; you just see a blur. When the small thing is basically emitting its own light, as happens sorta with fluorescence, you can see it.

    There are also already fluorescence based microscopy techniques which surpass the diffraction limit. [wikipedia.org]

    I'm not going to say it doesn't sound useful, since most of the time, you only realize how useful a thing is once you already have it. I'll just say that if the microscope mentioned here doesn't do fluorescence, I can't think of anything one would use it for that they wouldn't be able to do better with EM or fluorescence microscopy.
There is a big push now to create ultra-cheap microscopes for the developing world. This might be related to that. If you could get a powerful microscope that was the size of a postage stamp, that might help a lot of people. With this technique you could not only improve resolution, you could instead keep the same resolution and shrink the microscope by 25%.

    • by czert ( 3156611 )
      Well, for one thing, electron microscopy is a destructive process, so if you actually want your sample back, you will likely use light microscopy. Also, electron microscopy cannot capture a moving thing (a living cell, for example), as it takes a lot of time and goes "pixel by pixel", instead of capturing a whole frame at once. I'm sure there are a lot of other reasons why light microscopy could be far more practical.
  • ...entangled photons, use one to illuminate the target and...

    Taken at face value, that would be sending photons to nebulas millions of light-years away, and then waiting another million+ years for them to bounce off the target and arrive back here.

At least the cockroaches will have these great space-themed calendars.

    • ...entangled photons, use one to illuminate the target and...

      Taken at face value, that would be sending photons to nebulas millions of light-years away, and then waiting another million+ years for them to bounce off the target and arrive back here.

At least the cockroaches will have these great space-themed calendars.

      Maybe if there was a way to bounce photons off of the second half of the summary you'd have been able to read it.

  • ..... it was called Tom Swift and His Megascope Space Projector --- never thought it would actually work, though?
  • I promise that How to Build a Quantum Telescope will go on my bookshelf right next to How to Stuff a Wild Bikini 'cause I'm soooo serious......
I thought most modern telescopes used mirrors instead of lenses to avoid diffraction. Well, maybe there are still some lenses left in the system, or maybe we can switch back to lenses if this works. Either way, the research is interesting.
    • No, modern telescopes use mirrors instead of lenses for two reasons:

      1) Once a lens gets more than about a meter across, it starts deforming measurably under its own weight (and the direction and amount of deformation changes as you shift the telescope). A mirror can be supported across its entire width and does not have this problem.

      2) A lens experiences chromatic aberration, causing different frequencies of light to focus at different points. You can reduce (but not eliminate) this by using achromatic do
