Space Science

Sharpest Images With "Lucky" Telescope

igny writes "Astronomers from the University of Cambridge and Caltech have developed a new camera that gives much more detailed pictures of stars and nebulae than even the Hubble Space Telescope, and does it from the ground. A new technique called 'Lucky imaging' has been used to diminish atmospheric noise in the visible range, creating the most detailed pictures of the sky in history."
  • by kebes ( 861706 ) on Monday September 03, 2007 @09:23PM (#20458409) Journal
    One of the main limitations to ground-based optical telescopes (and one of the reasons that Hubble gets such amazing images) is that the atmosphere generates considerable distortion. Random fluctuations in the atmosphere cause images to be blurry (and cause stars to twinkle, of course). The technique they present appears to involve taking images at very high speed. They developed an algorithm that looks through the images and identifies the ones that happen not to be blurry (hence "lucky"). By combining all the least blurry images (taken when the atmosphere just happened not to be introducing distortion), they can obtain clear images using ground-based telescopes (which are bigger than Hubble, obviously). I imagine the algorithm they've implemented tries to use sub-sections of images that are clear, to get as much data as possible (a rough sketch of this select-and-stack idea appears below).

    Overall, a fairly clever technique. I wonder how this compares to adaptive optics [wikipedia.org], which is another solution to this problem. In adaptive optics, a guide laser beam is used to illuminate the atmosphere above the telescope. The measured distortion of the laser beam is used to distort the imaging mirror in the telescope (usually the mirror is segmented into a bunch of small independent sub-mirrors). The end result is that adaptive optics can essentially counter-act the atmospheric distortion, delivering crisp images from ground telescopes.

    I would guess that adaptive optics produces better images (partly because it "keeps" all incident light, by refocusing it properly, rather than letting a large percentage of image acquisitions be "blurry" and eventually thrown away), but adaptive optics are no doubt expensive. The technique presented in TFA seems simple enough that it could be added to just about any telescope, increasing image quality at a sacrifice in acquisition time.
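    A minimal sketch of that select-and-stack idea, assuming the short exposures are already in a NumPy array; the quality metric and the 10% cut are illustrative choices, not the authors' actual pipeline:

    ```python
    # Minimal lucky-imaging sketch: score each short exposure for sharpness,
    # keep the best few percent, and average them. `frames` is assumed to be
    # a NumPy array of shape (n_frames, height, width).
    import numpy as np

    def sharpness(frame):
        # Crude proxy: fraction of the total flux landing in the brightest
        # pixel. Less-blurred frames concentrate more light in fewer pixels.
        return frame.max() / frame.sum()

    def lucky_stack(frames, keep_fraction=0.10):
        scores = np.array([sharpness(f) for f in frames])
        n_keep = max(1, int(len(frames) * keep_fraction))
        best = np.argsort(scores)[-n_keep:]   # indices of the sharpest frames
        return frames[best].mean(axis=0)
    ```

    A real pipeline also re-registers each selected frame before averaging (see the algorithm quoted from the paper further down the thread).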
  • by Erris ( 531066 ) on Monday September 03, 2007 @09:25PM (#20458431) Homepage Journal

    DIY [cam.ac.uk].

  • Re:Exposure Time? (Score:5, Informative)

    by gardyloo ( 512791 ) on Monday September 03, 2007 @09:27PM (#20458449)
    Add up 1000 of those frames, and you have a 50 second exposure.
  • Re:Yawn (Score:4, Informative)

    by QuickFox ( 311231 ) on Monday September 03, 2007 @09:30PM (#20458475)
    According to the second article on that page, it's the other way around:

    Images from ground-based telescopes are invariably blurred out by the atmosphere. Astronomers have tried to develop techniques to correct the blurring called adaptive optics but so far they only work successfully in the infrared where the smearing is greatly reduced.
  • Dr. Mackay? (Score:3, Informative)

    by comrade k ( 787383 ) <comradek@@@gmail...com> on Monday September 03, 2007 @09:30PM (#20458481)

    Dr Craig Mackay is happy to be contacted directly for interviews
    Man, the whole Stargate franchise has been really going down the drain since they cancelled SG-1.
  • Re:But surely... (Score:5, Informative)

    by drudd ( 43032 ) on Monday September 03, 2007 @09:33PM (#20458507)
    As the previous poster noted, there isn't any atmosphere and thus the technique isn't useful for HST.

    Additionally, while they don't mention details in the article, I presume they have a specially designed camera. This is an old technique, but it's generally limited to very bright objects due to something called readout noise. Basically, all CCDs produce an additional signal due to the process of reading out the data. This limits the effectiveness of repeated short observations to sources which are much brighter than this noise, since the accumulated read noise grows with the number of readouts (its variance adds linearly with each read). A back-of-envelope illustration follows below.

    To image distant galaxies you typically have to take exposures of one to several hours, and thus this technique isn't useful.

    Doug
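    A back-of-envelope illustration of that read-noise penalty; the count rates and the 10 e- read noise are assumed "typical conventional CCD" numbers, not figures from TFA:

    ```python
    # Compare one long exposure with many short ones of the same total length.
    # Read noise is added once per readout, so its variance grows with the
    # number of frames and swamps a faint source.
    import math

    signal_rate = 2.0     # photo-electrons per second from a faint source (assumed)
    sky_rate    = 1.0     # sky background, e-/s (assumed)
    read_noise  = 10.0    # e- RMS per readout (assumed)
    total_time  = 3600.0  # one hour of total integration

    def snr(n_exposures):
        signal = signal_rate * total_time
        noise = math.sqrt(signal + sky_rate * total_time
                          + n_exposures * read_noise ** 2)
        return signal / noise

    print(snr(1))       # one 1-hour exposure: SNR ~ 69
    print(snr(72000))   # 20 fps for an hour: SNR ~ 2.7, read noise dominates
    ```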
  • by szyzyg ( 7313 ) on Monday September 03, 2007 @09:36PM (#20458549)
    There are several pieces of software that do some parts of this - Registax is what I use, but amateurs usually only have enough aperture to make this work for bright objects like planets. You can take a good quality webcam (the top-of-the-line Philips webcams are the best bang for your buck), record some video of a planet through a telescope and then pick out the least distorted images before adding them together to create the final image. Now, the trick is getting the best measurement of which images are undistorted, and getting enough light in each frame while keeping the exposure time short enough to beat the atmosphere. A rough frame-scoring sketch follows below.

    Look at the planetary images here [imeem.com] for my attempts at this technique.
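    A rough sketch of that "pick the least distorted frames" step for a webcam AVI, using variance of the Laplacian as a simple sharpness proxy; this is not Registax's actual algorithm, and it assumes OpenCV is available:

    ```python
    # Score every frame of a planetary AVI for sharpness. Blurred frames have
    # weak high-frequency content, so their Laplacian variance is low.
    import cv2

    def score_frames(avi_path):
        cap = cv2.VideoCapture(avi_path)
        scored = []
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            scored.append((cv2.Laplacian(gray, cv2.CV_64F).var(), frame))
        cap.release()
        return scored   # sort by score, keep the best, then stack as above
    ```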
  • by Anonymous Coward on Monday September 03, 2007 @09:40PM (#20458579)
    TFA mentions that they can achieve images better than Hubble. The sample image they show [cam.ac.uk], of the Cat's Eye Nebula, isn't as sharp as the Hubble image of the same object [esa.int].

    Probably they can push their technique harder than this initial image suggests (it was mainly comparing the "lucky" image with a conventional, blurry, ground-based image)... But I just thought it would be good to show Hubble's pictures alongside.
  • Re:But surely... (Score:5, Informative)

    by hazem ( 472289 ) on Monday September 03, 2007 @09:57PM (#20458685) Journal
    Additionally, while they don't mention details in the article, I presume they have a specially designed camera.

    They are using a new kind of CCD that somehow lowers the noise floor. Details are at:
    http://www.ast.cam.ac.uk/~optics/Lucky_Web_Site/LI_Why%20Now.htm [cam.ac.uk]

    In fact this site (same basic place) is much more informative than the press release and answers a lot of questions:
    http://www.ast.cam.ac.uk/~optics/Lucky_Web_Site/index.htm [cam.ac.uk]
  • by Phanatic1a ( 413374 ) on Monday September 03, 2007 @10:07PM (#20458769)
    ObRTFA: RTFA. It's not used *instead* of adaptive optics, it's used together with adaptive optics.

    The camera works by recording the images produced by an adaptive optics front-end at high speed (20 frames per second or more). Software then checks each one to pick the sharpest ones. Many are still quite significantly smeared but a good percentage are unaffected. These are combined to produce the image that astronomers want. We call the technique "Lucky Imaging" because it depends on the chance fluctuations in the atmosphere sorting themselves out.
  • link (Score:1, Informative)

    by Anonymous Coward on Monday September 03, 2007 @10:15PM (#20458837)
    Hubble: Cat's Eye Nebula [hubblesite.org]
  • by [rvr] ( 93881 ) on Monday September 03, 2007 @10:39PM (#20459019) Homepage Journal

    This is indeed no news to amateur astronomers. This technique has been used extensively by planetary imagers in recent years to take amazing photos of Jupiter, Mars and Saturn. The basic tools are a good webcam to take AVI files and Registax to process the frames. Take a look at Damian Peach's best images [damianpeach.com].

    As for the pros, there is even a Wikipedia article about it: Lucky imaging [wikipedia.org]: "Lucky imaging was first used in the middle 20th century, and became popular for imaging planets in the 1950s and 1960s (using cine cameras or image intensifiers). The first numerical calculation of the probability of obtaining lucky exposures was an article by David L. Fried in 1978."

    In order to throw away many frames and retain only those of high quality, you'd better have a bright object or a big telescope. In this case, the astronomers were able to image a faint nebula.
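    For reference, the Fried (1978) result mentioned above is usually quoted as the probability of getting an essentially diffraction-limited short exposure, P ≈ 5.6 exp[-0.1557 (D/r0)^2] for D/r0 of roughly 3.5 or more, where D is the aperture and r0 the atmospheric coherence length. A quick sketch of how steeply that falls with aperture (the r0 value is an assumed good-seeing figure, not from TFA):

    ```python
    # Fried's lucky-exposure probability vs. telescope aperture.
    import math

    def lucky_probability(D, r0):
        # Valid for D/r0 >~ 3.5; D and r0 in the same units (metres here).
        return 5.6 * math.exp(-0.1557 * (D / r0) ** 2)

    r0 = 0.40  # metres, assumed good visible-band seeing
    for D in (2.5, 5.0):
        print(f"D = {D} m: P = {lucky_probability(D, r0):.1e}")
    # The drop of many orders of magnitude between 2.5 m and 5 m is why the
    # usable fraction collapses on big telescopes (see Craig Mackay's e-mail
    # further down the thread). Practical pipelines use looser quality cuts
    # than Fried's strict criterion, so real-world fractions are higher.
    ```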

  • by ackthpt ( 218170 ) on Monday September 03, 2007 @10:55PM (#20459139) Homepage Journal

    using 'Blue Peter' technology

    Blue Peter [bbc.co.uk] is a BBC children's show. Blue Peter Technology is effectively something so simple a child could do it.

  • Re:Yawn (Score:4, Informative)

    by Cecil ( 37810 ) on Monday September 03, 2007 @11:18PM (#20459307) Homepage
    Actually, near infrared is not blocked by water vapor; in fact, water vapor is extremely transparent to near-infrared light, even more so than to visible light. That's why satellites can use infrared to see through clouds, and also why adaptive optics work so well in the near-infrared range.

    Far infrared is a different story, and you're absolutely correct there.
  • by edremy ( 36408 ) on Monday September 03, 2007 @11:22PM (#20459339) Journal
    As many have pointed out, there are a whole pile of applications that do the same thing for amateur telescopes. I've taken my Dad's 40-year-old 6" Dynascope, fixed up the motor drive, bought a $60 webcam (Philips SPC900), adapter and UV filter and gotten some quite nice photos of the Sun, the Moon, Jupiter and Saturn by capturing a few thousand frames and running them through Registax. (I'm working on Mars and Uranus- a whole lot harder with a small scope from a suburban backyard.)

    I'm curious though about how they deal with some of the "features" you get to see with this technique. It's *very* easy to stack a few hundred images, run Registax's sharpening filter and get some interesting pictures of stuff that doesn't really exist. I'm not sure I really trust the fine detail in my photos- unless I see it in another taken a few hours later it may well not be real.

  • Re:Yawn (Score:4, Informative)

    by 0123456789 ( 467085 ) on Tuesday September 04, 2007 @01:12AM (#20460285)
    Adaptive optics works so well in the IR due to the wavelength dependence of the Fried parameter, r0, which characterises Kolmogorov turbulence. The blurring effect of the turbulence is weaker at longer wavelengths (r0 is larger), hence it's easier to correct.


    See here [noao.edu], for example, for more information.


    There are wavelength ranges in the NIR where the atmosphere is indeed transparent (J, H and K bands, for example); but the atmosphere is opaque at most NIR wavelengths (and, even at those IR wavelengths where the atmosphere is transparent, the transmittance is lower than at visible or radio wavelengths). See here [caltech.edu] for more info.
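    The wavelength dependence mentioned above is usually written r0 ∝ λ^(6/5); a quick sketch of what that does to the coherence length (the visible-band r0 below is an assumed typical value, not from TFA):

    ```python
    # Fried parameter vs. wavelength for Kolmogorov turbulence: r0 ~ lambda^(6/5).
    # A larger r0 means larger coherent patches of atmosphere, which is why
    # adaptive-optics correction is so much easier in the near infrared.
    r0_500nm = 0.15  # metres at 500 nm (assumed typical site)

    def fried_parameter(wavelength_nm):
        return r0_500nm * (wavelength_nm / 500.0) ** (6.0 / 5.0)

    for band, wl in (("V, 550 nm", 550), ("J, 1250 nm", 1250), ("K, 2200 nm", 2200)):
        print(f"{band}: r0 = {fried_parameter(wl):.2f} m")
    ```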

  • by edunbar93 ( 141167 ) on Tuesday September 04, 2007 @02:30AM (#20460731)
    ObRTFA: RTFA. It's not used *instead* of adaptive optics, it's used together with adaptive optics.

    No, they propose that it be used together with adaptive optics. The research that was done to produce this press release was actually done at the Palomar Observatory, which was completed in 1947 [caltech.edu] and most certainly does not feature adaptive optics.

    From the article:

    The technique could now be used to improve much larger telescopes such as those at the European Southern Observatory in Chile, or the Keck telescopes on the top of Mauna Kea in Hawaii. This has the potential to produce even sharper images.

    (Emphasis mine)
  • by XNormal ( 8617 ) on Tuesday September 04, 2007 @02:34AM (#20460775) Homepage
    Even if this technique can eventually produce better pictures at lower cost, it is still limited to wavelengths that can penetrate the atmosphere. Some of the most exciting recent discoveries are in the infrared (Spitzer) and X-ray (Chandra). The next big telescope (the James Webb Space Telescope) is also for the infrared.
  • Re:But surely... (Score:3, Informative)

    by andersa ( 687550 ) on Tuesday September 04, 2007 @05:03AM (#20461537)
    To sum up, the problem is readout noise. The faster you read out the CCD, the more noise you get. When you image a faint object the readout noise exceeds the signal level. The reason amateur astronomers can use this technique anyway is because they are imaging bright objects (like planets), so the signal is easily discernible from the readout noise.

    Now there is a new type of CCD (an electron-multiplying CCD) with a built-in gain register that amplifies the charge before it reaches the readout amplifier. You can select a multiplication gain that lifts even a faint signal well above the readout noise, so the read noise becomes a negligible fraction of the value you actually measure. A sketch of the noise model follows below.
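    A sketch of the usual EMCCD noise model behind that argument (the gain, read noise, and excess-noise factor below are assumed typical values, not taken from TFA or the linked page):

    ```python
    # With the multiplication gain G applied before the output amplifier, the
    # read noise referred to the input is divided by G; the price is an
    # excess-noise factor of roughly sqrt(2) on the photon shot noise.
    import math

    def emccd_snr(signal_e, read_noise_e=10.0, gain=300.0, excess=math.sqrt(2)):
        effective_read_noise = read_noise_e / gain        # ~0.03 e- here
        shot_noise = excess * math.sqrt(signal_e)
        return signal_e / math.sqrt(shot_noise ** 2 + effective_read_noise ** 2)

    print(emccd_snr(5.0))                    # 5 e- signal, EM gain on: SNR ~ 1.6
    print(5.0 / math.sqrt(5.0 + 10.0 ** 2))  # same signal, conventional readout: SNR ~ 0.5
    ```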
  • by Anonymous Coward on Tuesday September 04, 2007 @05:22AM (#20461615)

    Fixing your knowledge:
    - The Earth rotates in 24h
    - Hubble orbits in 90 min
    so Hubble cannot peer "hours at a time", but ground telescopes can.
    Hubble can actually produce million-second-long exposures [hubblesite.org]. That's 400 orbits, but stacking 21-minute exposures.
  • Re:Lucky Imaging (Score:5, Informative)

    by theckhd ( 953212 ) on Tuesday September 04, 2007 @08:10AM (#20462627)
    From this paper [arxiv.org], which is linked to in the Wikipedia article:

    The frame selection algorithm, implemented (currently) as a post-processing step, is summarised below:
    1. A Point Spread Function (PSF) guide star is selected as a reference to the turbulence-induced blurring of each frame.
    2. The guide star image in each frame is sinc-resampled by a factor of 4 to give a sub-pixel estimate of the position of the brightest speckle.
    3. A quality factor (currently the fraction of light concentrated in the brightest pixel of the PSF) is calculated for each frame.
    4. A fraction of the frames are then selected according to their quality factors. The fraction is chosen to optimise the tradeoff between the resolution and the target signal-to-noise ratio required.
    5. The selected frames are shifted-and-added to align their brightest speckle positions.
    (bolding mine)

    So it looks like each frame is shifted as a whole rather than each individual pixel. Which makes sense from the description of the process, since the theory is that the images you're picking in the Lucky Imaging technique are high-quality images with a random offset due to the atmosphere.
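    A very rough sketch of those five steps as a post-processing pass; `frames` is assumed to be an (n, height, width) NumPy array and `guide_box` a pair of slices selecting a bright guide star, and the shift is rounded to whole pixels here for brevity (the real point of the 4x resampling is to shift at sub-pixel accuracy):

    ```python
    # Steps 1-5 of the quoted frame-selection algorithm, heavily simplified.
    import numpy as np

    def upsample_fft(img, factor=4):
        # Sinc (Fourier) resampling: zero-pad the centred spectrum by `factor`.
        h, w = img.shape
        F = np.fft.fftshift(np.fft.fft2(img))
        P = np.zeros((h * factor, w * factor), dtype=complex)
        P[(h * factor - h) // 2:(h * factor + h) // 2,
          (w * factor - w) // 2:(w * factor + w) // 2] = F
        return np.real(np.fft.ifft2(np.fft.ifftshift(P))) * factor ** 2

    def lucky_pipeline(frames, guide_box, keep_fraction=0.1, factor=4):
        quality, peaks = [], []
        for f in frames:
            star = upsample_fft(f[guide_box], factor)                  # steps 1-2
            peaks.append(np.unravel_index(np.argmax(star), star.shape))
            quality.append(star.max() / star.sum())                    # step 3
        n_keep = max(1, int(len(frames) * keep_fraction))
        best = np.argsort(quality)[-n_keep:]                           # step 4
        ref = np.array(peaks[best[-1]], dtype=float)
        out = np.zeros(frames[0].shape, dtype=float)
        for i in best:                                                 # step 5
            dy, dx = (ref - np.array(peaks[i], dtype=float)) / factor
            out += np.roll(np.roll(frames[i], int(round(dy)), axis=0),
                           int(round(dx)), axis=1)
        return out / len(best)
    ```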
  • by theckhd ( 953212 ) on Tuesday September 04, 2007 @08:25AM (#20462757)
    I think your suspicions are probably correct.

    Lucky Imaging relies on the fact that every so often, a really high-quality image makes it through the atmosphere almost unperturbed (based on the Kolmogorov model [cam.ac.uk] of turbulence). While I don't know whether the same model can be applied to cosmic gas clouds, there may be another model that accurately describes the phase distortions those clouds impress upon a wavefront.

    To achieve this one must take many very short-exposure (compared to the time-scale of atmospheric turbulence, or gas-cloud turbulence in the case we're considering) images of the source. However, distant (or dim) objects often require reasonably long exposure times in order to collect a large enough amount of light to be able to see the image. The problem with this technique may simply be that the exposure time necessary for the Lucky Image algorithm to work is too short to actually collect enough light to create a good image in the first place.
  • by tfield98 ( 781874 ) on Tuesday September 04, 2007 @08:26AM (#20462767)
    I emailed the principal researcher on this project, asking him what was novel about his approach, since amateurs have been "stacking" images for years. Below is his response:

    From: Craig Mackay [mailto:cdm@ast.cam.ac.uk]
    Sent: Tuesday, September 04, 2007 5:20 AM
    Subject: Re: What's new with Lucky?

    Dear Tom

    Thank you for your message. What is new about this (and gets rather lost with the media coverage) is being able to use lucky imaging on a much larger telescope. With a 2.5 meter telescope we are able to use typically 10% of the images. With a five meter telescope and four times the area we would be able to use only 0.01% of the images, a completely useless fraction! For the first time however we have managed to do it by using an adaptive optic system in front of our lucky imaging camera. That is what is new and that is what has made all the difference. The AO system gets rid of the larger scale low order turbulent distortions leaving lucky imaging to work on the higher frequency ones which it does rather well. Hence the new image quality which is twice as good in terms of resolution as Hubble, something that has never been achieved before either from space or from ground. If you look on the lucky website you will find a lot of information about amateur lucky imaging for which I have a very high regard.

    Best wishes
    Craig Mackay
  • by Dr. Zowie ( 109983 ) <slashdotNO@SPAMdeforest.org> on Tuesday September 04, 2007 @11:33AM (#20464745)
    It appears that they simply picked a bad demo image. The Caltech site has a much more compelling sample at http://www.astro.caltech.edu/~nlaw/lamp_pics/ [caltech.edu].
  • Re:Lucky Imaging (Score:3, Informative)

    by kindbud ( 90044 ) on Tuesday September 04, 2007 @12:53PM (#20465893) Homepage
    The amateur stacking program Registax seems to be more sophisticated than this. It allows multiple alignment stars or points and shifts the corresponding subregions of the image as needed. Otherwise, the method for selecting the images is very similar.

    http://www.astronomie.be/registax/html/multi_operation_1_.html [astronomie.be]
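    A sketch of that multi-point idea: estimate a shift per subregion by cross-correlating each alignment box against the same box in a reference frame; this is just the generic technique, not Registax's actual implementation:

    ```python
    # Integer-pixel shift that best aligns `tile` to `ref_tile`, found from
    # the peak of the FFT-based cross-correlation. Apply per alignment box,
    # then shift each box (e.g. with np.roll) before stacking.
    import numpy as np

    def tile_shift(ref_tile, tile):
        corr = np.fft.ifft2(np.fft.fft2(ref_tile) * np.conj(np.fft.fft2(tile)))
        dy, dx = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
        h, w = tile.shape
        # Wrap shifts into the range [-N/2, N/2)
        return (dy - h if dy > h // 2 else dy,
                dx - w if dx > w // 2 else dx)
    ```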

  • by Rei ( 128717 ) on Tuesday September 04, 2007 @07:01PM (#20471553) Homepage
    You laugh, but there's a legitimate technique (averted vision) for seeing fainter objects through a telescope by looking slightly to the side of them, since your peripheral vision is more sensitive to faint light. It can be tricky to pull off, though, because you instinctively want to look directly at what you're trying to perceive.

    Not quite squinting, but still an eye trick ;)
