Space Science

Sharpest Images With "Lucky" Telescope

igny writes "Astronomers from the University of Cambridge and Caltech have developed a new camera that gives much more detailed pictures of stars and nebulae than even the Hubble Space Telescope, and does it from the ground. A new technique called 'Lucky imaging' has been used to diminish atmospheric noise in the visible range, creating the most detailed pictures of the sky in history."
  • Lucky Imaging (Score:5, Insightful)

    by dlawson ( 209945 ) on Monday September 03, 2007 @09:15PM (#20458319)
    First post, huh.
This technique is often used by amateur astrophotographers using newer CCD cameras and even webcams. Astronomy Picture of the Day http://antwrp.gsfc.nasa.gov/apod/astropix.html [nasa.gov] is a great site to see this stuff. I haven't checked Google's pictures, but I am sure that there would be a number of them there, too.
    The quality of some of these photos is amazing.
    davel
    • by ackthpt ( 218170 )

      CCD cameras need not all cost £££ or $$$. I'm in the midst of converting a Philips SPC900NC to an astro imaging camera. Alas, I don't think I'll finish in time for a trip with the scope to high elevation next weekend.

      • by Rei ( 128717 )
        What are you using to do the conversion? I tried using my DSC-H2 for afocal astrophotography, but due to its fast lens, I had really severe vignetting (to the degree of the pictures being nearly useless). I really need eyepiece projection or prime focal, but for that, I need something that I can remove the lens on (i.e., a DSLR or CCD). I've also heard good things about using quality webcams for astrophotography (esp. for lunar & planetary).

        Are you familiar with any shift-and-add or automated lucky i
    • Re:Lucky Imaging (Score:5, Interesting)

      by Anonymous Coward on Monday September 03, 2007 @11:47PM (#20459537)
      Apologies for not having an account - but I would really like to ask a question for someone who understands the process.

      the wikipedia entry on this subject http://en.wikipedia.org/wiki/Lucky_imaging [wikipedia.org] states that new procedures take, '... advantage of the fact that the atmosphere does not "blur" astronomical images, but generally produces multiple sharp copies of the image'.

Does the correction algorithm apply a single vector to each image (i.e. the entire frame is shifted as a unit) to produce the composite, or is a vector field applied at every pixel to shift the pixels individually toward their correct centres? Also, if it is pointwise, what type of transform is being applied: affine, perspective, etc.?

         
      • Please mod parent up so that someone knowledgeable can answer the question - I don't know the answer myself, but would love to, and the Anonymous Coward score of zero means many people may miss this great question.
      • Re:Lucky Imaging (Score:5, Informative)

        by theckhd ( 953212 ) on Tuesday September 04, 2007 @08:10AM (#20462627)
        From this paper [arxiv.org], which is linked to in the Wikipedia article:

The frame selection algorithm, implemented (currently) as a post-processing step, is summarised below:
1. A Point Spread Function (PSF) guide star is selected as a reference to the turbulence-induced blurring of each frame.
2. The guide star image in each frame is sinc-resampled by a factor of 4 to give a sub-pixel estimate of the position of the brightest speckle.
3. A quality factor (currently the fraction of light concentrated in the brightest pixel of the PSF) is calculated for each frame.
4. A fraction of the frames are then selected according to their quality factors. The fraction is chosen to optimise the tradeoff between the resolution and the target signal-to-noise ratio required.
5. The selected frames are *shifted-and-added to align their brightest speckle positions*.
        (bolding mine)

So it looks like each frame is shifted as a whole rather than pixel by pixel. That makes sense given the description of the process, since the theory is that the frames you're picking in the Lucky Imaging technique are high-quality images with a random offset due to the atmosphere.
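For concreteness, here's a minimal numpy sketch of that pipeline (names and parameters are illustrative, and the paper's 4x sinc resampling for sub-pixel accuracy is omitted): score each frame by the fraction of the guide star's light in its brightest pixel, keep the best few percent, and shift whole frames so the brightest speckles line up before adding.

```python
import numpy as np

def quality(frame, box):
    """Step 3: fraction of the guide star's light concentrated in the
    brightest pixel of its PSF. box = (y0, y1, x0, x1) around the star."""
    y0, y1, x0, x1 = box
    star = frame[y0:y1, x0:x1].astype(float)
    return star.max() / star.sum()

def lucky_stack(frames, box, keep=0.014):
    """Steps 4-5: keep the sharpest fraction of frames, then shift each
    frame as a whole (one offset per frame, not per pixel) so its
    brightest speckle lands in the same spot, and add them up."""
    y0, y1, x0, x1 = box
    ranked = sorted(frames, key=lambda f: quality(f, box), reverse=True)
    best = ranked[:max(1, int(keep * len(ranked)))]

    stack = np.zeros(best[0].shape, dtype=float)
    for f in best:
        sy, sx = np.unravel_index(np.argmax(f[y0:y1, x0:x1]),
                                  (y1 - y0, x1 - x0))
        # np.roll wraps around at the edges; fine for a sketch,
        # not for real data near the frame border.
        stack += np.roll(f, (-sy, -sx), axis=(0, 1))
    return stack / len(best)
```

Note the single (sy, sx) offset per frame: that's the whole-frame shift, which answers the AC's affine-vs-pointwise question above.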
  • by Ant P. ( 974313 ) on Monday September 03, 2007 @09:15PM (#20458327)
    ...can the same be applied in space telescopes to get rid of the interference of the gas clouds they're looking at?
    • by Anonymous Coward on Monday September 03, 2007 @09:21PM (#20458387)
      Well sure. All you have to do is bounce your laser off of those gas clouds to find out how to compensate for them. That should only take a couple hundred or a couple of thousand years with a laser that would consume more power than all of the Earth uses. Oh, and you better hope that that gas cloud doesn't change in the transit time.
      • Re: (Score:3, Insightful)

        by vtcodger ( 957785 )
        ***Well sure. All you have to do is bounce your laser off of those gas clouds to find out how to compensate for them.***

        That's adaptive optics. 'Lucky imaging' looks to be something different. Sounds like Lucky Imaging tries to catch and merge portions of the image that occasionally, by chance, make it through the ever changing atmosphere with minimal distortion.

But I think that the answer to the original question is probably still 'No'. It doesn't sound like Lucky Imaging per se is an answer to the q

        • Re: (Score:3, Informative)

          by theckhd ( 953212 )
          I think your suspicions are probably correct.

          Lucky Imaging relies on the fact that every so often, a really high-quality image makes it through the atmosphere almost unperturbed (based on the Kolmogorov model [cam.ac.uk] of turbulence). While I don't know whether the same model can be applied to cosmic gas clouds, there may be another model that could accurately model the phase distortions those clouds impress upon a wavefront.

          To achieve this one must take many very short-exposure (compared to the time-scale of atmosp
    • by Aesir1984 ( 1120417 ) on Monday September 03, 2007 @09:31PM (#20458485)
The distortion they are trying to get rid of is caused by motion of the air in the atmosphere. It's similar to the waves and blurring you see looking across a parking lot on a hot day. They put space telescopes out of the atmosphere to get above these effects. The objects they're looking at don't have this problem because the thing being imaged is what is giving off the light; it's not something sitting between the source and the viewer the way the atmosphere is, so it doesn't distort the image to the same extent. I would expect this technique to work rather well for bright objects; however, I wouldn't expect it to work well for the very dim objects that the Hubble is normally tasked to look at. In order for them to use this technique they have to take many images per second. For very dim objects this would mean only a few photons per picture, not nearly enough to figure out whether this image is any sharper than any other. So we won't be able to get rid of space telescopes or adaptive optics just yet.
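The photon-starvation point is easy to demonstrate with a toy numpy experiment (all numbers here are invented): with thousands of photons per frame, the "fraction of light in the brightest pixel" score cleanly separates sharp frames from blurred ones; with a few photons per frame, shot noise swamps the score and the ranking is close to a coin flip.

```python
import numpy as np

rng = np.random.default_rng(0)

def scores(mean_photons, sharp, n_frames=1000, size=8):
    """Quality score (fraction of light in the brightest pixel) for
    simulated frames of a point source with a sharp or blurred PSF."""
    yy, xx = np.mgrid[:size, :size] - size // 2
    sigma = 0.7 if sharp else 2.0           # crude stand-in for seeing
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    psf /= psf.sum()
    frames = rng.poisson(mean_photons * psf, size=(n_frames, size, size))
    flat = frames.reshape(n_frames, -1)
    return flat.max(axis=1) / flat.sum(axis=1).clip(min=1)

for photons in (3, 3000):
    s, b = scores(photons, sharp=True), scores(photons, sharp=False)
    # How often does a genuinely sharp frame score below a blurred one?
    mixup = (s[:, None] < b[None, :]).mean()
    print(f"{photons:5d} photons/frame: mix-up rate ~ {mixup:.2f}")
```

At a few photons per frame the selection step has nothing to grab onto, which is exactly why this wouldn't displace Hubble for the faintest targets.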
    • No, and this is why. (Score:3, Interesting)

      by edunbar93 ( 141167 )
      Interstellar gas clouds are pretty static. You would have to take one image every, say, year or maybe 100 years to really get any difference in the image quality. Whereas the earth's atmosphere produces an effect almost exactly the same as if you were to look at the bottom of a swimming pool, and in about the same timeframe.

      No, the images we get right now from space telescopes are the best we can get at any given epoch, and that's just the way it is.
    • by TMB ( 70166 )
      Not exactly, but there are some techniques in the radio that are kind of similar. The problem is that the timescale for gas clouds between us and a given object to change configuration is usually longer than an observation. But in some cases you can look at how fast you see certain kinds of fluctuations that are due to intervening gas clouds and infer the size of the object, even when that object is too small to have been resolved.

      [TMB]
"Amateur Lucky Imaging is popular because the technique is so cheap and effective. The low cost means that we could apply the process to telescopes all over the world."

Can't they use the same techniques with the HST itself?
    • Re:But surely... (Score:5, Insightful)

      by ScrewMaster ( 602015 ) on Monday September 03, 2007 @09:20PM (#20458373)
I'm just blowing smoke here, but it seems to me that a technique designed to compensate for atmospheric distortion might not be all that useful when there's no atmosphere.
      • Re: (Score:2, Interesting)

        by click2005 ( 921437 )
The technique takes the clearer portions from many images and merges them. The article says that some portions are less smeared than others but doesn't say if the atmosphere was also magnifying the target or not. I know astronomers have used gravity from intervening distant objects to magnify other distant objects, so couldn't a similar technique be used there?
    • Re:But surely... (Score:5, Informative)

      by drudd ( 43032 ) on Monday September 03, 2007 @09:33PM (#20458507)
      As the previous poster noted, there isn't any atmosphere and thus the technique isn't useful for HST.

Additionally, while they don't mention details in the article, I presume they have a specially designed camera. This is an old technique, but it's generally limited to very bright objects due to something called readout noise. Basically, all CCDs produce an additional signal due to the process of reading out the data. This limits the effectiveness of repeated short observations to sources which are much brighter than this noise, since the noise is added again with every readout and so accumulates as more images are taken.

      To image distant galaxies you typically have to take exposures of one to several hours, and thus this technique isn't useful.

      Doug
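A back-of-envelope version of that trade-off, with invented numbers (not from TFA): shot noise depends only on total integration time, but read noise is charged once per readout, so chopping an hour into 20 fps frames buries a faint source.

```python
import math

signal_rate = 0.5     # photo-electrons/sec from a faint source (made up)
read_noise  = 10.0    # electrons RMS per readout (typical classic CCD)
total_time  = 3600.0  # one hour of total integration

for n_frames in (1, 72000):            # one long exposure vs 20 fps
    signal = signal_rate * total_time  # total signal is the same
    shot   = math.sqrt(signal)         # photon (shot) noise, RMS
    read   = read_noise * math.sqrt(n_frames)  # read noise accumulates
    snr    = signal / math.sqrt(shot**2 + read**2)
    print(f"{n_frames:6d} frames: read noise {read:7.0f} e-, SNR {snr:6.2f}")
```

The hour-long exposure comes out fine; the same hour split into 72,000 readouts is hopeless, which is why the low-readout-noise chip described in the replies below is the hardware that makes Lucky Imaging work on fainter targets.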
      • Re:But surely... (Score:5, Informative)

        by hazem ( 472289 ) on Monday September 03, 2007 @09:57PM (#20458685) Journal
        Additionally, while they don't mention details in the article, I presume they have a specially designed camera.

        They are using a new kind of CCD that somehow lowers the noise floor. Details are at:
http://www.ast.cam.ac.uk/~optics/Lucky_Web_Site/LI_Why%20Now.htm [cam.ac.uk]

        In fact this site (same basic place) is much more informative than the press release and answers a lot of questions:
http://www.ast.cam.ac.uk/~optics/Lucky_Web_Site/index.htm [cam.ac.uk]
        • Re: (Score:3, Informative)

          by andersa ( 687550 )
To sum up, the problem is readout noise. The faster you read out the CCD, the more noise you get. When you image a faint object the readout noise exceeds the signal level. The reason amateur astronomers can use this technique anyway is because they are imaging bright objects (like planets), so the signal is easily discernible from the readout noise.

Now there is a new type of CCD with a built-in signal multiplier that precedes the readout step in each individual pixel. You can simply select an approp
Just wanted to add that amateur astronomers can image Deep Space Objects (DSOs) using web cams modified for long exposure, as well as do Lunar/Planetary Imaging (LPI) using unmodified web cams.

Either one is enhanced by 'stacking' images and processing a bunch of them. This is because the S/N ratio improves with additional images added to the stack.

The signal increases linearly with the number of images, while uncorrelated noise grows only as the square root of that number - so the signal pulls ahead of the noise.
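A quick numpy check of that arithmetic (signal and noise levels are arbitrary): averaging N frames leaves the signal alone while independent noise shrinks like the square root of N.

```python
import numpy as np

rng = np.random.default_rng(1)
signal, sigma = 100.0, 20.0          # arbitrary true value and noise RMS

for n in (1, 16, 256):
    frames = signal + rng.normal(0.0, sigma, size=(n, 10000))
    stacked = frames.mean(axis=0)    # the stack: average of n frames
    print(f"n={n:4d}: residual noise {stacked.std():5.2f} "
          f"(predicted {sigma / np.sqrt(n):5.2f})")
```

Sixteen frames cut the noise by a factor of four; 256 frames by a factor of sixteen.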
Possibly, if the HST were as large and powerful as the land-based telescopes.

      "The images space telescopes produce are of extremely high quality but they are limited to the size of the telescope," Dr Mackay added. "Our techniques can do very well when the telescope is bigger than Hubble and has intrinsically better resolution."
      • by CalSolt ( 999365 )
        Well shit, I'm waiting for the array of 1,000 foot telescopes on the moon. THEN we'll be doing some serious astronomy.
  • Exposure Time? (Score:3, Insightful)

    by MonorailCat ( 1104823 ) on Monday September 03, 2007 @09:21PM (#20458385)
TFA states that the camera takes 20 frames per second. Aren't most exposures of deep space objects on the order of seconds or minutes (or longer)? Seems like 1/20th of a sec wouldn't cut it for all but the brightest objects.
    • Re:Exposure Time? (Score:5, Informative)

      by gardyloo ( 512791 ) on Monday September 03, 2007 @09:27PM (#20458449)
      Add up 1000 of those frames, and you have a 50 second exposure.
No, you don't. There is a certain threshold for signals to be recognised above noise in an image sensor. If the signals you're trying to detect are so weak, many rapid samples will only give you more noise.
The noise relative to the signal should go down as roughly 1/N^(1/2), where N here is your number of exposures (as long as your signal isn't changing between frames). More exposures means a better signal-to-shit ratio.

          Now, if you're basing a real signal as being above some threshold, and noise as being below, then you need only one exposure, if the signal is present in that exposure. Otherwise, just keep snapping frames. No different than exposing film or a long-exposure sensor for a longer time.
      • Well, yes, but then why don't you expose 50 seconds in the first place? If you simply add up the images you'll suffer the same distortions that prevented good images in the first place. Unless you do some math tricks before summing them up you gain nothing at all. But to do that the single frames must have a clearly visible signal to work with, hence you either need a very sensitive camera or limit yourself to the brighter objects up there.
        • Because you can throw away a few of 'em for various reasons, and you can register them electronically, which takes some burden off of the telescope mount.
        • If you simply add up the images you'll suffer the same distortions that prevented good images in the first place.

          Expose for 60 min, giving (at 20 exposures per second) 72000 exposures.

          Pick only the best 1000 exposures, keeping only the best, sharpest, clearest 1.4%, or getting rid of the worst 98.6%

Run them through your imaging algorithm.

          You now have a 50 second exposure without all the blurring, distortion, and general cruft you threw away with the 98% (of the total) crap exposures you got rid of.

          ...the
The procedure you described only works under a certain premise: if you make thousands of images you can only take those that show the same type of blurring (or assure the distortions don't change during the time of exposure), so adding them will actually improve the image quality. When the signal is hidden in the noise and not discernible (and I don't mean necessarily by the naked eye) you have no way of telling which images are the best, and simple adding up won't help. A sensitive camera is therefore a "m
    • Seems like 1/20th of a sec wouldn't cut it for all but the brightest objects.

      One of the short texts below the two initial articles says that it's a new camera capable of detecting individual photons:

      This new camera chip is so sensitive that it can detect individual particles of light called photons even when running at high speed. It is this extraordinary sensitivity that makes these detectors so attractive for astronomers.

      Unfortunately it doesn't give any details on how much light is needed compared to other techniques.

  • by kebes ( 861706 ) on Monday September 03, 2007 @09:23PM (#20458409) Journal
One of the main limitations of ground-based optical telescopes (and one of the reasons that Hubble gets such amazing images) is that the atmosphere generates considerable distortion. Random fluctuations in the atmosphere cause images to be blurry (and cause stars to twinkle, of course). The technique they present appears to involve taking images at very high speed. They developed an algorithm that looks through the images and identifies the ones that happen to not be blurry (hence "lucky"). By combining all the least blurry images (taken when the atmosphere just happened not to be introducing distortion), they can obtain clear images using ground-based telescopes (which are bigger than Hubble, obviously). I imagine the algorithm they've implemented tries to use sub-sections of images that are clear, to get as much data as possible.

Overall, a fairly clever technique. I wonder how this compares to adaptive optics [wikipedia.org], which is another solution to this problem. In adaptive optics, a guide laser beam is used to illuminate the atmosphere above the telescope. The measured distortion of the laser beam is used to distort the imaging mirror in the telescope (usually the mirror is segmented into a bunch of small independent sub-mirrors). The end result is that adaptive optics can essentially counteract the atmospheric distortion, delivering crisp images from ground telescopes.

I would guess that adaptive optics produces better images (partly because it "keeps" all incident light, by refocusing it properly, rather than letting a large percentage of image acquisitions be "blurry" and eventually thrown away), but adaptive optics is no doubt expensive. The technique presented in TFA seems simple enough that it could be added to just about any telescope, increasing image quality at the cost of acquisition time.
Both are employed pretty heavily by advanced "Amateur" astronomers. I put amateur in quotes because people at the high end of the hobby may have setups costing $50,000-$100,000+, going up to as much as people are willing to spend. There are several companies (http://www.sbig.com/ [sbig.com] for example) that specialize in producing imaging equipment and software for these setups. It's pretty amazing what these people are able to do.
      • by john83 ( 923470 )

Both are employed pretty heavily by advanced "Amateur" astronomers. I put amateur in quotes because people at the high end of the hobby may have setups costing $50,000-$100,000+, going up to as much as people are willing to spend. There are several companies (http://www.sbig.com/ [sbig.com] for example) that specialize in producing imaging equipment and software for these setups. It's pretty amazing what these people are able to do.

        I attended a lecture a year or two ago by a respected academic in adaptive optics (Chris Dainty [www.ucg.ie], for the curious). He described efforts to put together an AO kit for amateur astronomers. I think he said that he wasn't able to get it under a few thousand Euro. It's not a cheap science, for sure.

    • by Phanatic1a ( 413374 ) on Monday September 03, 2007 @10:07PM (#20458769)
      ObRTFA: RTFA. It's not used *instead* of adaptive optics, it's used together with adaptive optics.

      The camera works by recording the images produced by an adaptive optics front-end at high speed (20 frames per second or more). Software then checks each one to pick the sharpest ones. Many are still quite significantly smeared but a good percentage are unaffected. These are combined to produce the image that astronomers want. We call the technique "Lucky Imaging" because it depends on the chance fluctuations in the atmosphere sorting themselves out.
      • Re: (Score:3, Informative)

        by edunbar93 ( 141167 )
        ObRTFA: RTFA. It's not used *instead* of adaptive optics, it's used together with adaptive optics.

        No, they propose that it be used together with adaptive optics. The research that was done to produce this press release was actually done at the Mount Palomar observatory, which was completed in 1947 [caltech.edu] and most certainly does not feature adaptive optics.

        From the article:

        The technique could now be used to improve much larger telescopes such as those at the European Southern Observatory in Chile, or the Keck teles
    • by jstott ( 212041 )

      Overall, a fairly clever technique. I wonder how this compares to adaptive optics, which is another solution to this problem.

      The two techniques are unrelated; either one or both at the same time can be used to improve the images. Actually, the sample images from the original article were taken through a telescope (Palomar) using basic adaptive optics to improve the image before the "lucky" software even saw the data.

      As you suggest, this also works with sub-sections of the image. I saw this same tech

    • by TheMCP ( 121589 )
      Adaptive optics does not require a guide laser. The system often works by identifying an object in the image which is essentially small: a far away star that will register as more or less a point source, for example. It then uses that as its guide, and distorts the mirror to minimize the image of that object: essentially, to focus it. If that object is "focused", then objects near it generally are too.

      Using this methodology, a large ground based telescope can easily achieve better imaging than the Hubble, a
  • by Erris ( 531066 ) on Monday September 03, 2007 @09:25PM (#20458431) Homepage Journal

    DIY [cam.ac.uk].

    • Re: (Score:2, Informative)

      by [rvr] ( 93881 )

This is indeed no news to amateur astronomers. This technique has been used extensively by planetary imagers in recent years to take amazing photos of Jupiter, Mars and Saturn. The basic tools are a good webcam to take AVI files and Registax to process the frames. Take a look at Damian Peach's best images [damianpeach.com].

      As for pro, there is even an article in Wikipedia about it: Lucky imaging [wikipedia.org]: "Lucky imaging was first used in the middle 20th century, and became popular for imaging planets in the 1950s and 1960s (using c

    • by Trogre ( 513942 )
      Cool. Anyone know if there's a GIMP plugin for this sort of thing?

  • Spider-sense (Score:5, Interesting)

    by BitwizeGHC ( 145393 ) on Monday September 03, 2007 @09:26PM (#20458447) Homepage
    That is really quite amazing, and reminds me a bit of the jumping spiders whose retinas vibrate to increase their optic resolution.
You do it too. See Saccades [wikipedia.org].
      • by jafuser ( 112236 )
        I wonder if this may be one of the contributing factors to the uncanny valley [wikipedia.org] effect in 3D animation. One complaint is that the faces, and especially the eyes look "dead". Perhaps the 3D studios should hire some anime artists, who sometimes greatly exaggerate the saccade behavior.
You know, by now I wonder if the uncanny valley effect actually exists at all. Remember, it's just a hypothesis.

The thing is, if you carefully cherry-pick your examples, and/or are allowed to hand-wave where any given example should fall, you can convincingly argue the uncanny valley effect. But the problem comes when you anchor two examples which should, for example, be in the valley, yet a third in the middle is not, although by the shape of the curve it should be there too.

          For example, the FF movies were sup
Since it's running through a computer algorithm and piecing many together, and isn't just a single "lucky" picture, I wonder how much error is introduced by the algorithm. I mean sure, an algorithm like this might work well most of the time, but what happens when it produces an image that looks clear, but isn't accurate?
Assuming the noise is random and the object isn't, these should be pretty close to what the same picture taken with the same optical equipment in a vacuum would produce, with a slight bias towards the center of the noise. That is, if your noise is evenly distributed between 0.0 and 5.0, then averaging a few hundred slices through time would result in a fixed noise of 2.5 across the board, which is the same thing as no noise.

      There are things this wouldn't be useful for. Mainly anything that might be cha
The principle is that by taking lots of pictures of the same thing, you can correct the error. The larger the sample you take, the closer you get to the true image. For error to be amplified you would almost need the same random dust particle arrangement from the telescope to the edge of the atmosphere in a significant sample of the images, which is very unlikely.

      Of course you probably understand that.

      what happens when it produces an image that looks clear, but isn't accurate.

      In answer to your actual qu

  • Dr. Mackay? (Score:3, Informative)

    by comrade k ( 787383 ) <comradekNO@SPAMgmail.com> on Monday September 03, 2007 @09:30PM (#20458481)

    Dr Craig Mackay is happy to be contacted directly for interviews
    Man, the whole Stargate franchise has been really going down the drain since they cancelled SG-1.
  • by szyzyg ( 7313 ) on Monday September 03, 2007 @09:36PM (#20458549)
There are several pieces of software which do some parts of this - Registax is what I use, but amateurs usually only have enough aperture to make this work for bright objects like planets. You can take a good quality webcam (the top-of-the-line Philips webcams are the best bang for your buck), record some video of a planet through a telescope and then pick out the least distorted images before adding them together to create the final image. Now, the trick is getting the best measurement of which images are undistorted, and getting enough light in each frame while keeping the exposure time short enough to beat the atmosphere.

    Look at the planetary images here [imeem.com] for my attempts at this technique.
The difference is the resolution of the camera. The Philips ToUcam can produce movies at 640x480, whereas I would expect that the cameras they were using in this research produce research-grade resolutions in excess of 1280x1024, which is no small feat to get working at 30fps. Also, to make this work with a ToUcam requires very bright objects and/or very large telescopes.

      However, it's worth noting that amateurs' results today are typically much better than those of professional astronomers 30 or even 20 ye
I emailed the principal researcher on this project, asking him what was novel about his approach, since amateurs have been "stacking" images for years. Below is his response:

From: Craig Mackay [mailto:cdm@ast.cam.ac.uk]
Sent: Tuesday, September 04, 2007 5:20 AM
Subject: Re: What's new with Lucky?

Dear Tom, Thank you for your message. What is new about this (and gets rather lost with the media coverage) is being able to use lucky imaging on a much larger telescope. With a 2.5 meter telescope we are able
  • by Anonymous Coward on Monday September 03, 2007 @09:40PM (#20458579)
    TFA mentions that they can achieve images better than Hubble. The sample image they show [cam.ac.uk], of the Cat's Eye Nebula, isn't as sharp as the Hubble image of the same object [esa.int].

    Probably they can push their technique harder than this initial image suggests (it was mainly comparing the "lucky" image with a conventional, blurry, ground-based image)... But I just thought it would be good to show Hubble's pictures alongside.
    • Thank you for the Hubble version of the Cat's Eye Nebula picture.

1. The Slashdot summary should say "not as good as Hubble (HST)"; instead it mistakenly says better than Hubble.

2. The site TFA links to should (chuckle) show Hubble pictures along with the other ground-based pictures.

      God bless John Grunsfeld and the other NASA space walking astronauts who fix HST. Also the vast supporting cast for those missions.

      Thanks for the update.
      Jim
    • Re: (Score:3, Informative)

      by Dr. Zowie ( 109983 )
      It appears that they simply picked a bad demo image. The Caltech site has a much more compelling sample at http://www.astro.caltech.edu/~nlaw/lamp_pics/ [caltech.edu].
You would think that someone who developed a state-of-the-art method to remove noise and distortions from atmospheric images would think twice about using a salt-and-pepper bitmap background.
Technology improves over time and it gets cheaper. The HST is 20 years old, and the technology to design and build it is even older. New inventions will come along in the next decades to make Lucky seem overpriced. But that doesn't stop people from deploying it now.
    • Unfortunately, space travel isn't subject to Moore's Law. Spaceborne stuff is going to remain much, much more expensive until space access is routine -- and even then it will remain very (rather than insanely) expensive so long as we are using chemical rockets and not reusable fusion drives or some other science fiction gizmo.
    • by ceoyoyo ( 59147 )
      Space telescopes like Hubble have this unfortunate requirement that they be launched into space. An equivalent telescope made today, or twenty years from now (barring something revolutionary) will still cost a lot more than a ground telescope. Note that most of the telescopes they're talking about are at least as old as Hubble.
  • Not convinced by TFA (Score:5, Interesting)

    by Oligonicella ( 659917 ) on Monday September 03, 2007 @09:59PM (#20458709)
    Just went and looked up the Cat's Eye Nebula as taken by the Hubble. Lot more detail. What gives? Someone able to explain that, please?
    • This is all a guess.

The Hubble images probably resolve fainter objects but the Lucky images are sharper. Sharpness means resolution of distinct objects is better. The Hubble may see more while the Lucky sees things sharper but misses out on faint objects. The big question for me is how good the Earth-based telescope is at picking up faint images, which appears to be Hubble's strength. The Hubble Telescope can peer at an object for hours at a time with an open aperture. A ground-based telescope cannot because the
  • by tjstork ( 137384 ) <todd.bandrowsky@ ... UGARom minus cat> on Monday September 03, 2007 @10:06PM (#20458759) Homepage Journal
Before the scientists claim victory over Hubble, let's see their camera best some of Hubble's best work:

    http://hubblesite.org/ [hubblesite.org]

There are a number of excellent Hubble images of just about everything, from our solar system to the most distant galaxies.

    I would put my money on Hubble, for two reasons.

First, the averaging algorithm is not without its flaws. They make the assumption that by averaging out a bunch of images, you eliminate distortion. For this to work, you have to assume that the probability of a particular pixel being in the right spot is higher because the distortion is essentially random, and that could theoretically not be the case. If the distortion is completely random, then averaging a set of images would essentially lose the pixel that is being pushed around its "real" spot by the atmosphere, and you can actually see that, as the corrected images still look muddy compared to their HST or even adaptive optics counterparts.

    Secondly, the atmosphere doesn't just distort light, it also filters it. You can use averaging to remove distortion "noise", but, there's really no way to ascertain what information was removed by the atmosphere.

The bottom line is, yes, you can get some pretty good results with averaging software, but if you have money to spend, the best images are going to be space-based, and it's still going to cost a billion dollars. Given the promise the heavens hold for the advance of human understanding, let alone essentially infinite resources, one only hopes that policy makers will not be misled by the outrageous claim that one can get the best images from the ground. You can't. HST should not be thought of as an aberration made obsolete by adaptive optics or low-budget averaging. Low-budget averaging and adaptive optics really need to be thought of as getting by until we can put larger, and better, visible-wavelength telescopes into space.

Imagine what a Mt. Palomar sized mirror could do in space!
    • There's also the issue of deep sky surveys - these require looooooooooooong exposures. If you park your butt in front of a telescope in your backyard for a look through it, you will see more detail the longer you're parked. Your eye is able to pick out more information with longer exposure. So it is with imaging. Yeah, a really big mirror like on Palomar means you don't need to spend as much time imaging the same section of space as a smaller scope to capture equivalent detail, but here's the deal: the HST
    • by kindbud ( 90044 )
      First, the averaging algorithm is not without its flaws. They make the assumption that by averaging out a bunch of images, you eliminate distortion.

No, they don't assume that. Their assumption is that an average of a bunch of images selected because they are probably sharper than average will be sharper than an average of a totally random selection of images. And that is a sound assumption. The trick is in selecting images automatically that have a high probability of being sharper than average. A pers
  • Is the algorithm used to pick the best image, or part of an image open source?
  • by bit01 ( 644603 ) on Monday September 03, 2007 @11:08PM (#20459231)

    The technique they're using, while interesting, needs more justification.

    I'm wary when I see people doing any selection on random data because there's the problem of selection bias; throwing away the hundred results that don't match what they want and keeping the one that does. Just getting an image that seems plausible is not good enough.

    Their quality measure [cam.ac.uk] isn't one I'd use. They should be comparing the technique-plus-low-resolution-optics against high-resolution-optics directly. That is, doing image differencing of images taken at the same time and seeing what differences there are. They may well have good reason for assuming it's all okay but until somebody does that test they cannot assume they've removed all the variability that the atmosphere provides; there could be all sorts of hidden biases due to various atmospheric, molecular and statistical effects.

    ---

    "Intellectual Property" is unspeak. All inventions are the result of intellect. A better name is ECI - easy copy items.

We're taking pretty pictures of the sky, not doing brain surgery.
    • by thePig ( 964303 )
      An interesting point -
Since we are doing science, is it a good idea to throw nonconformist images away as improper?
      Are we not bringing our own bias also to this? If we are only looking at what we expect to find and throw away the unexpected, wouldn't science take a hit?
Well, I guess that all depends on whether or not you can *prove* that the data you throw away is worth throwing away. If you take a thousand images of exactly the same thing over the course of an hour, and keep only the best images, that's a little different than taking toxicology readings from a thousand different patients and keeping only the best results. The cat's eye nebula isn't going to change measurably from our perspective over the course of an hour. If you keep careful documentation on what you do
    • It's not clear to me why they chose the image they did -- but the imager does much better (and appears to perform as well as the headline claims) in the M13 core -- check out the sample images at "http://www.astro.caltech.edu/~nlaw/lamp_pics/".
  • by edremy ( 36408 ) on Monday September 03, 2007 @11:22PM (#20459339) Journal
    As many have pointed out, there are a whole pile of applications that do the same thing for amateur telescopes. I've taken my Dad's 40-year-old 6" Dynascope, fixed up the motor drive, bought a $60 webcam (Philips SPC900), adapter and UV filter and gotten some quite nice photos of the Sun, the Moon, Jupiter and Saturn by capturing a few thousand frames and running them through Registax. (I'm working on Mars and Uranus- a whole lot harder with a small scope from a suburban backyard.)

    I'm curious though about how they deal with some of the "features" you get to see with this technique. It's *very* easy to stack a few hundred images, run Registax's sharpening filter and get some interesting pictures of stuff that doesn't really exist. I'm not sure I really trust the fine detail in my photos- unless I see it in another taken a few hours later it may well not be real.

  • by XNormal ( 8617 ) on Tuesday September 04, 2007 @02:34AM (#20460775) Homepage
Even if this technique can eventually produce better pictures at lower cost it is still limited to wavelengths that can penetrate the atmosphere. Some of the most exciting recent discoveries are in infrared (Spitzer) and X-ray (Chandra). The next big telescope (James Webb Space Telescope) is also for infrared.
  • and it's 50,000 times cheaper than Hubble

    That's a bit of a cheap shot. Hubble has been in operation for 17 years and has been a vital research tool. The tech for this new technique is, well, NEW.

  • by geowiz ( 571903 ) on Tuesday September 04, 2007 @03:40AM (#20461109)
I invented this process in 1995. Here is my original post on the sci.image.processing newsgroup. My old email address is no longer active; the new one is geopiloot at mindspring.com (reduce the number of o's in "pilot" to one).

It was ironic that many people jumped out at the time to say it wouldn't work. It does work, and it works well. In fact, most of the additive image processing now done by amateur astronomers everywhere using PC software is based on my invention, which I did not patent.

    George Watson

    From: George Watson (71360.2455@CompuServe.com)
    Subject: virtual variable geometry telescope
    Newsgroups: sci.image.processing
    Date: 1995/12/11

    Has anyone implemented a virtual variable geometry telescope using
    only a CCD attached to a normal non variable telescope?

    It would work like this:

    Take extremely short duration images from the CCD at a frequency
    faster than the frequency of atmospheric distortion (1/60 sec I have
read is the minimal needed timeslice for physically correcting
    atmospheric distortion in real time so maybe an exposure of 1/120 sec
    would be short enough).

    Choose via computer a high contrast image as a reference image.

    Continue to take rapid short duration images and keep only the high
contrast ones that have minimal displacement/offset from the
    reference image.

    Sum each of those acceptable images to a storage that will become the
    final image.

    What you should end up with is a final image that has minimal
atmospheric distortion because all the low contrast and non
    matching images will have been discarded.

    Obviously you build an image over a longer period of time than with
    real time optical correction but at perhaps lower cost.

    Anyone know whether this has been proposed/done or researched?

    --
    George Watson

    The opinions expressed here are those of the fingers
    of George Watson only; not those of George Watson himself.

    Please reply via this newsgroup. No Email unless requested,
    Thanks.

Looks like you thought about the idea way back in 1995. Would it qualify as an invention if you haven't done the actual follow-up work, crunched the numbers, done the heavy lifting to show it actually works?
    • Reinvention (Score:3, Insightful)

      by maroberts ( 15852 )
http://www.ast.cam.ac.uk/~optics/Lucky_Web_Site/index.htm [cam.ac.uk] refers to a 1978 reference (Fried). It seems that some ideas keep popping up; only the technology actually available to do it has progressed from imaginary to real.
    • by sploxx ( 622853 )
Hi, I know how it feels to be almost completely ignored (even here on /.) for far too long when you have a good idea. Kudos!

      I still think too many people's ideas are lost because people too often want to stay on the main path.

      For example, I myself thought (although surely not as the first person) about wireless self-organizing mesh networking (including car networks) a long time ago (must have been the modem days) - before it got popular/mainstream. People thought I was crazy.
Amateurs have been doing this for years with video cameras and then web cams. Registax is one of the workhorse programs for automatically selecting frames from a digital video stream for stacking. There are a couple of others but that one's been the most popular for years.

