Sharpest Images With "Lucky" Telescope 165
igny writes "Astronomers from the University of Cambridge and Caltech have developed a new camera that gives much more detailed pictures of stars and nebulae than even the Hubble Space Telescope, and does it from the ground. A new technique called 'Lucky imaging' has been used to diminish atmospheric noise in the visible range, creating the most detailed pictures of the sky in history."
Lucky Imaging (Score:5, Insightful)
This technique is often used by amateur astrophotographers using newer CCD cameras and even webcams. Astronomy Picture of the Day http://antwrp.gsfc.nasa.gov/apod/astropix.html [nasa.gov] is a great site to see this stuff. I haven't checked Google's pictures, but I am sure that there would be a number of them there, too.
The quality of some of these photos is amazing.
davel
Re: (Score:1)
CCD cameras need not all cost £££ or $$$. I'm in the midst of converting a Philips SPC900NC to an astro imaging camera. Alas, I don't think I'll finish in time for a trip with the scope to high elevation next weekend.
Re: (Score:2)
Are you familiar with any shift-and-add or automated lucky i
Re:Lucky Imaging (Score:5, Interesting)
the wikipedia entry on this subject http://en.wikipedia.org/wiki/Lucky_imaging [wikipedia.org] states that new procedures take, '... advantage of the fact that the atmosphere does not "blur" astronomical images, but generally produces multiple sharp copies of the image'.
Does the correction algorithm apply a single vector to each image (i.e. the entire frame is shifted as a unit) to produce the composite, or is a vector field applied so that every pixel in the image is shifted individually toward its correct centre? Also, if it is pointwise, what type of transform is being applied: affine, perspective, etc.?
Re:Lucky Imaging (Score:5, Informative)
So it looks like each frame is shifted as a whole rather than pixel by pixel, which makes sense from the description of the process: the theory is that the images you're picking in the Lucky Imaging technique are high-quality images with a random offset due to the atmosphere.
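As an illustration of that whole-frame approach, here is a minimal shift-and-add sketch (my own illustration, not the researchers' code): each short exposure is aligned to a reference frame with a single global offset estimated by FFT cross-correlation, then co-added.

```python
# Sketch of whole-frame shift-and-add (illustrative, NumPy only).
import numpy as np

def estimate_shift(ref, frame):
    """Return the (dy, dx) integer shift that, applied to `frame`, aligns it to `ref`."""
    # Cross-correlation via FFT: the peak location gives the relative offset.
    corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(frame))).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrap-around indices to signed shifts.
    h, w = ref.shape
    return (dy if dy <= h // 2 else dy - h,
            dx if dx <= w // 2 else dx - w)

def shift_and_add(frames, ref):
    """Align every frame to `ref` with one global shift and average them."""
    acc = np.zeros_like(ref, dtype=float)
    for f in frames:
        dy, dx = estimate_shift(ref, f)
        acc += np.roll(f, (dy, dx), axis=(0, 1))
    return acc / len(frames)
```

This is the single-vector case the question above asks about; a pointwise vector field (as in full speckle or AO reconstruction) would be considerably more involved.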
Re: (Score:3, Informative)
http://www.astronomie.be/registax/html/multi_oper
If they can do this from earth... (Score:5, Interesting)
Re:If they can do this from earth... (Score:5, Funny)
Re: (Score:3, Insightful)
That's adaptive optics. 'Lucky imaging' looks to be something different: it tries to catch and merge the portions of the image that occasionally, by chance, make it through the ever-changing atmosphere with minimal distortion.
But I think that the answer to the original question is probably still 'No'. It doesn't sound like Lucky Imaging per se is an answer to the q
Re: (Score:3, Informative)
Lucky Imaging relies on the fact that every so often, a really high-quality image makes it through the atmosphere almost unperturbed (based on the Kolmogorov model [cam.ac.uk] of turbulence). While I don't know whether the same model can be applied to cosmic gas clouds, there may be another model that could accurately model the phase distortions those clouds impress upon a wavefront.
To achieve this one must take many very short-exposure (compared to the time-scale of atmosp
Re:If they can do this from earth... (Score:4, Insightful)
No, and this is why. (Score:3, Interesting)
No, the images we get right now from space telescopes are the best we can get at any given epoch, and that's just the way it is.
Re: (Score:2)
[TMB]
But surely... (Score:1)
can't they use the same techniques with the HST itself?
Re:But surely... (Score:5, Insightful)
Re:But surely... (Score:5, Informative)
Additionally, while they don't mention details in the article, I presume they have a specially designed camera. This is an old technique, but it's generally limited to very bright objects due to something called readout noise. Basically, all CCDs produce an additional signal due to the process of reading out the data. This limits the effectiveness of repeated short observations to sources which are much brighter than this noise, since the accumulated readout noise also grows with the number of images taken (as the square root of that number, since each read adds independent noise).
To image distant galaxies you typically have to take exposures of one to several hours, and thus this technique isn't useful.
Doug
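To see why readout noise punishes repeated short exposures, here is a small simulation (my own illustration, with made-up numbers, not from the article): each read adds independent Gaussian noise, so the co-added readout noise of N short frames grows as sqrt(N), while a single long exposure pays the read penalty only once.

```python
# Illustrative simulation of readout-noise accumulation over many short reads.
import numpy as np

def stacked_read_noise(read_noise, n_frames, n_trials=20000, seed=0):
    """Empirical standard deviation of the summed readout noise over n_frames reads."""
    rng = np.random.default_rng(seed)
    # Each trial sums n_frames independent reads, each with the given noise sigma.
    total = rng.normal(0.0, read_noise, size=(n_trials, n_frames)).sum(axis=1)
    return total.std()

# With 10 e- read noise, co-adding 100 short frames accumulates ~100 e- of
# readout noise (10 * sqrt(100)), ten times worse than one long exposure's
# single read.
```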
Re:But surely... (Score:5, Informative)
They are using a new kind of CCD that somehow lowers the noise floor. Details are at:
http://www.ast.cam.ac.uk/~optics/Lucky_Web_Site/L
In fact this site (same basic place) is much more informative than the press release and answers a lot of questions:
http://www.ast.cam.ac.uk/~optics/Lucky_Web_Site/i
Re: (Score:3, Informative)
Now there is a new type of CCD with a built-in signal multiplier that precedes the readout step for each individual pixel. You can simply select an approp
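A toy model of that electron-multiplying idea (hypothetical gain and noise figures, my own illustration): the signal is amplified before the readout stage, so the fixed readout noise no longer swamps single-photon events.

```python
# Illustrative model: electron multiplication happens BEFORE readout, so the
# readout noise is not amplified and single photons become detectable.
import numpy as np

def detected_fraction(em_gain, read_noise, n_events=10000, seed=0):
    """Fraction of single-photon events that clear a 5x-read-noise threshold."""
    rng = np.random.default_rng(seed)
    # One photoelectron per event, multiplied by the gain register; readout
    # noise is added only after the gain stage.
    samples = 1.0 * em_gain + rng.normal(0.0, read_noise, n_events)
    return float(np.mean(samples > 5.0 * read_noise))

# With 10 e- read noise and no gain, a single photon is buried in the noise;
# with a gain of ~1000 essentially every photon event stands out.
```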
Re: (Score:2)
Either one is enhanced by 'stacking' images and processing a bunch of them, because the S/N ratio improves with each additional image added to the stack.
The signal increases linearly with the number of images, while the (uncorrelated) noise grows only as the square root of that number, so the signal-to-noise ratio improves as the square root of the number of frames.
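A quick numerical check of the stacking statistics (illustrative numbers): the summed signal grows linearly with the number of frames while uncorrelated noise grows as its square root, so S/N improves as sqrt(N).

```python
# Empirical check that stacking N frames improves S/N by about sqrt(N).
import numpy as np

def stacked_snr(signal, noise_sigma, n_frames, n_trials=5000, seed=0):
    """Empirical signal-to-noise ratio of a sum of n_frames noisy measurements."""
    rng = np.random.default_rng(seed)
    stacks = (signal + rng.normal(0.0, noise_sigma,
                                  size=(n_trials, n_frames))).sum(axis=1)
    return stacks.mean() / stacks.std()

# Stacking 100 frames should improve S/N by roughly sqrt(100) = 10x over
# a single frame.
```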
Re: (Score:2)
As far as I know (I'm just a theorist), it is impossible to remove readout noise, since it depends on the actual distribution of electrons on the CCD. Someone who actually deals wi
Re: (Score:1)
"The images space telescopes produce are of extremely high quality but they are limited to the size of the telescope," Dr Mackay added. "Our techniques can do very well when the telescope is bigger than Hubble and has intrinsically better resolution."
Exposure Time? (Score:3, Insightful)
Re:Exposure Time? (Score:5, Informative)
Re: (Score:2)
Now, if you're counting a real signal as anything above some threshold, and noise as anything below, then you need only one exposure, provided the signal is present in that exposure. Otherwise, just keep snapping frames. It's no different from exposing film or a long-exposure sensor for a longer time.
Re: (Score:2)
Expose for 60 minutes, giving (at 20 exposures per second) 72,000 exposures.
Pick only the best 1,000 exposures, keeping only the sharpest, clearest 1.4% and discarding the worst 98.6%.
Run them through your imaging algorithm.
You now have a 50-second effective exposure without all the blurring, distortion, and general cruft you threw away with the 98.6% of crap exposures you got rid of.
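The recipe above can be sketched end to end (the sharpness proxy, keep fraction and function names are my own illustration, not the researchers' pipeline):

```python
# Minimal lucky-imaging sketch: grade each short exposure by a crude
# sharpness proxy, keep only the best ~1.4%, and co-add the survivors.
import numpy as np

def sharpness(frame):
    """Brightest-pixel value as a crude proxy for a frame's Strehl ratio."""
    return frame.max()

def lucky_stack(frames, keep_fraction=0.014):
    """Keep the sharpest fraction of frames and average them."""
    frames = np.asarray(frames, dtype=float)
    scores = np.array([sharpness(f) for f in frames])
    n_keep = max(1, int(len(frames) * keep_fraction))
    best = np.argsort(scores)[-n_keep:]   # indices of the sharpest frames
    return frames[best].mean(axis=0)
```

Real implementations also re-register each kept frame on its brightest speckle before co-adding, which is what removes the random atmospheric offset.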
Re: (Score:2)
Seems like 1/20th of a sec wouldn't cut it for all but the brightest objects.
One of the short texts below the two initial articles says that it's a new camera capable of detecting individual photons:
This new camera chip is so sensitive that it can detect individual particles of light called photons even when running at high speed. It is this extraordinary sensitivity that makes these detectors so attractive for astronomers.
Unfortunately it doesn't give any details on how much light is needed compared to other techniques.
Compared to adaptive optics? (Score:5, Informative)
Overall, a fairly clever technique. I wonder how this compares to adaptive optics [wikipedia.org], which is another solution to this problem. In adaptive optics, a guide laser beam is used to illuminate the atmosphere above the telescope. The measured distortion of the laser beam is used to distort the imaging mirror in the telescope (usually the mirror is segmented into a bunch of small independent sub-mirrors). The end result is that adaptive optics can essentially counter-act the atmospheric distortion, delivering crisp images from ground telescopes.
I would guess that adaptive optics produces better images (partly because it "keeps" all incident light by refocusing it properly, rather than letting a large percentage of image acquisitions be "blurry" and eventually thrown away), but adaptive optics is no doubt expensive. The technique presented in TFA seems simple enough that it could be added to just about any telescope, increasing image quality at a sacrifice in acquisition time.
Re: (Score:2)
Both are employed pretty heavily by advanced "Amateur" astronomers. I put amateur in quotes because people at the high end of the hobby may have setups costing $50,000 to $100,000 or more, going up to as much as people are willing to spend. There are several companies (http://www.sbig.com/ [sbig.com] for example) that specialize in producing imaging equipment and software for these setups. It's pretty amazing what these people are able to do.
I attended a lecture a year or two ago by a respected academic in adaptive optics (Chris Dainty [www.ucg.ie], for the curious). He described efforts to put together an AO kit for amateur astronomers. I think he said that he wasn't able to get it under a few thousand Euro. It's not a cheap science, for sure.
Re:Compared to adaptive optics? (Score:5, Informative)
Re: (Score:3, Informative)
No, they propose that it be used together with adaptive optics. The research that was done to produce this press release was actually done at the Mount Palomar observatory, which was completed in 1947 [caltech.edu] and most certainly does not feature adaptive optics.
From the article:
The technique could now be used to improve much larger telescopes such as those at the European Southern Observatory in Chile, or the Keck teles
Re: (Score:2)
The two techniques are unrelated; either one or both at the same time can be used to improve the images. Actually, the sample images from the original article were taken through a telescope (Palomar) using basic adaptive optics to improve the image before the "lucky" software even saw the data.
As you suggest, this also works with sub-sections of the image. I saw this same tech
Re: (Score:2)
Using this methodology, a large ground based telescope can easily achieve better imaging than the Hubble, a
You Too Can Get Lucky. (Score:5, Informative)
DIY [cam.ac.uk].
Re: (Score:2, Informative)
This is indeed no news to amateur astronomers. This technique has been used extensively by planetary imagers in recent years to take amazing photos of Jupiter, Mars and Saturn. The basic tools are a good webcam to capture AVI files and Registax to process the frames. Take a look at Damian Peach's best images [damianpeach.com].
As for pro, there is even an article in Wikipedia about it: Lucky imaging [wikipedia.org]: "Lucky imaging was first used in the middle 20th century, and became popular for imaging planets in the 1950s and 1960s (using c
Spider-sense (Score:5, Interesting)
You do it too (Score:2)
I wonder about the uncanny valley (Score:2)
The thing is, if you carefully cherry-pick your examples, and/or are allowed to hand-wave where any given example should fall, you can convincingly argue the uncanny valley effect. The problem comes when you anchor two examples that should, say, be in the valley, yet a third in the middle is not, even though, by the shape of the curve, the third should be there too.
For example, the FF movies were sup
Errors? (Score:2)
Re: (Score:2)
There are things this wouldn't be useful for. Mainly anything that might be cha
Re: (Score:2)
The principle is that by taking lots of pictures of the same thing, you can correct the error. The larger the sample you take, the closer you get to the true image. For error to be amplified you would almost need the same random dust-particle arrangement from the telescope to the edge of the atmosphere in a significant sample of the images, which is very unlikely.
Of course you probably understand that.
In answer to your actual qu
Dr. Mackay? (Score:3, Informative)
Many amateurs already do this (Score:5, Informative)
Look at the planetary images here [imeem.com] for my attempts at this technique.
Re: (Score:2)
However, it's worth noting that amateurs' results today are typically much better than those of professional astronomers 30 or even 20 ye
Email from the principal investigator (Score:2, Informative)
Mod parent up! VERY informative. (Score:2)
Eivind.
Comparison to hubble... (Score:5, Informative)
Probably they can push their technique harder than this initial image suggests (it was mainly comparing the "lucky" image with a conventional, blurry, ground-based image)... But I just thought it would be good to show Hubble's pictures alongside.
Re: (Score:2)
1. The Slashdot summary should say it's not as good as Hubble (HST); instead it mistakenly says better than Hubble.
2. The TFA-linked site should (chuckle) show Hubble pictures along with the other ground-based pictures.
God bless John Grunsfeld and the other NASA space walking astronauts who fix HST. Also the vast supporting cast for those missions.
Thanks for the update.
Jim
Re: (Score:2)
But my eyes tell me they haven't removed all atmospheric effects yet, and their words aren't c
Ugg, Background (Score:2)
50,000 times cheaper, so what (Score:2)
Not convinced by TFA (Score:5, Interesting)
Re: (Score:2)
The Hubble images probably resolve fainter objects, but the Lucky images are sharper. Sharpness means resolution of distinct objects is better. Hubble may see more while Lucky sees things sharper but misses faint objects. The big question for me is how good the Earth-based telescope is at picking up faint images, which appears to be Hubble's strength. The Hubble Telescope can peer at an object for hours at a time with an open aperture. A ground-based telescope cannot because the
Re: (Score:2)
Basically TFA is loaded. The photos they provide pale in comparison to Hubble shots. At the moment, the article is tripe. Let's see if things improve.
Let's see it beat Hubble at: (Score:3, Interesting)
http://hubblesite.org/ [hubblesite.org]
There are a number of excellent Hubble images of just about everything, from our solar system to the most distant galaxies.
I would put my money on Hubble, for two reasons.
First, the averaging algorithm is not without its flaws. It assumes that by averaging out a bunch of images you eliminate distortion. For this to work, you have to assume that the probability of a particular pixel landing in the right spot is higher than elsewhere, because the distortion is essentially random, and that may not be the case. If the distortion is completely random, averaging a set of images would essentially smear out a pixel that is being pushed around its "real" spot by the atmosphere, and you can actually see that: the corrected images still look muddy compared to their HST or even adaptive-optics counterparts.
Secondly, the atmosphere doesn't just distort light; it also filters it. You can use averaging to remove distortion "noise", but there's really no way to ascertain what information the atmosphere removed.
The bottom line is, yes, you can get some pretty good results with averaging software, but if you have money to spend, the best images are going to be space-based, and that's still going to cost a billion dollars. Given the promise the heavens hold for the advance of human understanding, to say nothing of their essentially infinite resources, one only hopes that policy makers will not be misled by the outrageous claim that you can get the best images from the ground. You can't. HST should not be thought of as an aberration made obsolete by adaptive optics or low-budget averaging; those techniques are a way of getting by until we can put larger and better visible-wavelength telescopes into space.
Imagine what a Mt. Palomar-sized mirror could do in space!
Re: (Score:2)
No, they don't assume that. Their assumption is that an average of a bunch of images selected because they are probably sharper than average will be sharper than an average of a totally random selection of images. And that is a sound assumption. The trick is in automatically selecting images that have a high probability of being sharper than average. A pers
Surprised this hasn't been asked yet here (Score:1, Redundant)
Interesting but picture quality unjustified (Score:5, Insightful)
The technique they're using, while interesting, needs more justification.
I'm wary when I see people doing any selection on random data, because of the problem of selection bias: throwing away the hundred results that don't match what they want and keeping the one that does. Just getting an image that seems plausible is not good enough.
Their quality measure [cam.ac.uk] isn't one I'd use. They should be comparing technique-plus-low-resolution-optics against high-resolution optics directly; that is, differencing images taken at the same time and seeing what differences there are. They may well have good reason for assuming it's all okay, but until somebody does that test they cannot assume they've removed all the variability the atmosphere introduces; there could be all sorts of hidden biases due to various atmospheric, molecular and statistical effects.
---
"Intellectual Property" is unspeak. All inventions are the result of intellect. A better name is ECI - easy copy items.
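The image-differencing test suggested above could be sketched like this (hypothetical data; a real comparison would first register the two images and match their plate scales):

```python
# Illustrative validation: difference a lucky-imaging result against a
# trusted high-resolution reference of the same field and report the RMS
# residual. A large residual would flag hidden biases in the technique.
import numpy as np

def rms_difference(lucky_img, reference_img):
    """Root-mean-square residual between two registered, same-scale images."""
    lucky = np.asarray(lucky_img, dtype=float)
    ref = np.asarray(reference_img, dtype=float)
    # Normalise total flux so the comparison is about structure, not exposure.
    lucky = lucky / lucky.sum()
    ref = ref / ref.sum()
    return float(np.sqrt(np.mean((lucky - ref) ** 2)))
```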
don't worry (Score:2)
Re: (Score:2)
Since we are doing science, is it a good idea to throw away nonconformist images as improper?
Are we not bringing our own bias to this? If we only look at what we expect to find and throw away the unexpected, wouldn't science take a hit?
Poor choice of demo image -- M13 much better (Score:2)
Common use with amateurs, but has issues (Score:4, Informative)
I'm curious, though, about how they deal with some of the "features" you get to see with this technique. It's *very* easy to stack a few hundred images, run Registax's sharpening filter, and get some interesting pictures of stuff that doesn't really exist. I'm not sure I really trust the fine detail in my photos; unless I see it in another shot taken a few hours later, it may well not be real.
Space-based telescopes aren't dead yet... (Score:3, Informative)
Does everything have to include jackassery? (Score:2)
and it's 50,000 times cheaper than Hubble
That's a bit of a cheap shot. Hubble has been in operation for 17 years and has been a vital research tool. The tech for this new technique is, well, NEW.
I invented the lucky telescope concept in 1995 (Score:4, Interesting)
Here is my original post on the sci.image.processing newsgroup.
My old email address is no longer active. The new one is geopiloot at mindspring.com (reduce the number of o's in "piloot" to one).
It was ironic that many people jumped out at the time to say it wouldn't work.
It does work, and it works well. In fact, most of the additive image processing now done by amateur astronomers everywhere using PC software is based on my invention, which I did not patent.
George Watson
From: George Watson (71360.2455@CompuServe.com)
Subject: virtual variable geometry telescope
Newsgroups: sci.image.processing
Date: 1995/12/11
Has anyone implemented a virtual variable geometry telescope using
only a CCD attached to a normal non variable telescope?
It would work like this:
Take extremely short duration images from the CCD at a frequency
faster than the frequency of atmospheric distortion (1/60 sec I have
read is the minimal needed timeslice for physically correcting
atmospheric distortion in real time, so maybe an exposure of 1/120 sec
would be short enough).
Choose via computer a high contrast image as a reference image.
Continue to take rapid short duration images and keep only the high
contrast ones that have minimal displacement/offset from the
reference image.
Sum each of those acceptable images to a storage that will become the
final image.
What you should end up with is a final image that has minimal
atmospheric distortion, because all the low contrast and
non-matching images will have been discarded.
Obviously you build an image over a longer period of time than with
real time optical correction but at perhaps lower cost.
Anyone know whether this has been proposed/done or researched?
--
George Watson
The opinions expressed here are those of the fingers
of George Watson only; not those of George Watson himself.
Please reply via this newsgroup. No Email unless requested,
Thanks.
Newsgroups: sci.space.policy
Date: 1995/12/30
Reinvention (Score:3, Insightful)
Re: (Score:2)
I still think too many people's ideas are lost because people too often want to stay on the main path.
For example, I myself thought (although surely not as the first person) about wireless self-organizing mesh networking (including car networks) a long time ago, back in the modem days, before it got popular/mainstream. People thought I was crazy.
Common practice among amateurs for years (Score:2)
Re:Yawn (Score:4, Informative)
Re: (Score:2)
As everyone knows (and for those who do not): infrared wavelengths are absorbed by atmospheric water vapor.
That keeps us toasty at night, but sadly blocks infrared observations at the same time.
-Hack
Re:Yawn (Score:4, Informative)
Far infrared is a different story, and you're absolutely correct there.
Re:Yawn (Score:4, Informative)
See here [noao.edu], for example, for more information.
There are wavelength ranges in the NIR where the atmosphere is indeed transparent (J,H and K bands, for example); but the atmosphere is opaque at most NIR wavelengths (and, even at those IR wavelengths where the atmosphere is transparent, the transmittance is lower than at visible or radio wavelengths). See here [caltech.edu] for more info.
Re: (Score:2)
I'm holding out for besting Hubble in ultraviolet.
Pass the SPF-100, wouldja?
Re: (Score:2)
The article is comparing the images produced and using the price as a basis. Aperture isn't
Blue Peter for non-Brits (Score:5, Informative)
using 'Blue Peter' technology
Blue Peter [bbc.co.uk] is a BBC children's show. "Blue Peter technology" is effectively something so simple a child could make it.
Re: (Score:3, Informative)
Not quite squinting, but still an eye trick
Re: (Score:2)
Well, I thought it was a funny comment, anyway.