Math to Crack Deep Impact Blurry Vision Problem

starexplorer writes "NASA announced that they believe they have a solution for the Deep Impact mission's blurry vision problem: math. Although the craft will still snap blurry pictures of the Tempel-1 comet, mathematical manipulation will help scientists clear up the images once they make their way back to Earth. A special report and viewing guide are also available at SPACE.com."
This discussion has been archived. No new comments can be posted.

  • OT (Score:5, Funny)

    by Anonymous Coward on Thursday June 09, 2005 @09:10PM (#12776413)
    Using the same words, I made a much better headline.

    "blurry vision math to impact deep crack problem"
  • i know about this (Score:4, Informative)

    by Neotrantor ( 597070 ) on Thursday June 09, 2005 @09:10PM (#12776414)
    It's a process called deconvolution, right? I did this as a project for sophomore year astronomy... which I believe involved asking on Slashdot about it.
  • I'm not sure if it was a Photoshop plugin or a standalone filter, but the filter was able to derive sharp pictures from the bokeh of photographs.

    Essentially, it calculated the ring of blur and interpolated the data and was able to resolve out-of-focus areas. The sample photos were either of gorillas or pandas. I'm sure someone will have a link.

    Very space opera.
    • Yeppers (Score:2, Informative)

      Deconvolution [princeton.edu].

      FTFA: The team will use a process, called deconvolution, to remedy the situation. Deconvolution is widely used in image processing and involves the reversal of the distortion created by the faulty lens of a camera or other optical devices, like a telescope or microscope.
    • by onemorechip ( 816444 ) on Thursday June 09, 2005 @10:14PM (#12776790)
      The sample photos were either of gorillas or pandas.

      If you couldn't tell, then it must not have worked very well.

    • by grammar fascist ( 239789 ) on Thursday June 09, 2005 @11:58PM (#12777361) Homepage
      I'm not sure if it was a Photoshop plugin or a standalone filter, but the filter was able to derive sharp pictures from the bokeh of photographs.

      ...and it's really not all that breakthrough-ish. Nearly anyone who's taken a signal processing class will have done this. The simplest version is an unsharp mask.

      Here's the basic idea: you assume some "spreading" of the data happened, and you assume its shape. Then you try to undo what happened - perform the inverse.

      There are two problems with this. First, the original convolution you assumed (that "spreading") is destructive to information. There exists no unique inverse mapping. You have to pick one, and hope that what it yields looks right.

      Second, without making some major assumptions (that signal processing people aren't usually keen to make) there is no way to differentiate between true signal and noise. The noise, along with the blurry edges, also gets sharpened. You can mitigate this somewhat with your choice of inverse mapping. Again, you pick something that looks right.

      They do have some prior information going into this - they know the equipment that took the pictures - but pretty much nothing they do will exactly restore the information that was lost. Math isn't magical enough to do that.

      For the hardcore:

      http://mathworld.wolfram.com/Deconvolution.html [wolfram.com]

      and follow the links from there. :)
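
      To make the "no unique inverse, and the noise gets sharpened too" point concrete, here is a minimal sketch in Python/NumPy (nothing to do with NASA's actual pipeline; the Gaussian PSF, noise level and regularization floor are all made-up values). It blurs a test image with a known PSF, adds noise, then divides by the PSF in the frequency domain, clamping the near-zero frequencies that would otherwise blow up:

      ```python
      import numpy as np

      def gaussian_psf(shape, sigma):
          """Centered Gaussian point-spread function, normalized to sum to 1."""
          y, x = np.indices(shape)
          cy, cx = shape[0] // 2, shape[1] // 2
          psf = np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2.0 * sigma ** 2))
          return psf / psf.sum()

      def naive_deconvolve(blurred, psf, eps=1e-3):
          """Divide by the PSF in the frequency domain (PSF image-sized, centered).

          Frequencies where the PSF response is tiny are exactly the ones the
          blur destroyed; dividing by them amplifies noise without limit, so
          the denominator is clamped with `eps`, a hand-picked value. This is
          the "pick something that looks right" step.
          """
          H = np.fft.fft2(np.fft.ifftshift(psf))
          B = np.fft.fft2(blurred)
          H_safe = np.where(np.abs(H) < eps, eps, H)
          return np.real(np.fft.ifft2(B / H_safe))

      # Toy demonstration: blur a random "scene", add noise, try to undo it.
      rng = np.random.default_rng(0)
      scene = rng.random((128, 128))
      psf = gaussian_psf(scene.shape, sigma=3.0)
      blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) *
                                     np.fft.fft2(np.fft.ifftshift(psf))))
      noisy = blurred + 0.01 * rng.standard_normal(scene.shape)
      restored = naive_deconvolve(noisy, psf)
      print("RMS error vs. original:", np.sqrt(np.mean((restored - scene) ** 2)))
      ```

      Shrinking eps sharpens more aggressively but also amplifies the added noise, which is exactly the trade-off described above.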
    • The link is here [logicnet.dk]. And it was Pandas (see bottom of page) :)

      I've downloaded this FREE software myself and had a play. It's quite impressive. It seems to work even better than the expensive FocusFixer [fixerlabs.com] plugin that's available for Photoshop.
  • by HidingMyName ( 669183 ) on Thursday June 09, 2005 @09:16PM (#12776448)
    The early Hubble pictures suffered from optical distortion due to a miscalculation on what the shape of the mirror would be in orbit, and NASA also fixed that problem using digital image filtering techniques to reconstruct a clear image. The key was that they had a precise model of the distortion and that it was invertible.
    • Errr... They did do a $$$ special mission to retrofit secondary optics to make the pictures sharp again, so I guess the calculation solution only worked marginally at best.
      Or maybe they didn't have the computing power back then? Hmmm...
    • The early Hubble pictures suffered from optical distortion due to a miscalculation on what the shape of the mirror would be in orbit, and NASA also fixed that problem using digital image filtering techniques to reconstruct a clear image. The key was that they had a precise model of the distortion and that it was invertible.

      While there may have been an issue with that (which I've never heard of), the infamous Hubble mirror problem was that Perkin-Elmer built the mirror wrong due to a flawed instrument, an

      • I originally posted from memory and didn't take the time to do an on-line search. While there may have been an issue with that (which I've never heard of), the infamous Hubble mirror problem was that Perkin-Elmer built the mirror wrong due to a flawed instrument, and ignored the other instruments that were telling them it wasn't the right shape. Some details on the measuring error you mention are given here [uoguelph.ca] and a mention of a misplaced measuring rod cap [chron.com] is discussed here.
    • NASA had standard image signal checking software which not only detected this problem but allowed the mathematical model for the deconvolution. It's been retrofitted and utilized elsewhere.
  • by Anonymous Coward on Thursday June 09, 2005 @09:17PM (#12776461)
    Although the craft will still snap blurry pictures of the Tempel-1 comet, mathematical manipulation will help scientists clear up the images once they make their way back to Earth.

    Scientists will also use Photoshop to remove any zits, butt dimples, and eyebags the comet may be suffering from.
  • Hmm... (Score:5, Funny)

    by iamdrscience ( 541136 ) on Thursday June 09, 2005 @09:17PM (#12776463) Homepage
    Math in space you say? What will they think of next?!
  • by dtfinch ( 661405 ) * on Thursday June 09, 2005 @09:21PM (#12776494) Journal
    Deconvolution has been around for many decades.
  • by LiquidCoooled ( 634315 ) on Thursday June 09, 2005 @09:29PM (#12776547) Homepage Journal
    Tilt your head to the side and Squint a bit!
    • Tilt your head to the side and Squint a bit!

      Actually, I got an excerpt from the manual:

      1) In order to properly view the affected images, the user must initially realign the ocular viewing apparatus. The vector between the two ocular elements should be nonperpendicular to the local ground plane. After step B has concluded, a conditional feedback decision loop can be entered, whereby the user may adjust the angle between the vector between the two ocular elements and the local ground plane (LGP) for op

  • The solution to my blurry vision problem is to keep the number of vodka-sodas in the single digits.

    Damn I love coding loaded: Best. Comments. Ever.
  • iEyes? (Score:4, Interesting)

    by Doc Ruby ( 173196 ) on Thursday June 09, 2005 @09:35PM (#12776588) Homepage Journal
    Why don't we have adaptive image processing "glasses"? Can't some human vision problems be corrected by preprocessing an image, to "antidistort" it? The inverse distortion from the vision defect would return the image to "normal". Such a device could be recalibrated with test targets, so a wearer wouldn't need to consume valuable optometrist time for revised prescriptions. With some work, they could become light enough that they'd rival lenses, or even surpass them in some real coke-bottle cases. And we'd have a huger market for info display goggles.
    • Re:iEyes? (Score:2, Insightful)

      by Anonymous Coward
      Umm, they're called contact lenses and glasses. They distort an image which your eye antidistorts back to the original image.
      • Yeah...but they're pretty much only good for lack-of-focus problems. Astigmatism can be fixed too, but there are several other image-distortion disorders that aren't so easily remedied. I have keratoconus [kcenter.org], a disorder where the cornea thins and begins to bulge out, causing severe visual distortion that can't be corrected with glasses. Rigid contacts can help by providing a smooth surface to the front of the eye, but after about three years in them, my eyes dried out something awful and I don't usually wear t
    • AFAIK this is how adaptive LASIK works - they pass light through your eye to measure the distortions and then calculate how to do the laser shaping so your cornea (sp?) will compensate for the shape of your eye. Apparently, it works - many of my friends competing for military pilot training slots have had this done and report 20/15 or 20/10 after having glasses, since the adaptive optics eliminate the eye's natural imperfections as well.
    • I remember reading about such a setup in New Scientist a year or two ago. The equipment continually monitored the shape of the eye, which apparently changes multiple times per second, and corrected for it.

      The upshot was that they could take just about anyone, apply this equipment, and they ended up with vision 4 times sharper than "perfect" - (20/5?)

      The downside was that the equipment took up a large lab bench... Hopefully one day they'll be able to shrink it down.

      We're already close to glasses that can
  • You know, the physicist [msn.com] who as a kid in the neighborhood could "fix radios by thinking."
    • Holy shit! Did you just post a link to a Microsoft owned website?!?!


      Just kidding, but here's another link to what seems to be a /. friendly site:

      Richard Feynman [wikipedia.org]

  • If NASA were smart and hired poets, they would just look at the blurry images and say, "Interesting".
  • How does this math work? All the article really tells me is that it's math.

    They also claim "deconvolution" can improve the resolution of a good telescope. Why? Where's the extra data come from?

    What the heck is this?
    • In a nutshell:

      Imagine an out-of-focus picture of a point of light. The image will be a fuzzy circle or ring (the latter if the lens is catadioptric [wikipedia.org]).

      Now take a picture of an entire scene, this time in focus. If you convolve (a mathematical process related to multiplication) the first fuzzy image with this sharp image, you would get an image that looks like you had taken the picture through the original fuzzy lens. It's as if every single pixel in the good image were smudged into a pattern like the first
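
      A rough sketch of that forward model in Python/SciPy (the disk-shaped PSF is just a stand-in for the "fuzzy circle" of an out-of-focus lens, and all the numbers are made up): every point of light in the sharp scene gets smeared into a copy of the PSF, and the blurry picture is the sum of all those smears, which is exactly what convolution computes.

      ```python
      import numpy as np
      from scipy.signal import fftconvolve  # FFT-based convolution

      def disk_psf(radius):
          """The 'fuzzy circle' an out-of-focus lens makes of one point of light."""
          size = 2 * radius + 1
          y, x = np.indices((size, size)) - radius
          psf = ((x ** 2 + y ** 2) <= radius ** 2).astype(float)
          return psf / psf.sum()  # normalized: blur spreads light, it doesn't create it

      # A sharp "scene": a few bright points on a dark background.
      sharp = np.zeros((64, 64))
      sharp[16, 16] = sharp[32, 48] = sharp[50, 20] = 1.0

      # Convolving the sharp scene with the PSF smears every point into the
      # fuzzy circle, the same thing the faulty optics do to the real image.
      blurry = fftconvolve(sharp, disk_psf(radius=5), mode="same")
      print(blurry.max(), blurry.sum())  # peaks drop, total light stays ~3.0
      ```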

    • In a nutshell (i'm no good at math), convoluting data means transforming it with a function; deconvoluting is the reverse process. If you know how a faulty lens, f.ex., messes with the image captured by a camera, you can create a mathematical model for it and then reverse the process.
      • There are clear limitations with deconvolution. First off, it results in a non-unique solution for all but the simplest cases. That means that if you have two point sources in an image that now overlap due to the size of the PSF (basically, all normal images except for a few star fields), you can't really determine where the energy actually belongs. Plus, any noise in the image throws off the deconvolution. There's also a number of different methods, and you need to pick the right one for the job. Also
      • For what it's worth, the correct conjugations are "convolving" and "deconvolving", even though the process is convolution and deconvolution.
    • If you can estimate the blur, or, let's say, the point spread function (PSF) of the blur, deconvolution is the application of the inverse of that blur.
      This is not always a simple operation. Most real-world blur PSFs will not be invertible, or not easily so, and the inverse operation will be unstable (lead to "blowing up" of the function). Conditioning may solve some such problems.
      Iterative techniques are useful in many cases and there are many different techniques to do this.
      Wiener filters are co
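
      For what it's worth, here is a minimal Wiener-style (conditioned) inverse in Python/NumPy; the Gaussian PSF and the constant K are assumed values, not anything from the actual mission. Instead of dividing by the blur's frequency response outright, each frequency is weighted by conj(H) / (|H|^2 + K), so components the blur nearly destroyed are suppressed rather than blown up:

      ```python
      import numpy as np

      def wiener_deconvolve(blurred, psf, K=0.01):
          """Frequency-domain Wiener filter (PSF must be image-sized and centered).

          A plain inverse filter divides by H and explodes where |H| is small;
          the Wiener filter multiplies by conj(H) / (|H|^2 + K) instead, which
          rolls off toward zero at those frequencies. K acts like a
          noise-to-signal ratio, here just a hand-tuned constant.
          """
          H = np.fft.fft2(np.fft.ifftshift(psf))
          B = np.fft.fft2(blurred)
          return np.real(np.fft.ifft2(np.conj(H) / (np.abs(H) ** 2 + K) * B))

      # Toy usage with an assumed Gaussian PSF defined on the full image grid.
      rng = np.random.default_rng(1)
      scene = rng.random((128, 128))
      y, x = np.indices(scene.shape)
      psf = np.exp(-((y - 64) ** 2 + (x - 64) ** 2) / (2.0 * 2.0 ** 2))
      psf /= psf.sum()
      blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) *
                                     np.fft.fft2(np.fft.ifftshift(psf))))
      noisy = blurred + 0.005 * rng.standard_normal(scene.shape)
      restored = wiener_deconvolve(noisy, psf)
      ```

      Raising K trades sharpness for noise suppression; that trade-off is the conditioning mentioned above.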
  • by Chris Snook ( 872473 ) on Thursday June 09, 2005 @09:56PM (#12776705)
    Years ago I tried to warn people that Tempel 1 was an alien monitoring post, and that we needed to study it to discover their origins so we could be vigilant for their return. I was locked up for years. Now that I've escaped I find that they're smashing a rocket into it! While this at least proves I wasn't crazy, it's not going to help anything. Any civilization that has the technology to maintain a link to an outpost in a remote star system without it being detected by civilian scientists probably has the ability to defend itself against what it would probably perceive as aggression. While I'd like to believe that their advances have made them peaceful and even merciful, recent events on Earth suggest that the best we can hope for is millennia of enslavement.
  • "The table-sized, 820-pound (372-kilogram) impactor is scheduled to smash into the comet's nucleus at 23,000 mph (37,000 kilometers per hour)"
  • enhance...enhance...
  • by mnmn ( 145599 ) on Thursday June 09, 2005 @09:59PM (#12776724) Homepage
    "We will alter images to make them clear"

    -NASA

    My answer: no WAY! Really?

    After spending the millions and waiting for years, isn't it a LITTLE apparent that work will be done on images to make them clear? Does it require a press conference to announce the very apparent?

  • by Andyvan ( 824761 ) on Thursday June 09, 2005 @10:06PM (#12776751)
    They're always able to make blurry photographs sharp, and it only takes about 10 seconds...

    -- Andyvan
  • by Isldeur ( 125133 ) on Thursday June 09, 2005 @10:13PM (#12776785)

    Oh give it up. This is so OLD. I've seen this "picture enhancement" being used in the movies all the time. You know, when there's this blurry picture and then suddenly it's "enhanced" and is crystal-clear?

    Or on that Alias documentary where the CIA didn't have an audio feed so they had this program that would decipher words by lip reading at this obscene angle from a camera on the ceiling?? This stuff is so easy these days...

    You'd think NASA would have this down pat... Maybe it's the budget cuts...

  • I found that if you take the film out of the camera after the picture is taken, and then either blow on it or flop the picture back and forth, it will make it develop far quicker and much clearer.
    Maybe we could get some of the aliens from area 51 to hitchhike onboard and take care of that for us!
  • In related news... (Score:5, Informative)

    by Lisandro ( 799651 ) on Thursday June 09, 2005 @10:32PM (#12776900)
    ... witty scientists have found out math can also be used to design stuff, balance your checkbook, convert inches to meters and solve other everyday problems unsolvable without its magic!

    PS: As others pointed out, deconvolution [wikipedia.org] (which is the process used here) is not a new concept. Far from it, in fact.
    • by Dunbal ( 464142 )
      The problem is, to use math to

      a) balance your checkbook, you sort of have to know how much was in your account to start with, and how much you spent or have left;

      b) convert inches to meters, you have to have some idea of how many inches you want to convert, as well as remember the conversion factor.

      Now when you're taking a picture of your mom, and it turns out blurry, you can use any mathematical process to alter the image to your liking, and you will stop when the image SUBJECTIVELY looks good to you -
      • Actually, it doesn't work that way... what makes deconvolution work is that the anomaly is predictable - you can model it. Then, you can reverse the process so you can get a reasonable amount of information out of the original image.

        Think of it this way; I send you a set of blurry pictures. You think they're no good; but now I tell you I only applied a Photoshop blur filter with a value of "4" to them and then messed with the colors in a certain way. With that, you could reverse the process fairly we
        • The problem is I can't tell you if they're good or not because I have no idea. I don't know what the data is supposed to look like. Oh I have a fair idea, but if I knew exactly what it looks like there sort of wouldn't be any point in sending a probe there now would there.

          No, think of it in an abstract manner. You are collecting data, with a camera. Now your camera is somehow flawed for whatever reason. So your data collection process has a degree of error.

          You are telling me that you can "compensate" for
          • Not really. If you know exactly how the camera's flawed, then you can correct them on the returned data and get a perfect image, as if it wasn't broken. Thing is, we don't, and deconvoluting uses an approximate model - which works well, but it's not as good as the real thing. In that sense, you're right, but information gathered with this process is still useable.
            • Not really. If you know exactly how the camera's flawed, then you can correct them on the returned data and get a perfect image, as if it wasn't broken. Thing is, we don't, and deconvoluting uses an approximate model - which works well, but it's not as good as the real thing. In that sense, you're right, but information gathered with this process is still useable.

              Still useable, but not complete. Even if you know exactly how the camera is flawed, your inverse process still doesn't have a unique solution. It
              • And you say it's possible to pick one and declare that it's correct?

                Nope, never did. Like you said, most convolutions are not bijective (that is, there will be overlapping results), hence deconvolution will never be able to give the original result, 100% correct.

                Again, for a number of practical purposes, deconvolution can enhance a picture enough so it becomes useable. Within its limitations.
  • by kiddailey ( 165202 ) on Thursday June 09, 2005 @11:10PM (#12777147) Homepage

    Will NASA be allowed to use a calculator [slashdot.org] to solve the math problem? ;)
  • by cahiha ( 873942 )
    I worry when people say things like "mathematics solved problem X", because people often think of pure mathematicians and mathematics departments.

    Deconvolution was pioneered by mathematicians like Wiener nearly a century ago, but academic fields have shifted and split since then. These days, this kind of work would more likely be carried out in an applied math, electrical engineering, statistics, or computer science department than a pure mathematics department (some mathematics departments cover applied
  • by Ed Avis ( 5917 ) <ed@membled.com> on Friday June 10, 2005 @03:37AM (#12778139) Homepage
    You can try this at home with the Gimp: Refocus [sourceforge.net].
  • Deconvolution (Score:2, Interesting)

    by Arthur B. ( 806360 )
    First of all, blurring is not a fully reversible process. If you convolute two signals, the smoothness of the convolution is essentially the smoothness of the smoothest signal (can you say that rapidly?). Smoothing means convoluting with a smooth kernel, for example a Gaussian (Gaussian blurring). If you deconvolute it you will sharpen the image but keep the smoothness, so information IS lost.

    Now it can give good results... the most common deconvoluting filter is <a href="http://en.wikipedia.org/wiki/W
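
    To see the "information IS lost" point numerically, here is a small 1-D sketch in Python/NumPy (the step signal, the Gaussian width and the iteration count are all made-up values): a sharp step is blurred with a Gaussian and then deconvolved with a Richardson-Lucy style iteration. The result ends up noticeably sharper than the blurred signal, but the edge never comes back exactly.

    ```python
    import numpy as np

    def gaussian_kernel(sigma, radius):
        """1-D Gaussian smoothing kernel, normalized to sum to 1."""
        x = np.arange(-radius, radius + 1)
        k = np.exp(-x ** 2 / (2.0 * sigma ** 2))
        return k / k.sum()

    def richardson_lucy_1d(observed, psf, iterations=30):
        """Iterative (Richardson-Lucy style) deconvolution of a 1-D signal.

        Each pass re-blurs the current estimate, compares it with the observed
        data, and nudges the estimate toward agreement. Frequencies the blur
        wiped out can only be guessed at, never truly recovered.
        """
        estimate = np.full_like(observed, observed.mean())
        psf_mirror = psf[::-1]
        for _ in range(iterations):
            reblurred = np.convolve(estimate, psf, mode="same")
            ratio = observed / np.maximum(reblurred, 1e-12)
            estimate = estimate * np.convolve(ratio, psf_mirror, mode="same")
        return estimate

    # Made-up test: a sharp step, smoothed by a Gaussian blur.
    sharp = np.concatenate([np.full(50, 1.0), np.full(50, 5.0)])
    psf = gaussian_kernel(sigma=3.0, radius=10)
    blurred = np.convolve(sharp, psf, mode="same")
    restored = richardson_lucy_1d(blurred, psf)
    print("mean error, blurred: ", np.abs(blurred - sharp).mean())
    print("mean error, restored:", np.abs(restored - sharp).mean())  # smaller, but not zero
    ```

    (scikit-image ships a 2-D version of the same idea in skimage.restoration.richardson_lucy, for anyone who wants to try it on real photos.)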

    • Now the question is, if deblurring can be performed with deconvolution, how can my brain not learn to do it! After all, my eyes are just unfocused so the compensation created by my lenses could be performed by my brain...


      What? Your brain doesn't do that? I thought everyone's did.
  • Try it yourself! (Score:4, Informative)

    by nmg196 ( 184961 ) * on Friday June 10, 2005 @07:16AM (#12778709)
    You know in films when they get a really blurry satellite image, and some hero guy goes "can we enhaance thaaat?". So some geek clicks a button and it goes a lot sharper, and you're thinking, "if only that worked in real life". Well it does, and you can try it yourself. Here is some free software [logicnet.dk] that allows you to have a play and "enhance" all those blurry pics you have lying around.

    I've tried this myself and it works quite well. I tried it on a picture I took of the moon with a 400mm lens and it made quite an impressive difference.
  • Is the idea of finding math. :) I love the article summary that makes it sound like NASA just sort of lobbed the thing into space, found it had blurry vision, and started looking in an old drawer in the corner of the lab: "History. Nope, not helpful. Biology? Nope, no use. Psychology? We'd better send that one to the public relations dept. Math. Hey, that might be a good idea..."
  • That is not math. It is digital signal processing. Math is about proofs of theorems. Digital signal processing is what the name describes ;) I hate it when they won't give credit for the field that really researches these problems.
  • Oh. I thought it said METH to Crack... And the headline still made sense.
