
Recovering Data From Noise

Posted by kdawson
from the sparse-world-after-all dept.
An anonymous reader tips an account up at Wired of a hot new field of mathematics and applied algorithm research called "compressed sensing" that takes advantage of the mathematical concept of sparsity to recreate images or other datasets from noisy, incomplete inputs. "[The inventor of CS, Emmanuel] Candès can envision a long list of applications based on what he and his colleagues have accomplished. He sees, for example, a future in which the technique is used in more than MRI machines. Digital cameras, he explains, gather huge amounts of information and then compress the images. But compression, at least if CS is available, is a gigantic waste. If your camera is going to record a vast amount of data only to throw away 90 percent of it when you compress, why not just save battery power and memory and record 90 percent less data in the first place? ... The ability to gather meaningful data from tiny samples of information is also enticing to the military."
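The l1-minimization ("basis pursuit") at the heart of compressed sensing can be sketched in a few lines. The following is an illustrative toy, not the researchers' actual code; the sizes and the random Gaussian sensing matrix are arbitrary choices for demonstration:

```python
# Toy basis-pursuit sketch of compressed sensing: recover a 3-sparse
# length-60 signal from only 30 random linear measurements by
# minimizing the l1 norm subject to matching those measurements.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 60, 30, 3                      # signal length, measurements, nonzeros
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.uniform(1.0, 2.0, size=k)
A = rng.standard_normal((m, n))          # random sensing matrix
b = A @ x_true                           # the m measurements we actually record

# min ||x||_1  subject to  A x = b, written as a linear program with
# x = u - v and u, v >= 0 (linprog's default variable bounds).
c = np.ones(2 * n)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=b)
x_hat = res.x[:n] - res.x[n:]
```

Despite recording only half as many numbers as the signal has entries, the l1 solution typically matches `x_true` to solver precision, which is the point the summary is gesturing at.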
  • Why not... (Score:5, Insightful)

    by jbb999 (758019) on Tuesday March 02, 2010 @09:19AM (#31328778)

    If your camera is going to record a vast amount of data only to throw away 90 percent of it when you compress, why not just save battery power and memory and record 90 percent less data in the first place? ...

    Because it's hard to know in advance which data is needed to produce a photograph that still looks good to a human, and pushing that computing power down to the camera sensor, where power is more limited than in a computer, is unlikely to save either time or power.

  • by Chrisq (894406) on Tuesday March 02, 2010 @09:25AM (#31328826)
    From TFA

    The algorithm then begins to modify the picture in stages by laying colored shapes over the randomly selected image. The goal is to seek what’s called sparsity, a measure of image simplicity.

    The thing is, in a medical image, couldn't that actually remove a small growth or lesion? I know the article says:

    That image isn’t absolutely guaranteed to be the sparsest one or the exact image you were trying to reconstruct, but Candès and Tao have shown mathematically that the chance of its being wrong is infinitesimally small.

    But how often do analyses like this make assumptions about the data, along the lines of "you are unlikely to get a small disruption in a regular shape, and if you do, it is not significant"?

    On the bright side, when Moore's law allows real-time processing we can look forward to night-vision cameras which really are "as good as daylight", and for that sort of application the odd distortion really won't matter so much.

  • Re:Why not... (Score:5, Insightful)

    by Chrisq (894406) on Tuesday March 02, 2010 @09:28AM (#31328856)
    I think you are missing the point: throwing away 90% of the image was a demonstration of this algorithm's capabilities. You would use it where you have only managed to capture a small amount of data, not capture the lot and then throw away 90%.
  • Demo image (Score:4, Insightful)

    by ChienAndalu (1293930) on Tuesday March 02, 2010 @09:29AM (#31328864)

    I seriously doubt that the Obama demo image is real. There is no way that the teeth and the little badge on his jacket could be reproduced without any visual artifacts being created.

  • by Yvanhoe (564877) on Tuesday March 02, 2010 @09:30AM (#31328872) Journal
    Exactly. This algorithm neither creates absent data nor infers it; it just makes the uncertainties it has "nicer" than the usual smoothing does.
  • by damn_registrars (1103043) <damn.registrars@gmail.com> on Tuesday March 02, 2010 @09:58AM (#31329140) Homepage Journal
    Did we really need to refer to it as CS in the summary? A quick glance at the summary could lead one to think that this guy invented Computer Science, rather than the correct Compressed Sensing... In the summary of an article that is concerned (in part) with maintaining information after compression, we lost quite a bit of information by abbreviating the name of his technique.
  • Re:Questions... (Score:3, Insightful)

    by azaris (699901) on Tuesday March 02, 2010 @09:59AM (#31329158) Journal

    Does this only apply to image data, or will we be able to use this to clean up other databases? Will it work with sampled sounds? Names and addresses and inventory?

    Of course not. It's not magic. There are certain assumptions that can be made about most real-life images, mainly that they have small total variation. That means they have large areas of near-constant intensity/color distribution separated by interfaces with large jumps (like a cartoon image would have).

    Though this method uses the l_1 norm and not total variation.

    More importantly, HOW does it work?

    See here [arxiv.org].
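The "small total variation" assumption in the comment above can be made concrete with a toy signal (the specific numbers are arbitrary):

```python
# A cartoon-like piecewise-constant signal has tiny total variation,
# while a noisy version of the same signal has a much larger one.
import numpy as np

def total_variation(s):
    """Sum of absolute differences between neighboring samples."""
    return float(np.sum(np.abs(np.diff(s))))

piecewise = np.array([0.0] * 10 + [5.0] * 10)    # one jump of height 5
rng = np.random.default_rng(2)
noisy = piecewise + rng.standard_normal(20)      # same signal plus noise
```

Here `total_variation(piecewise)` is exactly 5.0 (the single jump), while the noisy copy accumulates a contribution at every sample, which is why a TV or l1 penalty pulls a reconstruction toward the cartoon-like version.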

  • by ceoyoyo (59147) on Tuesday March 02, 2010 @10:04AM (#31329206)

    The description of the algorithm in the article is quite poor. To reconstruct an MR image you effectively model it with wavelet basis functions, subject to some constraints: (a) the wavelet-domain representation should be as sparse as possible; (b) the Fourier coefficients you actually acquired (MR is acquired in the Fourier domain, not the image domain) have to match; and usually (c) the image should be real. You often also require that the total variation of the image be as low as possible.

    Since the image is acquired in the Fourier domain, every measurement you make contains information about all the pixels in the image. For reasonable* underacquisition, CS can produce a perfectly reconstructed image.

    * the exact limits of "reasonable" are still under investigation, but typically you only need to acquire about a quarter of the data to be pretty much guaranteed you'll be able to get a perfect reconstruction.
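The reconstruction loop described above can be sketched as follows, with one assumed simplification: the signal is taken to be sparse in its own domain rather than in a wavelet domain, and a hard k-sparsity step stands in for the wavelet/TV machinery:

```python
# Alternate between (a) keeping only the k largest entries (sparsity)
# and (b) re-imposing the Fourier coefficients actually acquired
# (data consistency), starting from the zero-filled estimate.
import numpy as np

rng = np.random.default_rng(1)
n, k = 128, 3
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.uniform(1.0, 2.0, size=k)

mask = np.zeros(n, dtype=bool)               # acquire ~a quarter of k-space
mask[rng.choice(n, size=n // 4, replace=False)] = True
measured = np.fft.fft(x_true) * mask         # the coefficients we actually have

x0 = np.real(np.fft.ifft(measured))          # zero-filled starting estimate
x = x0.copy()
for _ in range(500):
    small = np.argsort(np.abs(x))[:-k]       # all but the k largest entries
    x[small] = 0.0                           # (a) enforce sparsity
    X = np.fft.fft(x)
    X[mask] = measured[mask]                 # (b) enforce the acquired data
    x = np.real(np.fft.ifft(X))
```

With these toy parameters the loop typically converges to the exact signal from a quarter of the Fourier data; an actual MR reconstruction would soft-threshold wavelet coefficients and penalize total variation instead of the hard k-sparsity step.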

  • Re:Demo image (Score:1, Insightful)

    by Anonymous Coward on Tuesday March 02, 2010 @10:32AM (#31329548)

    It absolutely could be; just read the article: "Eventually it creates an image that will almost certainly be a near-perfect facsimile of a hi-res one."!

  • Not smoothing (Score:5, Insightful)

    by nten (709128) on Tuesday March 02, 2010 @11:08AM (#31329996)

    The article was a bit poor. The data sets aren't really incomplete in most cases; they only seem that way from a traditional standpoint. The missing samples often contain absolutely no information, in which case the original image/signal can be reconstructed perfectly. In brief, Nyquist is a rule about sampling non-sparse data, so if you rotate your sparse data into a basis in which it is non-sparse, and you satisfy the Nyquist rule in that basis (though not in the original one), you are still fine.

    I like this link better: l1 magic [caltech.edu]
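The basis-rotation point above, in miniature: a single spike is maximally sparse in the sample domain, yet its Fourier-transform magnitude is perfectly flat, so every frequency-domain measurement carries information about it:

```python
# A 1-sparse signal (one spike) spreads evenly across the Fourier
# basis: its DFT has unit magnitude at every frequency.
import numpy as np

n = 64
spike = np.zeros(n)
spike[17] = 1.0                          # 1-sparse in the sample domain
spectrum = np.abs(np.fft.fft(spike))     # magnitude ~1.0 at every frequency
```

That spreading is exactly why random frequency-domain samples, each well below the spike's "Nyquist rate" in the original domain, still pin the spike down.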

  • Re:Why not... (Score:1, Insightful)

    by Anonymous Coward on Tuesday March 02, 2010 @11:47AM (#31330512)

    what's wrong with killing people? There are too many here anyway, you and me included...

  • Re:Why not... (Score:2, Insightful)

    by Bigjeff5 (1143585) on Tuesday March 02, 2010 @03:09PM (#31333658)

    I just want to point out that everything tied to the government is dead weight. The military is one of the few truly necessary endeavors the government pursues that actually helps the economy. It doesn't do this by adding to the economy; far from it, it is still quite a drain. However, without a stable government and a strong military to protect against outside forces, the economy could not exist in any stable way. Look at countries like Haiti that are in constant uprising to see what I mean. Earthquake notwithstanding, their economy could never gain a foothold because the government and military are unstable.

    The majority of the rest of what the government does, however, just drains our economy and adds little to nothing of benefit (or at least the gain is far overshadowed by the cost).
