Recovering Data From Noise 206
An anonymous reader tips an account up at Wired of a hot new field of mathematics and applied algorithm research called "compressed sensing" that takes advantage of the mathematical concept of sparsity to recreate images or other datasets from noisy, incomplete inputs. "[The inventor of CS, Emmanuel] Candès can envision a long list of applications based on what he and his colleagues have accomplished. He sees, for example, a future in which the technique is used in more than MRI machines. Digital cameras, he explains, gather huge amounts of information and then compress the images. But compression, at least if CS is available, is a gigantic waste. If your camera is going to record a vast amount of data only to throw away 90 percent of it when you compress, why not just save battery power and memory and record 90 percent less data in the first place? ... The ability to gather meaningful data from tiny samples of information is also enticing to the military."
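For readers who want the idea in concrete terms, here is a minimal sketch (not from the article; the names and sizes are illustrative, assuming numpy) of the measurement model compressed sensing relies on: instead of recording everything and compressing afterwards, you record a small number of random linear combinations of a signal that is sparse.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m, k = 256, 64, 8        # signal length, measurements kept, nonzero entries
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)  # a k-sparse signal

Phi = rng.standard_normal((m, n)) / np.sqrt(m)  # random sensing matrix
y = Phi @ x                                     # only m numbers are recorded

print(y.shape)  # (64,) -- a quarter of the raw data is ever stored
```

Recovering x from y is the hard part; that is what the l_1 minimization discussed in the comments does.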
Why not... (Score:5, Insightful)
Because it's hard to know, at capture time, which data is needed and which isn't to produce a photograph that still looks good to a human; and pushing that computing power down to the camera sensor, where power is far more limited than on a computer, is unlikely to save either time or energy.
I am a bit worried about the "fill in the shapes" (Score:4, Insightful)
The algorithm then begins to modify the picture in stages by laying colored shapes over the randomly selected image. The goal is to seek what’s called sparsity, a measure of image simplicity.
The thing is in a medical image couldn't that actually remove a small growth or lesion? I know the article says:
That image isn’t absolutely guaranteed to be the sparsest one or the exact image you were trying to reconstruct, but Candès and Tao have shown mathematically that the chance of its being wrong is infinitesimally small.
but how often do analyses like this make assumptions about the data, such as that you are unlikely to get a small disruption in a regular shape, and that if you do, it is not significant?
On the bright side, once Moore's law allows real-time processing we can look forward to night-vision cameras that really are "as good as daylight", and for that sort of application the odd distortion really won't matter so much.
Re:Why not... (Score:5, Insightful)
Demo image (Score:4, Insightful)
I seriously doubt that the Obama demo image is real. There is no way that the teeth and the little badge on his jacket could have been reproduced like that without any visual artifacts being created.
Re:I am a bit worried about the "fill in the shape (Score:5, Insightful)
Holy Bad Acronym Batman (Score:4, Insightful)
Re:Questions... (Score:3, Insightful)
Does this only apply to image data, or will we be able to use this to clean up other databases? Will it work with sampled sounds? Names and addresses and inventory?
Of course not. It's not magic. There are certain assumptions that can be made about most real-life images, mainly that they have low total variation. That means they have large areas of near-constant intensity/color separated by interfaces with large jumps (like a cartoon image would have).
Though this method uses the l_1 norm and not total variation.
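To make the parent's point concrete, here is a hedged sketch of l_1-norm recovery (basis pursuit: minimize ||x||_1 subject to Ax = y), recast as a linear program via the standard split x = u - v with u, v >= 0. Everything here is illustrative and assumes numpy and scipy; it is not Candès and Tao's code.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """Solve min ||x||_1 s.t. Ax = y as an LP over [u; v] with x = u - v."""
    m, n = A.shape
    c = np.ones(2 * n)                 # objective: sum(u) + sum(v) = ||x||_1
    res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y,
                  bounds=(0, None), method="highs")
    return res.x[:n] - res.x[n:]

rng = np.random.default_rng(1)
n, m, k = 60, 30, 4                    # half the samples, only 4 nonzeros
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)

x_hat = basis_pursuit(A, A @ x_true)
print(np.allclose(x_hat, x_true, atol=1e-5))
```

With this few nonzeros relative to the number of measurements, the l_1 solution typically coincides exactly with the true sparse signal, which is the "infinitesimally small" failure probability the article alludes to.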
More importantly, HOW does it work?
See here [arxiv.org].
Re:I am a bit worried about the "fill in the shape (Score:3, Insightful)
The description of the algorithm in the article is quite poor. To reconstruct an MR image you effectively model it with wavelet basis functions, subject to some constraints: a) the wavelet domain should be as sparse as possible, b) the Fourier coefficients you actually acquired (MR is acquired in the Fourier domain, not the image domain) have to match, and usually c) the image should be real. You often also require that the total variation of the image be as low as possible.
Since the image is acquired in the Fourier domain, every measurement you make contains information about all the pixels in the image. For reasonable* under-acquisitions, CS can produce a perfectly reconstructed image.
* the exact limits of "reasonable" are still under investigation, but typically you only need to acquire about a quarter of the data to be pretty much guaranteed you'll be able to get a perfect reconstruction.
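The constrained reconstruction described above can be sketched in toy form. This is an illustrative stand-in, not real MR code: the "image" is a 1-D signal that is sparse directly (standing in for the wavelet domain), a quarter of its Fourier coefficients are "acquired", and the sparsest real signal matching them is found with the usual l_1 linear program (x = u - v, u, v >= 0), assuming numpy and scipy.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
n, k = 64, 3
x_true = np.zeros(n)                          # a toy "image", sparse as-is
x_true[rng.choice(n, k, replace=False)] = rng.random(k) + 0.5

F = np.fft.fft(np.eye(n))                     # DFT matrix
rows = rng.choice(n, 16, replace=False)       # acquire 16 of 64 "k-space" lines
A = np.vstack([F[rows].real, F[rows].imag])   # real-valued measurement matrix
y = A @ x_true                                # the under-acquired data

c = np.ones(2 * n)                            # minimise ||x||_1 with x = u - v
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y,
              bounds=(0, None), method="highs")
x_hat = res.x[:n] - res.x[n:]
print(np.allclose(x_hat, x_true, atol=1e-5))
```

Each acquired Fourier row mixes together all entries of x_true, which is exactly why every measurement carries information about every pixel.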
Re:Demo image (Score:1, Insightful)
It absolutely could be; just read the article: "Eventually it creates an image that will almost certainly be a near-perfect facsimile of a hi-res one."!
Not smoothing (Score:5, Insightful)
The article was a bit poor. The data sets aren't really incomplete in most cases; they only seem that way from a traditional standpoint. The missing samples often contain absolutely no information, in which case the original image/signal can be reconstructed perfectly. In brief, Nyquist is a rule about sampling non-sparse data, so if you rotate your sparse data into a basis in which it is non-sparse, and you satisfy the Nyquist rule in that basis (though not in the original one), you are still fine.
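The basis-rotation point can be illustrated directly (a sketch, assuming numpy; the frequencies chosen are arbitrary): a sum of three sinusoids is dense in the time domain, where naive sample-counting applies, yet has only six nonzero coefficients in the Fourier basis.

```python
import numpy as np

n = 512
t = np.arange(n)
# Dense in time: a sum of three sinusoids at (arbitrary) integer frequencies.
x = sum(np.cos(2 * np.pi * f * t / n) for f in (7, 31, 90))

dense_time = np.count_nonzero(np.abs(x) > 1e-9)               # nonzero samples
sparse_freq = np.count_nonzero(np.abs(np.fft.fft(x)) > 1e-6)  # nonzero bins

print(dense_time, sparse_freq)  # hundreds of samples vs. 6 coefficients
```

The same signal is "big" in one basis and tiny in another; compressed sensing exploits the tiny representation.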
I like this link better: l1-magic [caltech.edu]
Re:Why not... (Score:1, Insightful)
what's wrong with killing people? There are too many here anyway, you and me included...
Re:Why not... (Score:2, Insightful)
I just want to point out that not everything tied to the government is dead weight. The military is one of the only truly necessary endeavors the government pursues that actually helps the economy. It doesn't do this by adding to the economy; far from it, it is still quite a drain on the economy. However, without a stable government and a strong military to protect against outside forces, the economy would not be able to exist in any stable way. Look at countries like Haiti, in constant upheaval, to see what I mean. Earthquake notwithstanding, their economy could never gain a foothold because the government and military are unstable.
The majority of the rest of what the government does, however, just drains our economy and adds little to nothing of benefit (or at least the gain is far overshadowed by the cost).