Recovering Data From Noise 206
An anonymous reader tips an account up at Wired of a hot new field of mathematics and applied algorithm research called "compressed sensing" that takes advantage of the mathematical concept of sparsity to recreate images or other datasets from noisy, incomplete inputs. "[The inventor of CS, Emmanuel] Candès can envision a long list of applications based on what he and his colleagues have accomplished. He sees, for example, a future in which the technique is used in more than MRI machines. Digital cameras, he explains, gather huge amounts of information and then compress the images. But compression, at least if CS is available, is a gigantic waste. If your camera is going to record a vast amount of data only to throw away 90 percent of it when you compress, why not just save battery power and memory and record 90 percent less data in the first place? ... The ability to gather meaningful data from tiny samples of information is also enticing to the military."
CSI (Score:5, Funny)
Re: (Score:3)
Enhance!
Re: (Score:2)
Zoom in!
Re: (Score:2)
Seriously, watching a CS reconstruction is actually more visually impressive than what they do on CSI. I coded up a demo and everyone calls it the magic algorithm.
Re: (Score:3, Interesting)
Your AC wish is my command [robbtech.com].
Re: (Score:2)
Nice images.
Cheers.
Re: (Score:2)
Thanks. I think that was back with an older version of IE and it would mess up the floating divs a bit. Nothing horrible. Good to hear they've fixed that.
Re: (Score:2)
[geek mode]
It actually reminds me more of that ST:TNG episode with Yuta. They're able to take a picture with someone's face half-blocked out by scenery and other people. They're able to reconstruct the rest of the face based on the patterns that are there.
Overview of Algorithm (Score:4, Funny)
Here's how Compressed Sensing works with standard JPGs.
First the program takes the target JPG (which you want to be very large), and treats it as random noise. Simply a field of random zeros and ones. Then, within that vast field, the program selects a pattern or frequency to look for variations in the noise pattern.
The variations in the noise pattern act as a beacon - sort of a signal that the payload is coming. Common variations include mathematical pulses at predictable intervals - say something that would easily be recognizable by a 5th-grader, like say a pattern of prime numbers.
Then it searches for a second layer, nested within the main signal. Some bits are bits to tell how to interpret the other bits. Use a gray scale with standard interpolation. Rotate the second layer 90 degrees. Make sure there's a string break every 60 characters, and search for an auxiliary sideband channel. Make sure that the second layer is zoomed out sufficiently, and using a less popular protocol language; otherwise it won't be easily recognizable upon first glance.
Here's the magical part: It then finds a third layer. Sort of like in ancient times when parchment was in short supply people would write over old writing... it was called a palimpsest. Here you can uncompress over 10,000 "frames" of data, which can enhance a simple noise pattern to be a recognizable political figure.
Further details on this method can be found here. [imsdb.com]
--
Recycle when possible!
CuteOverload (Score:2)
Why not... (Score:5, Insightful)
Because it's hard to know what is needed and what isn't to produce a photograph that still looks good to a human, and pushing that computing power down to the camera sensors, where power is more limited than on a computer, is unlikely to save either time or power.
Re:Why not... (Score:5, Insightful)
Re:Why not... (Score:5, Interesting)
Because it's hard to know what is needed and what isn't to produce a photograph that still looks good to a human, and pushing that computing power down to the camera sensors, where power is more limited than on a computer, is unlikely to save either time or power.
If you read the article, the rest of that quote makes a lot more sense. Here it is in context:
If your camera is going to record a vast amount of data only to throw away 90 percent of it when you compress, why not just save battery power and memory and record 90 percent less data in the first place? For digital snapshots of your kids, battery waste may not matter much; you just plug in and recharge. “But when the battery is orbiting Jupiter,” Candès says, “it’s a different story.” Ditto if you want your camera to snap a photo with a trillion pixels instead of a few million.
So, while this strategy might not be implemented in my Canon PowerShot anytime soon, it sounds like a really great idea for exploration, or for limited resources in general. I was thinking more along the lines of making very cheap, low-power, low-resolution cameras, but distributing them with software that processes the images on your computer into highly defined images.
Re: (Score:2)
So in other words, real-life "zoom in and enhance"?
Or could it go as far as an Esper-like system?
Re: (Score:3, Interesting)
Kind-of.
This technique is taking the noisy or incomplete data, and inferring the details already captured but only on a few pixels. So, if there's a line or square on the image but you only catch a few pixels on it, this technique can infer the shape from those few pixels. So, it will enhance the detail on forms you can almost see, but not create the detail from scratch.
Rather than 'enhancing' the image, a better term would be 'upsampling'. The example used in the article was of a musical performance.
Re: (Score:2)
Ok. The gross simplification makes this sound like pixel homeopathy. Or the Total Perspective Vortex. "We can reliably infer almost anything from almost nothing" lies down that road.
I remain unconvinced.
Re: (Score:2)
Absolutely nowhere do they claim they can pull details that don't exist out of nothing. This is simply a better version of interpolation. Currently, when we're missing data we usually just look at the adjacent pixels to determine what should go in between. This algorithm looks for the patterns (particularly blocks) in the pixels for what should go in-between (see here [wikipedia.org] for examples).
The assumption is that for most pictures (or other datasets of interest) your data is not random; it has some form of pattern.
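To make the parent's baseline concrete, here is a minimal sketch of the "look at the adjacent pixels" interpolation it describes. The function name and the sample scanline are invented for illustration; this is what CS improves on, not CS itself.

```javascript
// Conventional gap-filling: estimate each missing sample as the average of
// its known neighbors. Compressed sensing replaces this local guess with a
// global search for the sparsest consistent picture.
function interpolateMissing(samples) {
  // samples: array of numbers, with null marking a missing value
  return samples.map(function (v, i) {
    if (v !== null) return v;
    var left = samples[i - 1], right = samples[i + 1];
    if (left != null && right != null) return (left + right) / 2;
    return left != null ? left : right;  // edge of the scanline: one neighbor only
  });
}

var scanline = [2, null, 6, 6, null, 10];
console.log(interpolateMissing(scanline));  // [2, 4, 6, 6, 8, 10]
```

Note that this only ever smooths between neighbors: a feature that falls entirely inside a gap is simply averaged away, which is exactly the limitation discussed elsewhere in this thread.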
Re:Why not... (Score:5, Interesting)
Re:Why not... (Score:4, Interesting)
In fact, it's expected to be used to increase the aperture of cameras. The advantage is that by using random patterns you could determine the kernel of the convolving pattern in the picture, and therefore re-focus the image after it was taken. In regular photography that kernel is normally Gaussian and very hard to de-blur. But by using certain patterns when taking the picture (probably implemented as micro-mirrors), you could easily do this in post-processing.
You people think in such limited terms. The military uses rapid frequency shifting and spread spectrum communications to avoid jamming. Such technology could be used to more rapidly identify the keys and encoding of such transmissions, as well as decreasing the amount of energy required to create an effective jamming signal by several orders of magnitude across the spectrum used, if any pattern could be identified. Currently, massive antenna arrays are required to provide the resolution necessary to conduct such an attack. This makes the jamming equipment more mobile, and more effective at the same time. A successful attack on that vector could effectively kill most low-power communications capabilities of a mobile force, or at least increase the error rate (hello Shannon's Law) to the point where the signal becomes unusable. The Air Force is particularly dependent on realtime communications that rely on low-power signal sources.
If nothing else, getting a signal lock would at least tell you what's in the air. Stealth be damned -- you get a signal lock on the comms, which are on most of the time these days, and you don't need radar. Just shoot in the general direction of Signal X and *bang*. Anything that reduces the noise floor generates a greater exposure area for these classes of sigint attacks. Cryptologists need not apply.
Re: (Score:2)
You people think in such limited terms.
I talk about what I know and what I work on. I am not in the military, and couldn't care less about that kind of application. Of course there are tons of applications, including several in dimensionality reduction for faster intrusion detection mechanisms, but I find photography more appealing.
Re: (Score:2)
You people think in such limited terms.
Thinking in commercial terms is hardly limited. Thinking in terms of the deadweight loss industry is vastly more limiting, in every respect.
I really don't understand why people get so excited about the deadweight loss industry. Anyone who understands anything about economics knows how utterly irrational it is. I guess the world will always be full of emotionally-driven, unstable, irrational people who think that deadweight loss spending is a good idea. Fortunately some of us are more rational than that,
Re: (Score:2, Insightful)
I just want to point out that everything tied to the government is dead weight. The military is one of the only truly necessary endeavors the government pursues that actually helps the economy. It doesn't do this by adding to the economy, far from it, it is still quite a drain on the economy. However, without a stable government and a strong military to protect against outside forces, the economy would not be able to exist in any stable way. Look at countries like Haiti that are in constant uprising to
Re: (Score:2)
This technique is not about detection but about "filling in the blanks" for signals that are highly ordered but for which you have limited samples.
Encrypted military communications are not "sparse" as they have very high entropy. Said another way... they are too random for any "filling in the blanks" - so this technique doesn't work well for them - spread spectrum or otherwise. There is a big difference between reconstructing f(t)=t^2 + 4t + 7 from three samples (always perfect) and rand(t), which never works.
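That contrast can be made concrete: a quadratic has three degrees of freedom, so three samples determine it exactly (Lagrange interpolation), while no amount of undersampling recovers white noise. A minimal sketch; the function names are invented:

```javascript
// Exact recovery of f(t) = t^2 + 4t + 7 from only three samples via Lagrange
// interpolation. A quadratic carries three numbers' worth of information, so
// three samples reconstruct every other value perfectly. No comparable trick
// exists for a high-entropy signal like rand(t).
function lagrange(ts, ys, t) {
  var sum = 0;
  for (var i = 0; i < ts.length; i++) {
    var term = ys[i];
    for (var j = 0; j < ts.length; j++) {
      if (j !== i) term *= (t - ts[j]) / (ts[i] - ts[j]);
    }
    sum += term;
  }
  return sum;
}

function f(t) { return t * t + 4 * t + 7; }

var ts = [0, 1, 2];
var ys = ts.map(f);                  // the only three samples we keep
console.log(lagrange(ts, ys, 5));    // 52, exactly f(5)
```

Compressed sensing generalizes this counting argument: a signal with few degrees of freedom (in some basis) needs only a few well-chosen samples, even if its nominal length is huge.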
Re: (Score:2)
I might have misunderstood you but I don't think you can properly compare what you're talking about to changing the aperture of a camera and if you could it would be decreasing the aperture (more things in focus), not increasing it. I think you're also talking about other techniques, such as acquiring the whole lightfield, that might well be made more practical by CS but aren't really the same thing.
Re: (Score:2)
Re: (Score:3, Interesting)
Truthfully, I was thinking along the lines of taking a high resolution camera and making it better, rather than taking a low resolution camera and making it high. My aging Nikon is a 7.1 megapixel, with only a 3x optical zoom. There have been times I wanted to take a picture of something quick, so did not necessarily have time to zoom or move closer to the object. After cropping, I may end up with a 1-2 megapixel image (sometimes much lower). For the longest time, I thought I just needed more megapixels, and a fa
Image stacking (Score:4, Informative)
After cropping, I may end up with a 1-2 megapixel image (sometimes much lower)
Try image stacking. A program I've used successfully for this is PhotoAcute. Provided your body+lens combo is in their database, you can stack multiple near-identical images (use Burst or Auto-bracket mode) and get "super resolution". Of course, this doesn't work so well if your subject is moving. If your body+lens combo isn't in their database, you can volunteer a couple hours of your time to make a set of ~ 100 specific images they can use to create a profile for your gear. If they accept it, they'll offer you a free license for the software. I have no connection with the company other than being a satisfied customer.
--
.nosig
Re: (Score:2)
What you really need is a better (bigger, heavier) lens. In most cameras post-megapixel race the maximum angular resolution is usually limited by the lens, not the sensor resolution. CS and/or sensor upgrades can't correct for that because the information doesn't actually make it through the glass to be recorded.
If you just want to make those pictures look better, you can probably get some good results with some of Photoshop's edge enhancing and sharpening filters. CS also makes a wicked noise filter (no
Re: (Score:2)
And in fact, were that camera orbiting Jupiter, it would only have to send that 10% of the data back to Earth, where the reconstruction could take place. It turns into "real-time" compression.
Re: (Score:3, Interesting)
RTFA; that's the point of the algorithm: the camera sensors don't need to calculate what is interesting about the picture, they just need to sample a randomly distributed set of pixels. The algorithm calculates the high-res image from that sample.
The idea behind the algorithm is really very elegant. To paraphrase their approach: imagine a 1000x1000 pixel image with 24-bit color. There are (2^24)^1,000,000 unique pixel configurations to fill that image. The vast majority of those configurations will look like noise.
Re: (Score:3, Interesting)
Actually, you don't process and throw away information. You are not Sensing and then Compressing, you are Compressed Sensing, so you take in less data in the first place.
A canonical example is a 1-pixel camera that uses a grid of micro-mirrors, each of which can be set to reflect onto the pixel or not. By setting the grid randomly, you are essentially doing a Random Projection of the data before it's recorded, so you are Compressed Sensing. With a sufficient number of these 1-pixel images, each with a diffe
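A toy of that 1-pixel-camera idea can be sketched in a few lines. This is not the real CS decoder (which solves an l1 minimization); it handles only the special case of a scene with a single bright pixel, recovered by a matched filter. All the names, and the deterministic +/-1 mask patterns standing in for random micro-mirror settings, are invented for illustration. The counting is the point: 5 stored numbers recover a 10-pixel scene.

```javascript
// Toy single-pixel-camera recovery: a 10-pixel scene with exactly one bright
// pixel is measured through 5 fixed +/-1 mirror masks (mask i's setting for
// pixel k comes from bit i of k+1). Correlating the 5 measurements against
// each pixel's mask signature identifies the bright pixel and its intensity.
var N = 10, M = 5;

function maskColumn(k) {           // the M mask values that pixel k sees
  var col = [];
  for (var i = 0; i < M; i++) col.push(((k + 1) >> i) & 1 ? 1 : -1);
  return col;
}

function measure(scene) {          // y[i] = sum over k of mask[i][k] * scene[k]
  var y = [];
  for (var i = 0; i < M; i++) {
    var s = 0;
    for (var k = 0; k < N; k++) s += maskColumn(k)[i] * scene[k];
    y.push(s);
  }
  return y;
}

function recover1Sparse(y) {       // matched filter: pick the best-correlated pixel
  var bestK = -1, bestC = -Infinity;
  for (var k = 0; k < N; k++) {
    var c = 0, col = maskColumn(k);
    for (var i = 0; i < M; i++) c += col[i] * y[i];
    if (c > bestC) { bestC = c; bestK = k; }
  }
  return { index: bestK, value: bestC / M };
}

var scene = [0, 0, 0, 0, 0, 0, 3, 0, 0, 0];  // one bright pixel at index 6
var y = measure(scene);                      // only 5 numbers ever stored
console.log(recover1Sparse(y));              // { index: 6, value: 3 }
```

This works because the mask signatures of distinct pixels are incoherent (their inner products are strictly smaller than a signature with itself), which is the same property random projections provide at scale.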
Re: (Score:3, Interesting)
Amusingly enough, the idea of compressed sensing (I will rephrase for clarity) that a minimal sampling is needed for working with high dimensional data that can be described in a much smaller subspace at any given time has been used to describe neural processes in the visual cortex (V1). [See Redwood Center for Theoretical Neuroscience, https://redwood.berkeley.edu/ [berkeley.edu]]. The lingo used is a bit different from the CS community's, but the math is essentially the same. The point being that compressed sensing coul
Re:Why not... (Score:4, Informative)
(a) JPEG doesn't know either
JPEG is built on the assumption that the higher frequency components are less important, so it spends less bits on representing those components than it does on the lower frequency ones.
It's a pretty crude model (not least because of the block-based architecture that makes it simple to implement but introduces artifacts at block boundaries) but it still does a lot better than just throwing away pixels and/or reducing the bits per pixel in the original image.
Wouldn't it be easier... (Score:2)
You Pr0n addicts (Score:2)
Come again? (Score:2)
If your camera is going to record a vast amount of data only to throw away 90 percent of it when you compress, why not just save battery power and memory and record 90 percent less data in the first place? ..
That's what a digital camera is about, isn't it?
Re: (Score:2)
Perhaps if you're using some low-end digital camera but not if your camera allows you to save images in RAW format. Sort of like it was in the days you might have spent in the darkroom: if it ain't on the negative you're not going to get it back in the
I am a bit worried about the "fill in the shapes" (Score:4, Insightful)
The algorithm then begins to modify the picture in stages by laying colored shapes over the randomly selected image. The goal is to seek what’s called sparsity, a measure of image simplicity.
The thing is in a medical image couldn't that actually remove a small growth or lesion? I know the article says:
That image isn’t absolutely guaranteed to be the sparsest one or the exact image you were trying to reconstruct, but Candès and Tao have shown mathematically that the chance of its being wrong is infinitesimally small.
but how often do analyses like this make assumptions about the data, such as "you are unlikely to get a small disruption in a regular shape, and if you do, it is not significant"?
On the bright side, when Moore's law allows real-time processing we can look forward to night-vision cameras which really are "as good as daylight", and for this sort of application the odd distortion really won't matter so much.
Re:I am a bit worried about the "fill in the shape (Score:5, Insightful)
Typical science fraud (Score:3, Interesting)
Fraud alert: The title, "Fill in the Blanks: Using Math to Turn Lo-Res Datasets Into Hi-Res Samples" should have been "A better smoothing algorithm".
Re:Typical science fraud (Score:4, Interesting)
That's kinda' easy, isn't it? (Score:2)
but given a photo of Barack Obama's face with half of it blacked out, you can estimate with great accuracy what was in the other half.
It's rather easy to guess what's in the half that isn't blacked out, yeah? ;-)
Not smoothing (Score:5, Insightful)
The article was a bit poor. The data sets aren't really incomplete in most cases. They only seem that way from a traditional standpoint. The missing samples often contain absolutely no information, in which case the original image/signal can be reconstructed perfectly. In brief, Nyquist is a rule about sampling non-sparse data, so if you rotate your sparse data into a basis in which it is non-sparse, and you satisfy the Nyquist rule in that basis (though not in the original one), you are still fine.
I like this link better l1 magic [caltech.edu]
Re: (Score:2)
Re: (Score:2)
The article was a bit poor.
The article was dreadful. The link you provide actually makes sense. Thanks.
Definition of fraud (Score:2)
Definition of fraud: A deliberate deception used to get an unfair result.
The editor wanted to get more attention for the article than the article deserved.
"This should have been obvious the second you saw the word "Wired" anyway."
If Wired is routinely fraudulent, that does not diminish the fact that tricking people to get attention is fraud.
The article is of interest only to mathematicians and those interested in smoothing data
Fill in the blanks (Score:2)
It started off with pixels missing; when done the pixels are filled. How is that not creating absent data by inferring it?
Any algorithm that generates more data than was sent in is inferring. That's not to say it isn't useful, but if, for example, all of the pixels of the bile duct blockage (FTFA) were missing, the picture would have to have been reconstituted with no blockage. If the only three pixels in an area were discolored, then that whole area (or some significant portion of it) would be discolored
Re: (Score:2)
1 bit in, 10 bits out does not mean that you have created 9 bits of correct data. Look at Obama's teeth in the example. The algorithm understands it is better to put white pixels inst
Re: (Score:2)
I completely misread your response before your reply. We're arguing the same position :-)
Although I disagree regarding inference - it is inferring the absent data (by my definition of inference), and in some cases that will be useful. However, I suspect if used for medical images it would give confidence to a wrong answer more often than it would give enough information to get the right answer.
Re: (Score:2)
That's what I get for assuming none of the readers were too stupid to get the point.
I have written Huffman compression algorithms that were (and still are, for all I know) used in production systems. I recently wrote a Rabin compression system [bobbymartin.name]. I know how compression works.
Data is not measured in bytes. Files are measured in bytes.
Lossless compression does not create more data than was in the original dataset, any more than a program that writes out an infinite series of 1s contains an infinite amount of
Re:I am a bit worried about the "fill in the shape (Score:3, Insightful)
The description of the algorithm in the article is quite poor. To reconstruct an MR image you effectively model it with wavelet basis functions, subject to some constraints: a) the wavelet domain should be as sparse as possible, b) the Fourier coefficients you actually acquired (MR is acquired in the Fourier domain, not the image domain) have to match, and usually c) the image should be real. You often also require that the total variation of the image should be as low as possible as well.
Since the image is
Re: (Score:2)
Perhaps we want cameras that produce Fourier coefficients instead of images?
Re: (Score:2)
Some of the designs for CS cameras basically do just that. You can do CS just as well with images acquired in the image domain though, the intuitive reasoning for why it works just gets a little... less intuitive.
I'm not sure CS is going to quickly catch on in your common camera because it doesn't really solve a pressing problem but it will certainly find lots of applications.
Re: (Score:2)
While I'm certainly no expert on this, it seems almost everyone here is being misled by the word "noise". From what I gather, this is not cleaning up noise, it is filling in missing pieces in data whose samples are assumed to be noise-free. This is drastically different from "smoothing" that is intended to filter out noise.
So, in the case of a small growth or lesion, as long as there is at least one sample of it
Re: (Score:2)
Military applications (Score:4, Interesting)
forgot to mention... it works both ways (Score:2)
Re: (Score:2)
Everyone is bound by the laws of the universe. Just because one country has a particular technology doesn't prevent another country from independently developing it (remember how China blasted that satellite to smithereens?).
If we don't want our enemies (or "frenemies") to be able to independently develop military capabilities,
Demo image (Score:4, Insightful)
I seriously doubt that the Obama demo image is real. There is no way that the teeth and the little badge on his jacket could be reproduced without creating visual artifacts.
Re:Demo image (Score:5, Informative)
Indeed. Check the caption:
"Photos: Obama: Corbis; Image Simulation: Jarvis Haupt/Robert Nowak" (emphasis added by me)
Re: (Score:3, Informative)
"Image Simulation" likely means that they simulated the acquisition. The recovery of the "after" image from the "before" image is probably as shown, it's just that the "before" image was not acquired from an actual camera. Those results don't look particularly amazing for compressed sensing. See this for example [robbtech.com].
Re: (Score:2)
Re: (Score:2)
Yes, that's the idea. D is the original, E is the undersampled and F is the CS reconstructed image. F is visually identical to D, meaning the reconstruction worked very well.
Incidentally, that's not really low resolution. A typical MR image is about 256x256. I think I made that image 1024 pixels across and there are three images across with a bit of space between, so the individual images are pretty close to actual size.
Re: (Score:2)
Re: (Score:2)
For real images created using compressed sensing, check out Rice's one-pixel camera [rice.edu].
I could do this in PhotoShop. (Score:4, Funny)
After applying the Noise filter to mess up my image I hit Undo and my image is back to normal.
Holy Bad Acronym Batman (Score:4, Insightful)
Re: (Score:2)
At first I thought he was referring to Credit Suisse. Then I thought no, this is an article about Counter Strike. Then perhaps I thought it meant CS gas. Then perhaps, having been betrayed by an uncooperative context, I thought like you it meant Computer Science. But no - lo and behold "CS" stands for "Compressed Sensing", a new algorithm called "CS" by 1) those working on it and 2) those who have absolutely no idea what it is or how it works, but want to sound cool anyway because hey, what's cooler than u
Re: (Score:2)
As long as the acronym is explicitly defined, it doesn't matter how obscure it is. That's proper writing style.
That was the beginning of compressed sensing, or CS
And there it is in the article, what are you complaining about again? Oh right, TFA and slashdot editors. Carry on, then.
Re: (Score:2)
And there it is in the article, what are you complaining about again? Oh right, TFA and slashdot editors. Carry on, then.
Precisely. Because while it was defined in the article, it was not defined in the summary. The summary jumped immediately from the name of the algorithm to using the shorthand, without ever saying that the shorthand would be used in place of the full name. And being as there are other uses of the CS acronym - especially in the slashdot community - the slashdot editors failed miserably by not stating that they were going to reuse a commonly used acronym.
Re: (Score:2)
Yes, but a quick application of the Compressed Sensing Algorithm to the letters CS will shortly reveal that it stands for Compressed Sensing.
If it stood for Computer Science instead, the algorithm would have been able to sense that, in a compressed sort of way.
Re: (Score:2)
I'd like you all to know I'm feeling very compressed.
- Marvin.
Re: (Score:2)
CS - The inventor of Counter-Strike!!!
Wrong. (Score:2)
These are fancy words for what is nothing other than automated educated guessing. (And re-vectorization.)
Yes, you can guess that a round shape is round, even when a couple of pixels are missing. But you can not guess that one of these missing pixels actually was a dent. So this mechanism here would still make that dent vanish. Just in a less-obvious way. (Which can be very bad, if that dent was critical.)
Essentially if you have a lossy process, you are always going to have a lack of details, and that’
Re: (Score:2)
You've missed the point, which is not surprising considering the way the article is written.
Compressed sensing exploits the observation that almost every useful image is actually sparse - it contains much less information than the pixels that make it up can store. Furthermore, if you undersample that image in the right way, the original data is recoverable.
For a reasonable level of undersampling (and a sparse image) CS will give you a perfect reconstruction, just like gzip, for example. The important diff
This is an important tool! (Score:2)
I can finally stop reading the articles and the summaries, and apply this algorithm to the first post to understand the article instead. What a time saver!
You can't create something from nothing - can you? (Score:2)
As soon as I read the article, it seemed fishy to me. How can you create data where it doesn't already exist? If you take a scan of a patient, a tumour will either show up or not show up in the data. If it shows up, there's no need for enhancement. If it doesn't show up, no amount of enhancement can cause it to do so.
Then I came across this blog post [wordpress.com] by Terence Tao, one of the researchers mentioned in the Wired article.
It has some very interesting explanations of how this is supposed to work. I'm still not
Re:You can't create something from nothing - can y (Score:2)
The key is that the image must be sparse (and almost all useful images are sparse). By definition, a sparse image contains less information than the pixels that make it up can store. Thus, it is compressible. So you're not creating data where it doesn't exist, you're just not sampling and storing the redundant parts.
It's no more magic than gzip or jpeg compression.
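The "sparse means compressible" point can be seen with a one-level-at-a-time 1D Haar wavelet transform, a simplified stand-in for the wavelet bases used in practice. The signal and function names here are invented for illustration:

```javascript
// A piecewise-constant 8-sample signal looks dense as raw samples (all 8
// values nonzero) but is sparse in the Haar wavelet basis: only 2 of its 8
// Haar coefficients are nonzero. That redundancy is what compression, and
// compressed sensing, exploits.
function haar(signal) {               // full multi-level Haar transform
  var s = signal.slice(), out = [];
  while (s.length > 1) {
    var avg = [], det = [];
    for (var i = 0; i < s.length; i += 2) {
      avg.push((s[i] + s[i + 1]) / 2);   // pairwise average
      det.push((s[i] - s[i + 1]) / 2);   // pairwise detail
    }
    out = det.concat(out);            // coarser details go in front
    s = avg;
  }
  return s.concat(out);               // [overall average, details...]
}

function invHaar(coeffs) {            // exact inverse of haar()
  var s = [coeffs[0]], pos = 1;
  while (s.length < coeffs.length) {
    var det = coeffs.slice(pos, pos + s.length), next = [];
    for (var i = 0; i < s.length; i++) {
      next.push(s[i] + det[i], s[i] - det[i]);
    }
    pos += s.length;
    s = next;
  }
  return s;
}

var signal = [5, 5, 5, 5, 2, 2, 2, 2];
var coeffs = haar(signal);
var nonzero = coeffs.filter(function (c) { return c !== 0; }).length;
console.log(coeffs);                  // [3.5, 1.5, 0, 0, 0, 0, 0, 0]
console.log(nonzero);                 // 2
console.log(invHaar(coeffs));         // exact reconstruction of the signal
```

Storing the two nonzero coefficients (plus their positions) instead of eight samples loses nothing; CS goes one step further by never measuring the redundant part in the first place.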
Re: (Score:2)
Sorry to be dense, but I don't understand where compression comes into this. You're not compressing anything, you're somehow discovering data that wasn't sampled in the first place. I don't see the relationship between the two concepts.
Can you explain what I'm missing, in terms of my original example? If there's a dark spot on the image indicating a potential tumour, then that information is there in your data, and no clever processing is necessary. If the dark spot is not there, no amount of processing wil
Re: (Score:2)
To put it in JavaScript...
// Math.random() replaces the nonexistent Math.rand(); StorePixel is the
// caller-supplied routine that records one sampled pixel.
for (var rows = 0; rows < maxrows; rows++) {
  for (var cols = 0; cols < maxcols; cols++) {
    if (Math.random() < 0.2) { StorePixel(cols, rows); }  // keep ~20% of pixels, chosen at random
  }
}
And if taking an image of something that typically appears in the natural world, you will come out with a picture that is "not wrong." That means
Re: (Score:2)
It doesn't sample a 200x200 square and give you a 1024x768 image, it samples random pixels from the range you are looking to come out with in the end.
Can you explain how picking a pixel at random is better than sampling every 4th pixel? Surely the randomness just increases the chance that you'll miss some essential feature in the image?
Say the size of a potential tumour in the image is 5 pixels wide. Sampling every 4 pixels would guarantee you catch the tumour (the number of pixels to sample is chosen on the basis of the smallest size tumour which it is necessary to catch). Sampling an identical number of random pixels, on the other hand, would mean ther
Re: (Score:2)
When you compress something you represent it in such a way that you can reconstruct the original based on less data. Effectively you're "discovering" data that wasn't sampled (stored) in the first place. Except, with lossless compression at least, you're not really doing this. The compression process discards only redundant data.
Compressed sensing works in much the same way, except that you effectively treat your acquisition and display process as you would your reading-from-disk-and-decompressing process
Quantum state tomography (Score:2)
Relevant information: I'm a physicist, and my research group is actively researching quantum state tomography via compressed sensing.
This technique is also quite useful in quantum state tomography. Consider a qubyte. We represent it by a 2^8 x 2^8 matrix of complex numbers. Now we want to measure it. We have to make 2^16 measurements (keep in mind that a quantum measurement is a nontrivial task), and use this data to reconstruct the original matrix, which again is a very intensive task, if done right (ther
Meh (Score:2)
Controversy... (Score:2)
[The inventor of CS, Emmanuel] Candès...
The way I understand it, there is actually a bit of controversy over whether Candès or David Donoho [stanford.edu] "invented" compressed sensing. It seems to me that Donoho was actually first, but Candès ended up getting most of the credit.
Yea.. Nothing New (Score:2)
Previous art (Score:2)
It is risky to "fill in the blanks" or give your own (i.e. following a set of rules) meaning to noise; it will show things as you think they should be, and the exceptions will be missed or discarded.
Single pixel cameras (Score:2)
Compressed sensing is the same mathematics behind the Rice single pixel camera [slashdot.org] covered on Slashdot a few years ago.
Quick! (Score:2)
/jk
Re: (Score:2)
Re: (Score:2)
No. For typical levels of undersampling CS reconstructs the image perfectly. Yes, it's not exactly intuitive, but it does work.
Re: (Score:2)
You're missing the key feature: the image is sparse. That means it contains redundant information. What CS does is sample enough to acquire all the necessary information, but avoids sampling some of the redundant bits, so there's no need to make up information.
The description of the algorithm in the article is terrible - don't base your opinion on it. Check out the rest of the comments - some people (including me) have posted better descriptions.
Re: (Score:2)
Thus, "compressing" signals requires knowledge of the sparsity of the signal acquired, which helps to design those "random" bases. Using those random bases to acquire the signal ensures that it will be recoverable.
Re: (Score:3, Insightful)
Does this only apply to image data, or will we be able to use this to clean up other databases? Will it work with sampled sounds? Names and addresses and inventory?
Of course not. It's not magic. There are certain assumptions that can be made about most real-life images, mainly that they have small total variation. That means they have large areas of near-constant intensity/color distribution separated by interfaces with large jumps (like a cartoon image would have).
Though this method uses the l_1 norm and not total variation.
More importantly, HOW does it work?
See here [arxiv.org].
Re: (Score:2)
The L1 norm is generally computed on the wavelet transform of the image, not the image itself. Total variation is usually minimized in tandem because it tends to produce better reconstructions.
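Why does minimizing the l1 norm promote sparsity at all? Among all solutions of an underdetermined system, the minimum-l2 solution spreads energy over every entry, while the minimum-l1 solution tends to land on an axis, i.e. on a sparse vector. A brute-force sketch on a single toy equation (the equation is invented, not from the article; real CS solvers do this via convex optimization at scale):

```javascript
// All solutions of 2*x1 + x2 = 2 form a line. The minimum-l2 solution
// (closed form x = A'(AA')^-1 b for A = [2, 1], b = 2) is (0.8, 0.4):
// both entries nonzero. A grid search for the minimum-l1 solution on the
// same line lands on (1, 0): sparse.
var l2 = [2 * 2 / 5, 1 * 2 / 5];     // (0.8, 0.4), since AA' = 5

var best = null;
for (var i = 0; i <= 400; i++) {
  var x1 = -2 + i * 0.01;            // walk the solution line
  var x2 = 2 - 2 * x1;               // stay feasible: 2*x1 + x2 = 2
  var l1 = Math.abs(x1) + Math.abs(x2);
  if (best === null || l1 < best.l1) best = { x1: x1, x2: x2, l1: l1 };
}
console.log(l2);                     // [0.8, 0.4]: dense, l1 cost 1.2
console.log(best);                   // x1 ~ 1, x2 ~ 0, l1 cost ~ 1: sparse
```

The l1 ball is "pointy" at the axes, so the feasible line typically first touches it at a point with zero entries; the round l2 ball has no such preference. That geometric fact is why l1 minimization is the workhorse of CS reconstruction.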
Re: (Score:2)
More importantly, HOW does it work?
Sorry if TFA answers these questions, but I've never known Wired to get into any kind of detail on stuff like this.
From TFA:
The key to finding the single correct representation is a notion called sparsity, a mathematical way of describing an image’s complexity, or lack thereof. A picture made up of a few simple, understandable elements — like solid blocks of color or wiggly lines — is sparse; a screenful of random, chaotic dots is not. It turns out that out of all the bazillion possible reconstructions, the simplest, or sparsest, image is almost always the right one or very close to it.
So any dataset that is likely to be smooth can be improved with this technique. They give the example in TFA of piano music (except for percussion, the frequencies present are consistent for a significant period of time). Names, addresses, and inventory are for all intents and purposes here random. You can't determine the address of someone in a database by looking at the adjacent entries.
Re: (Score:2)
> But as others pointed out, it might be less efficient to do the compression
> in the sensor than the way it's being done today.
Compression is done in the camera today. The proposal is to have the camera simply throw away a random subset of the pixels instead of compressing and then use this algorithm later on a computer to "restore" the image.
Re: (Score:2)
Re: (Score:2)
TFA does a terrible job of explaining the technique. No interpolation is involved. You should read this fine article by Terence Tao: http://terrytao.wordpress.com/2007/04/13/compressed-sensing-and-single-pixel-cameras/ [wordpress.com]
It explains quite well the heart of the technique. I will, nevertheless, try to explain it quickly. I assume you are familiar with the jpeg compression algorithm. It throws away way more than 5% of the data, and still gives you a nice picture. How? It converts the picture to the wavelet basis,
Re: (Score:2)
the face of Yahweh. Careful...