
Recovering Data From Noise 206

An anonymous reader tips an account up at Wired of a hot new field of mathematics and applied algorithm research called "compressed sensing" that takes advantage of the mathematical concept of sparsity to recreate images or other datasets from noisy, incomplete inputs. "[The inventor of CS, Emmanuel] Candès can envision a long list of applications based on what he and his colleagues have accomplished. He sees, for example, a future in which the technique is used in more than MRI machines. Digital cameras, he explains, gather huge amounts of information and then compress the images. But compression, at least if CS is available, is a gigantic waste. If your camera is going to record a vast amount of data only to throw away 90 percent of it when you compress, why not just save battery power and memory and record 90 percent less data in the first place? ... The ability to gather meaningful data from tiny samples of information is also enticing to the military."
  • CSI (Score:5, Funny)

    by fuzzyfuzzyfungus ( 1223518 ) on Tuesday March 02, 2010 @08:15AM (#31328740) Journal
    Enhance!
    • Enhance!

    • by ceoyoyo ( 59147 )

      Seriously, watching a CS reconstruction is actually visually more impressive than what they do on CSI. I coded up a demo and everyone calls it the magic algorithm.

    • [geek mode]

      It actually reminds me more of that ST:TNG episode with Yuta. They're able to take a picture with someone's face half-blocked out by scenery and other people. They're able to reconstruct the rest of the face based on the patterns that are there.

    • by Chapter80 ( 926879 ) on Tuesday March 02, 2010 @09:27AM (#31329482)

      Here's how Compressed Sensing works with standard JPGs.

      First the program takes the target JPG (which you want to be very large), and treats it as random noise. Simply a field of random zeros and ones. Then, within that vast field, the program selects a pattern or frequency to look for variations in the noise pattern.

      The variations in the noise pattern act as a beacon - sort of a signal that the payload is coming. Common variations include mathematical pulses at predictable intervals - say something that would easily be recognizable by a 5th-grader, like say a pattern of prime numbers.

      Then it searches for a second layer, nested within the main signal. Some bits are bits to tell how to interpret the other bits. Use a gray scale with standard interpolation. Rotate the second layer 90 degrees. Make sure there's a string break every 60 characters, and search for an auxiliary sideband channel. Make sure that the second layer is zoomed out sufficiently, and using a less popular protocol language; otherwise it won't be easily recognizable upon first glance.

      Here's the magical part: It then finds a third layer. Sort of like in ancient times when parchment was in short supply people would write over old writing... it was called a palimpsest. Here you can uncompress over 10,000 "frames" of data, which can enhance a simple noise pattern to be a recognizable political figure.

      Further details on this method can be found here. [imsdb.com]

      --
      Recycle when possible!

    • Super Redonkulous Fluffhance!
  • Why not... (Score:5, Insightful)

    by jbb999 ( 758019 ) on Tuesday March 02, 2010 @08:19AM (#31328778)

    If your camera is going to record a vast amount of data only to throw away 90 percent of it when you compress, why not just save battery power and memory and record 90 percent less data in the first place? ..

    Because it's hard to know what is needed and what isn't to produce a photograph that still looks good to a human, and pushing that computing power down to the camera sensors where power is more limited than a computer is unlikely to save either time or power.

    • Re:Why not... (Score:5, Insightful)

      by Chrisq ( 894406 ) on Tuesday March 02, 2010 @08:28AM (#31328856)
      I think you are missing the point: throwing away 90% of the image was a demonstration of the capabilities of this algorithm. You would use it where you have only managed to capture a small amount of data, not capture the lot and throw away 90%.
    • Re:Why not... (Score:5, Interesting)

      by eldavojohn ( 898314 ) * <eldavojohn@gma[ ]com ['il.' in gap]> on Tuesday March 02, 2010 @08:31AM (#31328888) Journal

      If your camera is going to record a vast amount of data only to throw away 90 percent of it when you compress, why not just save battery power and memory and record 90 percent less data in the first place? ..

      Because it's hard to know what is needed and what isn't to produce a photograph that still looks good to a human, and pushing that computing power down to the camera sensors where power is more limited than a computer is unlikely to save either time or power.

      If you read the article, the rest of that quote makes a lot more sense. Here it is in context:

      If your camera is going to record a vast amount of data only to throw away 90 percent of it when you compress, why not just save battery power and memory and record 90 percent less data in the first place? For digital snapshots of your kids, battery waste may not matter much; you just plug in and recharge. “But when the battery is orbiting Jupiter,” Candès says, “it’s a different story.” Ditto if you want your camera to snap a photo with a trillion pixels instead of a few million.

      So, while this strategy might not be implemented in my Canon Powershot anytime soon, it sounds like a really great idea for exploration or just limited resources in general. I was thinking more along the lines of making really crappy-resolution, low-power cameras that are very cheap, but distributing them with software that takes the images on your computer and processes them into highly defined images.

      • by hitmark ( 640295 )

        so in other words, real life "zoom in and enhance"?

        or could it get as far as an Esper-like system?

        • Re: (Score:3, Interesting)

          by Bakkster ( 1529253 )

          Kind-of.

          This technique takes the noisy or incomplete data and infers details that were already captured, but only in a few pixels. So, if there's a line or square in the image and you only catch a few pixels of it, this technique can infer the shape from those few pixels. So, it will enhance the detail on forms you can almost see, but not create the detail from scratch.

          Rather than 'enhancing' the image, a better term would be 'upsampling'. The example used in the article was of a musical performance.

          • Ok. The gross simplification makes this sound like pixel homeopathy. Or the Total Perspective Vortex. "We can reliably infer almost anything from almost nothing" lies down that road.

            I remain unconvinced.

            • Absolutely nowhere do they claim they can pull details that don't exist out of nothing. This is simply a better version of interpolation. Currently, when we're missing data we usually just look at the adjacent pixels to determine what should go in between. This algorithm looks for patterns (particularly blocks) in the pixels it does have to determine what should go in between (see here [wikipedia.org] for examples).

              The assumption is that for most pictures (or other datasets of interest) your data is not random, it has some form of patte

      • Re:Why not... (Score:5, Interesting)

        by Idbar ( 1034346 ) on Tuesday March 02, 2010 @08:42AM (#31328992)
        In fact, it's expected to be used to increase the aperture of cameras. The advantage is that, using random patterns, you could determine the kernel of the convolution in the picture and therefore re-focus the image after it was taken. In regular photography that kernel is normally Gaussian and very hard to de-blur, but using certain patterns when taking the picture (probably implemented as micro-mirrors), you could easily do this in post-processing.
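
        A minimal sketch of just the "deblur in post when the kernel is known" step (assuming numpy; the toy signal, the kernel and the SNR below are invented for illustration, not taken from the comment above):

        import numpy as np

        rng = np.random.default_rng(0)
        x = np.zeros(256)
        x[60:80] = 1.0                                      # toy "scene": one block...
        x[150] = 2.0                                        # ...and one bright point
        kernel = np.zeros(256)
        kernel[:8] = rng.random(8)                          # a known, non-Gaussian blur kernel
        kernel /= kernel.sum()

        K = np.fft.fft(kernel)
        blurred = np.real(np.fft.ifft(np.fft.fft(x) * K))   # simulated capture
        blurred += 0.001 * rng.standard_normal(256)         # a little sensor noise

        snr = 1e4                                           # assumed signal-to-noise ratio
        wiener = np.conj(K) / (np.abs(K) ** 2 + 1.0 / snr)  # Wiener deconvolution filter
        recovered = np.real(np.fft.ifft(np.fft.fft(blurred) * wiener))

        # The deblurred estimate should land much closer to x than the raw blurred capture:
        print(np.abs(blurred - x).mean(), np.abs(recovered - x).mean())
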
        • Re:Why not... (Score:4, Interesting)

          by girlintraining ( 1395911 ) on Tuesday March 02, 2010 @09:45AM (#31329714)

          In fact, it's expected to be used to increase the aperture of cameras. The advantage is that, using random patterns, you could determine the kernel of the convolution in the picture and therefore re-focus the image after it was taken. In regular photography that kernel is normally Gaussian and very hard to de-blur, but using certain patterns when taking the picture (probably implemented as micro-mirrors), you could easily do this in post-processing.

          You people think in such limited terms. The military uses rapid frequency shifting and spread spectrum communications to avoid jamming. Such technology could be used to more rapidly identify the keys and encoding of such transmissions, as well as to decrease the amount of energy required to create an effective jamming signal by several orders of magnitude across the spectrum used, if any pattern could be identified. Currently, massive antenna arrays are required to provide the resolution necessary to conduct such an attack. This makes the jamming equipment more mobile, and more effective at the same time. A successful attack on that vector could effectively kill most low-power communications capabilities of a mobile force, or at least increase the error rate (hello, Shannon's Law) to the point where the signal becomes unusable. The Air Force is particularly dependent on real-time communications that rely on low-power signal sources.

          If nothing else, getting a signal lock would at least tell you what's in the air. Stealth be damned -- you get a signal lock on the comms, which are on most of the time these days, and you don't need radar. Just shoot in the general direction of Signal X and *bang*. Anything that reduces the noise floor generates a greater exposure area for these classes of sigint attacks. Cryptologists need not apply.

          • by Idbar ( 1034346 )

            You people think in such limited terms.

            I talk about what I know and what I work on. I am not in the military, and couldn't care less about that kind of application. Of course there are tons of applications, including several in dimensionality reduction for faster intrusion detection mechanisms, but I find photography more appealing.

          • by radtea ( 464814 )

            You people think in such limited terms.

            Thinking in commercial terms is hardly limited. Thinking in terms of the deadweight loss industry is vastly more limiting, in every respect.

            I really don't understand why people get so excited about the deadweight loss industry. Anyone who understands anything about economics knows how utterly irrational it is. I guess the world will always be full of emotionally-driven, unstable, irrational people who think that deadweight loss spending is a good idea. Fortunately some of us are more rational than that,

            • Re: (Score:2, Insightful)

              by Bigjeff5 ( 1143585 )

              I just want to point out that everything tied to the government is dead weight. The military is one of the only truly necessary endeavors the government pursues that actually helps the economy. It doesn't do this by adding to the economy, far from it, it is still quite a drain on the economy. However, without a stable government and a strong military to protect against outside forces, the economy would not be able to exist in any stable way. Look at countries like Haiti that are in constant uprising to

          • This technique is not about detection but about "filling in the blanks" for signals that are highly ordered but for which you have limited samples.

            Encrypted military communications are not "sparse"; they have very high entropy. Said another way, they are too random for any "filling in the blanks", so this technique doesn't work well for them, spread spectrum or otherwise. There is a big difference between reconstructing f(t) = t^2 + 4t + 7 from three samples (always perfect) and rand(t), which never works.
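
            (A tiny sketch of that last contrast, assuming numpy; the sample points are invented.) A structured signal is pinned down by a handful of samples, while noise is not:

            import numpy as np

            t = np.array([0.0, 1.0, 2.0])
            f = t ** 2 + 4 * t + 7                         # three samples of the quadratic above
            print(np.round(np.polyfit(t, f, deg=2), 6))    # exactly recovers [1, 4, 7]

            rng = np.random.default_rng(0)
            noise = rng.standard_normal(3)                 # three samples of rand(t)
            print(np.polyfit(t, noise, deg=2))             # fits these samples, predicts nothing about the next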

        • by ceoyoyo ( 59147 )

          I might have misunderstood you, but I don't think you can properly compare what you're talking about to changing the aperture of a camera, and if you could, it would be decreasing the aperture (more things in focus), not increasing it. I think you're also talking about other techniques, such as acquiring the whole lightfield, that might well be made more practical by CS but aren't really the same thing.

          • by Idbar ( 1034346 )
            My bad, I should have said modify the aperture (probably "exposure" suits better here) after the image is taken. You are right in the sense that it may be used to bring everything into focus, but you could also use it to focus on one particular thing. Thanks for pointing that out.
      • Re: (Score:3, Interesting)

        by gravis777 ( 123605 )

        Truthfully, I was thinking along the lines of taking a high resolution camera and making it better, rather than taking a low resolution camera and making it high. My aging Nikon is a 7.1 megapixel, with only a 3x optical zoom. There have been times I wanted to take a picture of something quick, so do not necessarily have time to zoom or move closer to the object. After cropping, I may end up with a 1-2 megapixel image (sometimes much lower). For the longest time, I thought I just needed more megapixels, and a fa

        • Image stacking (Score:4, Informative)

          by sbjornda ( 199447 ) <sbjornda&hotmail,com> on Tuesday March 02, 2010 @11:05AM (#31330768)

          After cropping, I may end up with a 1-2 megapixel image (sometimes much lower)

          Try image stacking. A program I've used successfully for this is PhotoAcute. Provided your body+lens combo is in their database, you can stack multiple near-identical images (use Burst or Auto-bracket mode) and get "super resolution". Of course, this doesn't work so well if your subject is moving. If your body+lens combo isn't in their database, you can volunteer a couple hours of your time to make a set of ~ 100 specific images they can use to create a profile for your gear. If they accept it, they'll offer you a free license for the software. I have no connection with the company other than being a satisfied customer.

          --
          .nosig
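
          For what it's worth, the noise-averaging part of stacking is easy to sketch (assuming numpy; the frames below are synthetic, and tools like PhotoAcute also exploit sub-pixel shifts between frames, which this ignores):

          import numpy as np

          rng = np.random.default_rng(1)
          truth = rng.random((64, 64))                            # stand-in for the scene
          frames = [truth + 0.2 * rng.standard_normal((64, 64))   # 16 noisy, perfectly aligned shots
                    for _ in range(16)]

          stacked = np.mean(frames, axis=0)                       # noise drops roughly as 1/sqrt(N)
          print(np.std(frames[0] - truth), np.std(stacked - truth))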

        • by ceoyoyo ( 59147 )

          What you really need is a better (bigger, heavier) lens. In most cameras post-megapixel race the maximum angular resolution is usually limited by the lens, not the sensor resolution. CS and/or sensor upgrades can't correct for that because the information doesn't actually make it through the glass to be recorded.

          If you just want to make those pictures look better, you can probably get some good results with some of Photoshop's edge enhancing and sharpening filters. CS also makes a wicked noise filter (no

      • by SQLGuru ( 980662 )

        And in fact, were that camera orbiting Jupiter, it would only have to send that 10% of the data back to Earth, where the reconstruction could take place. It turns into "real-time" compression.

    • Re: (Score:3, Interesting)

      by Matje ( 183300 )

      RTFA, that's the point of the algorithm: the camera sensors don't need to calculate what is interesting about the picture, they just need to sample a randomly distributed set of pixels. The algorithm calculates the high-res image from that sample.

      The idea behind the algorithm is really very elegant. To paraphrase their approach: imagine a 1000x1000 pixel image with 24-bit color. There are (2^24)^1,000,000 (that is, 2^24,000,000) unique pixel configurations to fill that image. The vast majority of those configurations will look like noise.

    • Re: (Score:3, Interesting)

      by wfolta ( 603698 )

      Actually, you don't process and throw away information. You are not Sensing and then Compressing, you are Compressed Sensing, so you take in less data in the first place.

      A canonical example is a 1-pixel camera that uses a grid of micro-mirrors, each of which can be set to reflect onto the pixel or not. By setting the grid randomly, you are essentially doing a Random Projection of the data before it's recorded, so you are Compressed Sensing. With a sufficient number of these 1-pixel images, each with a diffe
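
      A minimal sketch of that measurement step (assuming numpy; the scene and the number of exposures are made up, and the reconstruction step is left out):

      import numpy as np

      rng = np.random.default_rng(0)
      scene = np.zeros(1024)
      scene[[100, 400, 777]] = [3.0, 1.5, 2.0]       # a sparse toy "scene" of 1024 pixels

      M = 100                                        # far fewer exposures than pixels
      mirrors = rng.integers(0, 2, size=(M, 1024))   # one random on/off mirror pattern per exposure
      y = mirrors @ scene                            # what the single pixel records over M exposures
      print(y.shape)                                 # (100,) numbers stand in for 1024 pixels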

    • Re: (Score:3, Interesting)

      by shabtai87 ( 1715592 )

      Amusingly enough, the idea of compressed sensing (I will rephrase for clarity: that only a minimal sampling is needed for working with high-dimensional data that can be described in a much smaller subspace at any given time) has been used to describe neural processes in the visual cortex (V1). [See the Redwood Center for Theoretical Neuroscience, https://redwood.berkeley.edu/ [berkeley.edu]] The lingo used is a bit different from that of the CS community, but the math is essentially the same. The point being that compressed sensing coul

  • to just subscribe to Cinemax instead of going through all this trouble to de-scramble the pr0n?
  • If your camera is going to record a vast amount of data only to throw away 90 percent of it when you compress, why not just save battery power and memory and record 90 percent less data in the first place? ..

    That's what a digital camera is about, isn't it?

    • by rnturn ( 11092 )

      If your camera is going to record a vast amount of data only to throw away 90 percent of it when you compress, why not just save battery power and memory and record 90 percent less data in the first place? ..

      That's what a digital camera is about, isn't it?

      Perhaps if you're using some low-end digital camera but not if your camera allows you to save images in RAW format. Sort of like it was in the days you might have spent in the darkroom: if it ain't on the negative you're not going to get it back in the

  • by Chrisq ( 894406 ) on Tuesday March 02, 2010 @08:25AM (#31328826)
    From TFA

    The algorithm then begins to modify the picture in stages by laying colored shapes over the randomly selected image. The goal is to seek what’s called sparsity, a measure of image simplicity.

    The thing is, in a medical image couldn't that actually remove a small growth or lesion? I know the article says:

    That image isn’t absolutely guaranteed to be the sparsest one or the exact image you were trying to reconstruct, but Candès and Tao have shown mathematically that the chance of its being wrong is infinitesimally small.

    but how often do analyses like this make assumptions about the data, like "you are unlikely to get a small disruption in a regular shape, and if you do, it is not significant"?

    On the bright side, when Moore's law allows real-time processing we can look forward to night-vision cameras which really are "as good as daylight", and for this sort of application the odd distortion really won't matter so much.

    • by Yvanhoe ( 564877 ) on Tuesday March 02, 2010 @08:30AM (#31328872) Journal
      Exactly. This algorithm doesn't create absent data nor does it infer it, it just makes the uncertainties it has "nicer" than the usual smoothing.
      • MOD PARENT UP for this: "This algorithm doesn't create absent data nor does it infer it, it just makes the uncertainties it has "nicer" than the usual smoothing."

        Fraud alert: The title, "Fill in the Blanks: Using Math to Turn Lo-Res Datasets Into Hi-Res Samples" should have been "A better smoothing algorithm".
        • by timeOday ( 582209 ) on Tuesday March 02, 2010 @10:02AM (#31329936)
          No, not just "nicer." It fills in the data with what was most likely to have been there in the first place, given the prior probabilities on the data. The axiom of being unable to regain information that was lost or never captured is, as commonly applied, mostly wrong. The fact is, almost all of our data collection is on samples we already know a LOT about. Does this let you recapture a license plate from a 4-pixel image? No, but given a photo of Barack Obama's face with half of it blacked out, you can estimate with great accuracy what was in the other half.
        • Not smoothing (Score:5, Insightful)

          by nten ( 709128 ) on Tuesday March 02, 2010 @10:08AM (#31329996)

          The article was a bit poor. The data sets aren't really incomplete in most cases. They only seem that way from a traditional standpoint. The missing samples often contain absolutely no information, in which case the original image/signal can be reconstructed perfectly. In brief, Nyquist is a rule about sampling non-sparse data, so if you rotate your sparse data into a basis in which it is non-sparse, and you satisfy the Nyquist rule in that basis (though not in the original one), you are still fine.

          I like this link better: l1 magic [caltech.edu]
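
          For anyone who wants to see the "sample randomly, then look for the sparsest explanation" idea actually run, here is a small sketch (assuming numpy; the signal, the sampling ratio and the solver settings are invented, and the l1-magic package linked above is the more serious tool):

          import numpy as np

          rng = np.random.default_rng(0)
          N, m, k = 256, 100, 8

          # Orthonormal DCT-II matrix: row j of D holds the j-th cosine atom.
          t = np.arange(N)
          D = np.sqrt(2.0 / N) * np.cos(np.pi * (t[None, :] + 0.5) * t[:, None] / N)
          D[0] /= np.sqrt(2.0)

          coeffs = np.zeros(N)
          coeffs[rng.choice(N, k, replace=False)] = 5.0 * rng.standard_normal(k)
          x = D.T @ coeffs                          # time-domain signal, sparse in the DCT basis

          idx = rng.choice(N, m, replace=False)     # keep only m of the N samples, chosen at random
          A, y = D.T[idx], x[idx]                   # so y = A @ coeffs

          # ISTA (iterative soft-thresholding) for:  minimize 0.5*||A c - y||^2 + lam*||c||_1
          lam = 0.01
          step = 1.0 / np.linalg.norm(A, 2) ** 2
          c = np.zeros(N)
          for _ in range(5000):
              g = c - step * (A.T @ (A @ c - y))
              c = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)

          rel_err = np.linalg.norm(D.T @ c - x) / np.linalg.norm(x)
          print(rel_err)                            # small: all 256 samples recovered from 100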

      • It started off with pixels missing; when done the pixels are filled. How is that not creating absent data by inferring it?

        Any algorithm that generates more data than was sent in is inferring. That's not to say it isn't useful, but if, for example, all of the pixels of the bile duct blockage (FTFA) were missing, the picture would have to have been reconstituted with no blockage. If the only three pixels in an area were discolored, then that whole area (or some significant portion of it) would be discolore

        • by Yvanhoe ( 564877 )
          The difference between inference and guessing is that in inference you use clues in the data you have in order to rebuild data that was not measured. It is like using the movement in a video in order to infer the parameters of the lens used: the data is there, but you have to extract it from the other data it is mixed with.

          1 bit in, 10 bits out does not mean that you have created 9 bits of correct data. Look at Obama's teeth in the example. The algorithm understands it is better to put white pixels inst
          • by wurp ( 51446 )

            I completely misread your response before your reply. We're arguing the same position :-)

            Although I disagree regarding inference - it is inferring the absent data (by my definition of inference), and in some cases that will be useful. However, I suspect if used for medical images it would give confidence to a wrong answer more often than it would give enough information to get the right answer.

    • The description of the algorithm in the article is quite poor. To reconstruct an MR image you effectively model it with wavelet basis functions, subject to some constraints: a) the wavelet domain should be as sparse as possible, b) the Fourier coefficients you actually acquired (MR is acquired in the Fourier domain, not the image domain) have to match, and usually c) the image should be real. You often also require that the total variation of the image be as low as possible.

      Since the image is
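
      In symbols (my notation, not the article's), the reconstruction described above is roughly the constrained problem

          minimize ||W(x)||_1 + lambda * TV(x)    subject to    F_u(x) = y,  x real,

      where x is the image, W a wavelet transform, TV the total variation, F_u the undersampled Fourier (k-space) sampling operator, and y the acquired coefficients.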

      • Perhaps we want cameras that produce Fourier coefficients instead of images?

        • by ceoyoyo ( 59147 )

          Some of the designs for CS cameras basically do just that. You can do CS just as well with images acquired in the image domain though, the intuitive reasoning for why it works just gets a little... less intuitive.

          I'm not sure CS is going to quickly catch on in your common camera because it doesn't really solve a pressing problem but it will certainly find lots of applications.

    • The thing is in a medical image couldn't that actually remove a small growth or lesion?

      While I'm certainly no expert on this, it seems almost everyone here is being misled by the word "noise". From what I gather, this is not cleaning up noise, it is filling in missing pieces in data whose samples are assumed to be noise-free. This is drastically different from "smoothing" that is intended to filter out noise.

      So, in the case of a small growth or lesion, as long as there is at least one sample of it

  • by rcb1974 ( 654474 ) on Tuesday March 02, 2010 @08:27AM (#31328842) Homepage
    The military probably wants the ability to send/receive without revealing the data or the location of its source to the enemy. For example, its nuclear subs need to surface in order to communicate, and they don't want the enemy to be able to use triangulation to pinpoint the location of the subs. So, they make the data they're transmitting appear as noise. That way if the enemy happens to be listening on that frequency, they don't detect anything.
    • If the enemy uses this same technology against us, then the military wants to be able to recover as much information as they can.
  • Demo image (Score:4, Insightful)

    by ChienAndalu ( 1293930 ) on Tuesday March 02, 2010 @08:29AM (#31328864)

    I seriously doubt that the Obama demo image is real. There is no way that the teeth and the little badge on his jacket were reproduced without any visual artifacts being created.

    • Re:Demo image (Score:5, Informative)

      by sammyF70 ( 1154563 ) on Tuesday March 02, 2010 @08:36AM (#31328928) Homepage Journal

      Indeed. Check the caption:
      "Photos: Obama: Corbis; Image Simulation: Jarvis Haupt/Robert Nowak" (emphasis added by me)

      • Re: (Score:3, Informative)

        by ceoyoyo ( 59147 )

        "Image Simulation" likely means that they simulated the acquisition. The recovery of the "after" image from the "before" image is probably as shown, it's just that the "before" image was not acquired from an actual camera. Those results don't look particularly amazing for compressed sensing. See this for example [robbtech.com].

        • hmm .. call me blind, maybe it's the low resolution, but I don't see much difference between D and F.
          • by ceoyoyo ( 59147 )

            Yes, that's the idea. D is the original, E is the undersampled and F is the CS reconstructed image. F is visually identical to D, meaning the reconstruction worked very well.

            Incidentally, that's not really low resolution. A typical MR image is about 256x256. I think I made that image 1024 pixels across and there are three images across with a bit of space between, so the individual images are pretty close to actual size.

            • ah. sorry. I misunderstood what I was seeing. I thought D was the scanned image, E was D being processed, and F the result. Then yes, you are indeed right, it is impressive!
    • by l00sr ( 266426 )

      For real images created using compressed sensing, check out Rice's one-pixel camera [rice.edu].

  • by jellomizer ( 103300 ) on Tuesday March 02, 2010 @08:56AM (#31329120)

    After applying the Noise filter to mess up my image I hit Undo and my image is back to normal.

  • by damn_registrars ( 1103043 ) <damn.registrars@gmail.com> on Tuesday March 02, 2010 @08:58AM (#31329140) Homepage Journal
    Did we really need to refer to it as CS in the summary? A quick glance at the summary could lead one to think that this guy is the inventor of Computer Science, rather than the correct Compressed Sensing... In the summary of an article that is concerned (in part) with maintaining information after compression, we lost quite a bit of information in abbreviating the name of his algorithm.
    • by Dunbal ( 464142 ) *

      At first I thought he was referring to Credit Suisse. Then I thought no, this is an article about Counter Strike. Then perhaps I thought it meant CS gas. Then perhaps, having been betrayed by an uncooperative context, I thought like you it meant Computer Science. But no - lo and behold "CS" stands for "Compressed Sensing", a new algorithm called "CS" by 1) those working on it and 2) those who have absolutely no idea what it is or how it works, but want to sound cool anyway because hey, what's cooler than u

      • As long as the acronym is explicitly defined, it doesn't matter how obscure it is. That's proper writing style.

        That was the beginning of compressed sensing, or CS

        And there it is in the article, what are you complaining about again? Oh right, TFA and slashdot editors. Carry on, then.

        • And there it is in the article, what are you complaining about again? Oh right, TFA and slashdot editors. Carry on, then.

          Precisely. Because while it was defined in the article, it was not defined in the summary. The summary jumped immediately from the name of the algorithm to using the shorthand, without ever saying that the shorthand would be used in place of the full name. And being as there are other uses of the CS acronym - especially in the slashdot community - the slashdot editors failed miserably by not stating that they were going to reuse a commonly used acronym.

    • by unitron ( 5733 )

      Yes, but a quick application of the Compressed Sensing Algorithm to the letters CS will shortly reveal that it stands for Compressed Sensing.

      If it stood for Computer Science instead, the algorithm would have been able to sense that, in a compressed sort of way.

    • by ergean ( 582285 )

      CS - The inventor of Counter-Strike!!!

  • These are fancy words for what is nothing more than automated educated guessing. (And re-vectorization.)

    Yes, you can guess that a round shape is round, even when a couple of pixels are missing. But you can not guess that one of these missing pixels actually was a dent. So this mechanism here would still make that dent vanish. Just in a less-obvious way. (Which can be very bad, if that dent was critical.)

    Essentially if you have a lossy process, you are always going to have a lack of details, and that’

    • by ceoyoyo ( 59147 )

      You've missed the point, which is not surprising considering the way the article is written.

      Compressed sensing exploits the observation that almost every useful image is actually sparse - it contains much less information than the pixels that make it up can store. Furthermore, if you undersample that image in the right way, the original data is recoverable.

      For a reasonable level of undersampling (and a sparse image) CS will give you a perfect reconstruction, just like gzip, for example. The important diff

  • I can finally stop reading the articles and the summaries, and apply this algorithm to the first post to understand the article instead. What a time saver!

  • As soon as I read the article, it seemed fishy to me. How can you create data where it doesn't already exist? If you take a scan of a patient, a tumour will either show up or not show up in the data. If it shows up, there's no need for enhancement. If it doesn't show up, no amount of enhancement can cause it to do so.

    Then I came across this blog post [wordpress.com] by Terence Tao, one of the researchers mentioned in the Wired article.

    It has some very interesting explanations of how this is supposed to work. I'm still not

    • The key is that the image must be sparse (and almost all useful images are sparse). By definition, a sparse image contains less information than the pixels that make it up can store. Thus, it is compressible. So you're not creating data where it doesn't exist, you're just not sampling and storing the redundant parts.

      It's no more magic than gzip or jpeg compression.
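
      The gzip comparison is easy to try (a small sketch, assuming numpy; the byte strings are synthetic):

      import gzip
      import numpy as np

      rng = np.random.default_rng(0)
      noise = rng.integers(0, 256, size=100_000, dtype=np.uint8).tobytes()
      ramp = np.tile(np.arange(256, dtype=np.uint8), 400)[:100_000].tobytes()   # highly structured bytes

      print(len(gzip.compress(noise)) / len(noise))   # about 1.0: random bytes do not compress
      print(len(gzip.compress(ramp)) / len(ramp))     # far below 1.0: the structure compresses away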

      • Sorry to be dense, but I don't understand where compression comes into this. You're not compressing anything, you're somehow discovering data that wasn't sampled in the first place. I don't see the relationship between the two concepts.

        Can you explain what I'm missing, in terms of my original example? If there's a dark spot on the image indicating a potential tumour, then that information is there in your data, and no clever processing is necessary. If the dark spot is not there, no amount of processing wil

        • by Emb3rz ( 1210286 )
          You're missing the scope of the sampling. It doesn't sample a 200x200 square and give you a 1024x768 image; it samples random pixels from the range you are looking to come out with in the end.

          To put it in Javascript...

          for (var rows = 0; rows < maxrows; rows++) {
            for (var cols = 0; cols < maxcols; cols++) {
              // keep roughly 20% of the pixels, chosen at random
              if (Math.random() < 0.2) { StorePixel(cols, rows); }
            }
          }

          And if taking an image of something that typically appears in the natural world, you will come out with a picture that is "not wrong." That means
          • It doesn't sample a 200x200 square and give you a 1024x768 image, it samples random pixels from the range you are looking to come out with in the end.

            Can you explain how picking a pixel at random is better than sampling every 4th pixel? Surely the randomness just increases the chance that you'll miss some essential feature in the image?

            Say a potential tumour in the image is 5 pixels wide. Sampling every 4th pixel would guarantee you catch the tumour (the number of pixels to sample is chosen on the basis of the smallest tumour it is necessary to catch). Sampling an identical number of random pixels, on the other hand, would mean ther
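
            The standard CS answer is incoherence: regular undersampling produces aliases that are indistinguishable from the real feature, while random undersampling smears the ambiguity into low-level noise that a sparse model can reject. A small sketch (assuming numpy; the signal and the rates are invented):

            import numpy as np

            N, k0 = 512, 37
            t = np.arange(N)
            x = np.cos(2 * np.pi * k0 * t / N)            # one "feature", sparse in frequency

            rng = np.random.default_rng(0)
            regular = np.zeros(N)
            regular[::4] = 1                              # keep every 4th sample
            randmask = np.zeros(N)
            randmask[rng.choice(N, N // 4, replace=False)] = 1   # keep the same number, at random

            for name, mask in [("every 4th", regular), ("random", randmask)]:
                spectrum = np.abs(np.fft.rfft(x * mask))
                top = np.sort(spectrum)[::-1]
                print(name, round(top[0] / top[1], 2))
            # "every 4th": ratio ~1.0, the true peak has equally strong aliases;
            # "random": the true peak clearly dominates its strongest competitor.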

        • by ceoyoyo ( 59147 )

          When you compress something you represent it in such a way that you can reconstruct the original based on less data. Effectively you're "discovering" data that wasn't sampled (stored) in the first place. Except, with lossless compression at least, you're not really doing this. The compression process discards only redundant data.

          Compressed sensing works in much the same way except that you effectively treat your acquisition and display process as you would your reading-from-disk-and-decompressing proces

  • Relevant information: I'm a physicist, and my research group is actively researching quantum state tomography via compressed sensing.

    This technique is also quite useful in quantum state tomography. Consider a qubyte. We represent it by a 2^8 x 2^8 matrix of complex numbers. Now we want to measure it. We have to make 2^16 measurements (keep in mind that a quantum measurement is a nontrivial task), and use this data to reconstruct the original matrix, which again is a very intensive task, if done right (ther
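
    Rough counting for the sizes mentioned above (my numbers; the r*d*log^2(d) scaling is the one reported in the compressed-tomography literature, not a claim about this particular group's method):

    import math

    d = 2 ** 8                                   # Hilbert-space dimension of a qubyte
    print(d * d)                                 # 65536 = 2^16, matching the measurement count above
    r = 1                                        # nearly pure states have low rank
    print(r * d * math.ceil(math.log2(d)) ** 2)  # ~16384: rough order of settings when exploiting low rank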

  • Chloe O'Brian has been able to do this for years while hiding in an improvised safe house with a computer array composed of old Vic-20s and acoustic modems.
  • [The inventor of CS, Emmanuel] Candès...

    The way I understand it, there is actually a bit of controversy over whether Candès or David Donoho [stanford.edu] "invented" compressed sensing. It seems to me that Donoho was actually first, but Candès ended up getting most of the credit.

  • This is an ENTIRE FIELD in the satellite remote sensing community. There are so many papers on improving limited satellite imagery it's nauseating. Browse: http://www.igarss09.org/Papers/RegularProgram_MS.asp [igarss09.org]
  • Nostradamus predictions... each new researcher recovers new data from that noise. (Each word of this should be in quotes, as almost none of it means what it appears to.)

    It is risky to "fill in the blanks" or give your own (i.e. following a set of rules) meaning to noise; it will show things as you think they should be, and the exceptions will be missed or discarded.
  • Compressed sensing is the same mathematics behind the Rice single pixel camera [slashdot.org] covered on Slashdot a few years ago.

  • To the Zapruder film!
    /jk
