Improving Noise Analysis with the Sound of Silence

Roland Piquepaille writes "Researchers at Rockefeller University have built a mathematical method and written an algorithm based on the way our ears process sound that provides a better way to analyze noise than current methods. Not only is their algorithm faster and more accurate than previous ones used in speech recognition or in seismic analysis, it's also based on a very non-intuitive fact: they know what a sound was by knowing when there was no sound. 'In other words, their pictures were being determined not by where there was volume, but where there was silence.' The researchers think that their algorithm can be used in many applications and that it will soon give computers the same acuity as human ears. Read more for additional references and pictures about this algorithm."
This discussion has been archived. No new comments can be posted.

  • Eh? (Score:2, Funny)

    by RedOregon ( 161027 )
    What? WHAT? I can't hear you!
  • by Red Cape ( 854034 ) on Monday June 12, 2006 @03:17PM (#15518807) Homepage Journal
    I hear dead people...
  • by bsdluvr ( 932942 ) on Monday June 12, 2006 @03:20PM (#15518837) Homepage
    "...the Rockefeller University researchers remained absolutely silent."
  • Taggers... (Score:1, Informative)

    by Anonymous Coward


    Tag this as "rolandpiquepaillespam"
    • Why? (Score:2, Insightful)

      by spun ( 1352 )
      He's stopped linking to his damned blog, so he's not getting undeserved hits. I too was kinda pissed when every third story was a Piquepaille post, where he reprinted stories from other sources nearly verbatim, interspersed with random pointless comments, but he's not doing that anymore, so what's the big deal?
  • by DanteLysin ( 829006 ) on Monday June 12, 2006 @03:21PM (#15518851)
    'In other words, their pictures were being determined not by where there was volume, but where there was silence.'

    If only I could sell this theory to my wife.
  • by ishmalius ( 153450 ) on Monday June 12, 2006 @03:25PM (#15518892)
    This seems almost analogous to dark image analysis in astronomy. This started out as merely taking a photo of nothing, to find the aberrations of the collector. This dark image would be subtracted from the target image, to produce an improved version with a lot of the artifacts removed. It has since grown to basically modeling the environment, and judging what an image of nothing would look like according to the model.
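
    For the curious, that correction step is just an element-wise subtraction. A minimal sketch in Python/NumPy, with synthetic "exposures" standing in for real camera data:

        import numpy as np

        # Stand-in CCD exposures; real data would come from a camera, not a
        # random-number generator.
        rng = np.random.default_rng(0)
        target = rng.poisson(lam=50.0, size=(512, 512)).astype(float)  # image of the sky
        dark = rng.poisson(lam=5.0, size=(512, 512)).astype(float)     # shutter-closed frame

        # Subtracting the dark frame removes fixed sensor artifacts (hot pixels,
        # thermal signal); clip so no pixel goes negative.
        corrected = np.clip(target - dark, 0.0, None)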
  • by Overzeetop ( 214511 ) on Monday June 12, 2006 @03:33PM (#15518953) Journal
    Sorry, just took a break from a busy day, and didn't see the obviously useless reply post.
  • They'll already have all of the senses ready to plug in. Just earlier on Slashdot, this article was posted about giving robots a sense of touch: article.pl?sid=06/06/11/1656248 [slashdot.org]

    Not only that, but there have been numerous articles on the development of electronic eyes. By the time they've got all the kinks worked out in AI, they'll already be able to let the new robots sense our world in the way we do. The only things they're really missing are the senses of smell and taste. I can imagine those won't be ne
    • But isn't taste a little more complicated than that? When I eat a cookie, I might say it's delicious. But when you eat the cookie, you think it is dry and tasteless. Who is the ultimate authority on what an object "tastes like"?

      Granted, it's easy enough for two people to agree that a lemon is sour instead of sweet, but where exactly is the line between the two? Will you and I (and our Robo-Taster 2000) agree every time that an object is sour instead of sweet? Maybe, if we give it a numerical value, bu
  • it's also based on a very non-intuitive fact: they know what a sound was by knowing when there was no sound.

    Surely there has to be more to it than that? Not only is it not "non-intuitive", it's completely bloody obvious, so much so that I already assumed that people did this in professional recording situations.

    Think about it, it's just like weighing two things together, and then finding the weight of one item by weighing the other and taking the difference between the two measurements.

    • I think that what the researchers did was very non-intuitive. They describe the sound mathematically by the locations of the silent points. This requires a much smaller representation than describing the multitude of locations that are non-zero. The assumption (I believe they use an actual theorem) is that the points of silence uniquely describe a sound. i.e. there are no two sounds that have the exact same set of zero-points. You are right in that subtractive synthesis of noise from a signal is a tech
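
      To make the idea concrete (a toy illustration, not the researchers' actual method), you can strip a band-limited signal down to the times at which it crosses zero and still keep a surprisingly distinctive, compact fingerprint:

        import numpy as np

        fs = 8000                              # sample rate in Hz, arbitrary for this example
        t = np.arange(0, 0.05, 1.0 / fs)       # 50 ms of signal
        x = np.sin(2 * np.pi * 440 * t) + 0.3 * np.sin(2 * np.pi * 1320 * t)

        # Sample indices where the waveform changes sign, i.e. passes through "silence".
        crossings = np.where(np.signbit(x[:-1]) != np.signbit(x[1:]))[0]
        crossing_times = crossings / fs

        # The representation keeps only these times, a much smaller set than the
        # full list of non-zero sample amplitudes.
        print(len(x), "samples reduced to", len(crossing_times), "zero-crossing times")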
  • Well, first off, FTA: "In fact, he notes, it may be the same type of method the brain actually uses. " (emphasis mine)

    Second, we can all very easily deduce that our interpretation of sound deals with a very low signal-to-noise ratio. How many background sounds are we dealing with constantly? How surprising is it that analyzing sound subtraction (cancellation) from the noise is as effective as analyzing addition?

    I'm no hearing expert, and I'm definitely not an expert when it comes to algorithmic sou
  • but let me be the first (and last) to ask "But will it be possible to run this Zen processing on top of Xen?"
  • by Rob T Firefly ( 844560 ) on Monday June 12, 2006 @04:11PM (#15519218) Homepage Journal
    May bee this will fie Nelly meme the and off half fast peach wreck ignition soft where.
  • I've seen work (sorry, no web links) where a guy did audio/acoustic testing for the purposes of using audible sound to guide robots. Despite it not being intuitive, using standard PC sound hardware, he was able to get very accurate readings. Adding this algorithm to the mix of software he was using might actually give robots very useful audio sensors.

    I'm not sure how it would work, but he was able to determine position and distance quite well, but was having some issue with the different densities of materia
  • The Sounds of Silence. I'm sure that the original submitter, the Slashdot moderator, Slashdot itself, and everyone who downloaded the title text of "Sounds of Silence" when they opened this page will be hearing from the RIAA lawyers shortly. And for only $3000 in protection money, they'll leave you all alone until the next time.
  • Sorry but Simon and Garfunkel already published their research on this.
  • Half the time, I think people feel that science is out of their reach because the articles they read about it don't give them enough information to even start learning about it. When science is presented like this it gets reduced to blurbs with the logical content of zen koans: "sound from no sound," "noise from silence." A little more and I've got a haiku. I appreciate the need for simple summaries, but come on, at least link to the meat of it.
    • Re:Original paper? (Score:4, Informative)

      by iamlucky13 ( 795185 ) on Monday June 12, 2006 @07:53PM (#15520604)

      Heck...a little bit more and you've got a Simon and Garfunkel song. Perhaps you could even explain why the words of the prophets are written on the subway wall. But I digress.

      The summary is completely useless, and the article isn't much better. After reading the description 3 times, I figured out the graph, at least. X-axis is time, Y-axis is frequency, and color is amplitude, so it's essentially a time-dependent power spectral density (PSD) with anything above a cutoff amplitude shown in black. I believe the Navy uses a variation of this called a waterfall to help interpret sonar sounds. I got stuck again, though, reading their description of the sample.

      The two researchers used white noise -- hissing similar to what you might hear on an un-tuned FM radio -- because it's the most complex sound available, with exactly the same amount of energy at all frequency levels.

      Aside from the apparent infinite-energy contradiction if this were true, the graph clearly shows that the signal is both frequency- and time-dependent. Obviously, that's the case they ultimately have to deal with to apply this method, but the article suggests otherwise.

      As for what they're actually doing, presumably, instead of operating on a set of data that includes time, frequency, and amplitude, they are cutting it down to time, frequency, and sound/no sound. This would cut the data size by the amplitude resolution (e.g., 1/16th for 16-bit amplitude samples). This must assume that amplitude is irrelevant to the sound, which, based on my (limited) experience working with PSDs, I'm skeptical of. Perhaps the odds of getting the same digital time/frequency data for two different sources are low enough that this can be ignored, much like the likelihood of two data sets yielding the same MD5 sum is non-zero but insignificant.
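
      If I'm reading it right, that reduction would look roughly like the following sketch in Python with SciPy; the threshold and parameters here are pulled out of thin air, purely for illustration, not taken from the paper:

        import numpy as np
        from scipy.signal import spectrogram

        fs = 16000
        rng = np.random.default_rng(0)
        x = rng.standard_normal(fs)            # one second of white noise as a stand-in signal

        # Time-frequency picture: rows are frequencies, columns are time bins.
        freqs, times, power = spectrogram(x, fs=fs, nperseg=256, noverlap=128)

        # Collapse the amplitude axis to a binary sound / no-sound mask.
        threshold = np.median(power)           # arbitrary cutoff
        mask = power > threshold               # True where there is "sound", False where "silence"

        # Fraction of time-frequency cells above the cutoff.
        print(mask.shape, mask.mean())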

      • Actually, there's no cutoff. They transform the standard sonogram using phase-derived data into what you see. It effectively concentrates information, lowering the entropy and clearly enhancing feature recognition. If you want to know more, read the paper (not the article -- the article is garbage).
      • Disclaimer: I'm a PhD student in pretty much the same field, but I'm not experienced enough that anyone should take my opinions as fact.

        TFA looks interesting but overhyped to me.

        The summary is completely useless, and the article isn't much better. After reading the description 3 times, I figured out the graph, at least. X-axis is time, Y-axis is frequency, and color is amplitude, so it's essentially a time-dependent power spectral density (PSD) with anything above a cutoff amplitude shown in black.

        Not

    • I appreciate the need for simple summaries, but come on, at least link to the meat of it.

      The second link in the slashdot summary includes a link to the full text of the research article.

      Half the time, I think people feel that science is out of their reach because the articles they read about it don't give them enough information to even start learning about it. When science is presented like this it gets reduced to blurbs with the logical content of zen koans: "sound from no sound," "noise from silence."

  • We're still waiting for an algorithm that will restore Nixon's "accidentally" erased Watergate tape. If ever there were a sound that wasn't there...

    -dB

  • I couldn't find any factor of improvement, so it's hard to say how fantastic this is. Digging deeper, I failed at the math, but I wondered whether the beautiful pictures of white noise were just random number generators or equipment noise. If anyone could tell me more, I would be interested...
  • I work in the area of audio processing (speech recognition) and I can tell you that they do not have anything worthy of a press release. They distinguished a pure tone from noise, which is a very easy thing to do. Other than that, they just have a pretty picture. It's not useful for anything.
