Space Science

NASA Develops Tech To Hear Words Not Yet Spoken

alex_guy_CA writes "Yahoo News has a story about technology that comes close to reading thoughts not yet spoken, by analyzing nerve commands to the throat. 'A person using the subvocal system thinks of phrases and talks to himself so quietly it cannot be heard, but the tongue and vocal cords do receive speech signals from the brain,' said developer Chuck Jorgensen, of NASA's Ames Research Center, Moffett Field, California. Jorgensen's team found that sensors under the chin and one on each side of the Adam's apple pick up the brain's commands to the speech organs, allowing the subauditory, or 'silent speech,' to be captured. The story indicates the method could be useful on space missions or in other difficult working conditions."
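The pipeline the story describes — electrodes pick up the nerve signals sent to the speech muscles, and software then matches those signals against a small trained vocabulary — can be sketched in miniature. This is purely illustrative: NASA's system used trained pattern-recognition software on real electromyographic (EMG) data, whereas the toy below classifies synthetic two-channel "signals" by per-channel RMS energy and nearest-centroid matching. All names and numbers here are invented for the sketch.

```python
import math
import random

rng = random.Random(0)  # fixed seed so the toy demo is repeatable

def rms_features(window):
    """Per-channel root-mean-square energy of one multi-channel window."""
    return [math.sqrt(sum(s * s for s in ch) / len(ch)) for ch in window]

def train_centroids(labeled_windows):
    """Average the feature vectors of each word's training windows."""
    sums, counts = {}, {}
    for word, window in labeled_windows:
        feats = rms_features(window)
        acc = sums.setdefault(word, [0.0] * len(feats))
        for i, f in enumerate(feats):
            acc[i] += f
        counts[word] = counts.get(word, 0) + 1
    return {w: [v / counts[w] for v in acc] for w, acc in sums.items()}

def classify(window, centroids):
    """Pick the word whose centroid is closest to this window's features."""
    feats = rms_features(window)
    return min(centroids,
               key=lambda w: sum((a - b) ** 2
                                 for a, b in zip(feats, centroids[w])))

def fake_window(levels, n=64):
    """Synthetic stand-in for an EMG window: one noise channel per level."""
    return [[rng.gauss(0, lvl) for _ in range(n)] for lvl in levels]

# Two imaginary vocabulary words with distinct energy signatures.
training = [("stop", fake_window((1.0, 0.2))) for _ in range(5)]
training += [("go", fake_window((0.2, 1.0))) for _ in range(5)]
centroids = train_centroids(training)
print(classify(fake_window((1.0, 0.2)), centroids))
```

Real systems face much harder problems — electrode placement, per-user variation, far richer features than raw energy — but the train-then-match shape above is the general idea behind recognizing a small silent-speech vocabulary.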
This discussion has been archived. No new comments can be posted.

  • by Cornelius the Great ( 555189 ) on Thursday March 18, 2004 @02:47AM (#8596238)
    The sequels to the Ender's Game (Orson Scott Card) books featured this technology as well. Ender would subvocalize with Jane, who travelled on the ansible.

    Good story, good universe. I hear there's an Ender's Game movie in the works.
  • by TekGoNos ( 748138 ) on Thursday March 18, 2004 @03:11AM (#8596362) Journal
    Most (all?) people actually "speak" when they think in words. This is most observable while reading.

    When you think (or read) "banana," your brain creates the same signals (but at lower magnitude) as when you say it. Your tongue actually moves while you're reading. Experiments with mute people have shown that they actually move their hands slightly, as if they were forming the words they read in sign language.

    This technology does not read your thoughts, but the signals sent to your vocal system. As it catches the signals before they reach the vocal system, it reads "words not yet spoken." Whether you speak the words or just think them doesn't matter to the system. However, the system doesn't read your thoughts: if you just imagine a banana, but do not think the word "banana," the technology wouldn't catch it (even if improved), as imagining a banana doesn't trigger a signal to the vocal system.
  • by zytheran ( 100908 ) on Thursday March 18, 2004 @03:39AM (#8596473)
    .. not what you think is printed before your eyes get around to reading it.

    "A person using the subvocal system thinks of phrases and talks to himself so quietly it cannot be heard, but the tongue and vocal cords do receive speech signals from the brain,"

    Notice the phrase "..talks to himself so quietly.."?
    This is NOT the same as "thinks to himself."

    i.e. you mouth the words but don't blow air through your airway, so no noise is made.

    It's not friggin' mind reading, despite what most of the +5 posts seem to think.
  • by Max Romantschuk ( 132276 ) <max@romantschuk.fi> on Thursday March 18, 2004 @03:52AM (#8596519) Homepage
    Well, Hawking has a muscle disorder. Exactly what is involved in ALS is beyond me — whether it's just the muscles, or a problem with the nerves getting the signals to the muscles. If the problem is the former, it may save him a lot of typing. If it's the latter, it would be of no use.

    ALS [wikipedia.org] affects the motor neurons themselves, so the signals never reach the muscles. Oversimplified, it's much like an electric motor with a broken wire: the motor might be fine, but it gets no electricity and subsequently does nothing. ALS is the same, but human muscles also suffer serious deterioration when not being used.

    In any case, this technology would be of no use, as the muscles are fine but the nerves aren't.
  • by John Courtland ( 585609 ) on Thursday March 18, 2004 @04:27AM (#8596624)
    There's this thing called LASH... I forget exactly what the acronym means, but I think it stands for Los Angeles Silent/SWAT Headset. It is basically a collar that goes around your throat: you mouth the words you want to say, and everyone else with a receiver can hear them. You don't whisper — you just silently make the mouth gestures and the machine amplifies them. Perfect for a SWAT team, or pretty much any military force.
  • by Red Alastor ( 742410 ) on Thursday March 18, 2004 @05:52AM (#8596884)
    Well... they talked about it *years* ago; I don't know if the news you heard is fresh. Orson talks about it in his book Characters and Viewpoint (c) 1988. They wanted Ender to be 16 at the start of the story.
  • by gantrep ( 627089 ) on Thursday March 18, 2004 @07:17AM (#8597134)
    The way I see it, there are three kinds of silent reading.

    The first involves real subvocalization, meaning people who read out loud very quietly and move their lips. I don't want to offend anyone, but I find it kind of annoying when people do that; don't know why. In my experience, the people who do this are a minority, and it occurs most often among weak readers or people intentionally reading very carefully. I actually do it myself when I am following chemistry lab instructions, or sometimes when I read in Spanish, a second language.

    The next kind of reading is what I think is the most common. There is no subvocalization of any kind — no tongue moving inside the mouth, no vocal cord movement — but the written words are mentally translated first to sound-thought and then comprehended. I'm not sure if "sound-thought" makes sense, but it's the best description I can think of. Sound-thought is then what is actually processed and understood by the brain. This I think is most common because I believe most people's thought process takes the form of an internal monologue. A notable exception: I know that when I do calculus, there is a lot of visualization, algebra, logic, and simple math going on in my head, and very little verbal thought.

    The last kind of reading I think could be considered a form of speed-reading, though I have never taken any speed-reading instruction. I use this whenever I am not reading for pleasure and the passage need not be read super-carefully. I look at the words very quickly and directly process them without first converting to internal sound-thought. If the passage is divided into narrow columns, I can process a whole line at a time and scan straight down the page without any lateral eye-movement.

    The level of comprehension and retention when I do this is worse than with methods one or two, but it's extremely fast. The biggest advantage for me anyway that it has over methods one and two is that I retain parts of the text visually. I have a quasi-photographic memory and when I took a course in US History for example, I was able to read names and dates off pages in visual memory because I read the text in that manner.

    Now, to get a little more back on topic....

    When I was in high school, I competed in quiz bowl trivia competitions, similar to Jeopardy!, but the questions are only read aloud, and you can buzz in and interrupt the reader at any time before the question is finished to give an answer. Our team and other very good teams learned ways to anticipate the question.

    Some teams' familiarity with the subtle structure and format of the questions gave them such strong intuition that weak teams would swear they were cheating. You got to know individual readers and how much they would continue to read after you buzzed in (sometimes finishing the word, sometimes stopping mid-syllable). Which brings me to my point: if you watch somebody's mouth very carefully when they speak, as they are ending a word and starting to form the next, it's easy to tell what the next letter out of their mouth will be.

    A teammate of mine once rang in after a reader had read the letter "J" and then stopped. He could tell the next letter was going to be D, and there had already been several similar questions in earlier rounds asking for different information about the same subject (questions often repeat subjects, but rarely answers, between rounds), so he knew pretty certainly what the question was going to ask. So he answered Holden Caulfield.

    The other team wasn't pleased at all.

    This skill could be practical in interrogations of criminal or terrorist suspects, and is what I first thought of when I read the headline of the story. (A better headline would be "NASA develops computer processing of subvocal speech.") A suspect under a lot of stress may come close enough to giving you a name that you can tell from the shape of their mouth that it probably starts with an L rather than a T before refusing to speak any more — and that might be enough.
  • by phug ( 454216 ) on Thursday March 18, 2004 @09:41AM (#8597714)
    It's also used effectively in Cory Doctorow's "Down and Out in the Magic Kingdom." [craphound.com] Wuffie!
  • by YellowBook ( 58311 ) on Thursday March 18, 2004 @10:35AM (#8598162) Homepage
    Or the ability to "wiretap" the things floating around in someone's head, the dissents they thought that they were voicing only to themselves.

    This technology doesn't detect thoughts, it detects subvocalization. You don't normally subvocalize when thinking to yourself; it takes a conscious effort similar to talking out loud.

  • by Ieshan ( 409693 ) <ieshan@g[ ]l.com ['mai' in gap]> on Thursday March 18, 2004 @10:49AM (#8598326) Homepage Journal
    If you're seriously interested, you may want to check out

    http://www.pigeon.psy.tufts.edu [tufts.edu]

    It's a concept-formation and learning lab using pigeons as a test system for abstract concept formation in the absence of language. There's an entire free book on the website, with articles by some of the biggest researchers in the field, along with live demos and other things.

    I didn't write it, but I work in the lab, and it's a great introduction to the field.
  • by Anonymous Coward on Thursday March 18, 2004 @10:51AM (#8598346)
    It's also featured in David Brin's novel "Earth," where computers can be managed using the subvocal system.
  • by asoap ( 740625 ) on Thursday March 18, 2004 @11:58AM (#8599145)

    Ugh, dude...

    You might want to read the article again.

    "Biological signals arise when reading or speaking to oneself with or without actual lip or facial movement."

    Notice the part that says "without actual lip or facial movement"? So we have, "talking to yourself", and "no lip or facial movement". That sounds a lot like "thinking to yourself" to me.

    -asoap

  • by StateOfTheUnion ( 762194 ) on Thursday March 18, 2004 @01:15PM (#8600147) Homepage
    LASH uses a throat microphone. It straps to the throat, and you have to speak (make noise) for the microphone to pick up anything. The advantage of a throat microphone is that it doesn't pick up ambient noise. According to one of the vendors ( swatheadsets.com [swatheadsets.com]), you are supposed to be understandable through the microphone even under the beating rotors of a helicopter.

    One disadvantage is that percussive sounds (in English, the sounds of "B," "T," "D," etc.) are not picked up by most throat mikes, because these sounds are made mostly by the lips, not the throat.
