Biotech Medicine

Machine Translates Thoughts Into Speech

Posted by timothy
from the think-I-better-not-wear-one dept.
An anonymous reader points to this explanation of a brain-machine interface for real-time synthetic speech production, which has been successfully tested in a 26-year-old patient. From the article: "Signals collected from an electrode in the speech motor cortex are amplified and sent wirelessly across the scalp as FM radio signals. The Neuralynx System amplifies, converts, and sorts the signals. The neural decoder then translates the signals into speech commands for the speech synthesizer."
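The signal chain the summary describes (electrode → amplification → spike sorting → neural decoding → speech synthesizer) can be sketched as a simple pipeline. Everything below is a hypothetical stand-in — `amplify`, `sort_spikes`, `decode`, and `synthesize` are placeholder functions, not the Neuralynx or decoder APIs, and the numbers are invented — it only shows the shape of the pipeline, not any actual algorithm.

```python
def amplify(samples, gain=1000.0):
    """Placeholder for the amplifier stage: scale raw electrode samples."""
    return [s * gain for s in samples]

def sort_spikes(samples, threshold=0.5):
    """Crude threshold crossing as a placeholder for spike sorting."""
    return [i for i, s in enumerate(samples) if s > threshold]

def decode(spike_times):
    """Placeholder decoder: map spike count to an invented formant command."""
    return {"formant_hz": 500 + 10 * len(spike_times)}

def synthesize(command):
    """Placeholder synthesizer: pretend to emit audio at the decoded formant."""
    return f"<audio at {command['formant_hz']} Hz>"

# Run the whole chain on a few fake electrode samples.
raw = [0.0001, 0.0009, 0.0002, 0.0008]
print(synthesize(decode(sort_spikes(amplify(raw)))))
```

In the real system, per the article, the amplification and transmission happen in implanted hardware and the sorting is done by the Neuralynx system; only the stage boundaries here reflect the description.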
  • Amazing. (Score:4, Interesting)

    by johncadengo (940343) on Friday January 01, 2010 @05:26AM (#30613214) Homepage

    Imagine the implications for people with cerebral palsy or paralysis of similar nature. I would always cringe when I watched someone who had been severely limited in their motor functions and could not speak, but with the help of an unconventional system, could communicate. They would stare at letters on a placard, and would spell out (at a rate worse than texting!) each word letter by letter. Or they would attach a rod to the forehead of the person and have them peck at a screen, again, typing out each word letter by letter. I get frustrated enough texting with one hand--these people have amazing patience.

    There is a movie, based on a book based on a true story, called The Diving Bell and the Butterfly [wikipedia.org], in which a man suffers an accident and is thought to be in a vegetative state, but is actually fully conscious and aware of everything around him. This is called locked-in syndrome [wikipedia.org], and it is scary even to imagine. He ended up communicating with the outside world by BLINKING. And even blinking was difficult for him, since he had control of only one eyelid. The nurse would slowly recite letters in order of the most frequently used (he was French, so the ordering followed the frequency of letters in French words), and he would blink to indicate the correct letter. Needless to say, this was a long and tedious process. But, as a testament to the perseverance of the human spirit, he actually wrote a book sharing his experiences of being in this state.
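    The frequency-ordered recitation described above is a form of partner-assisted scanning, and it's easy to see why the ordering matters: each letter costs the helper as many recitations as that letter's rank in the ordering. A small sketch, using an illustrative English frequency ordering (the book's actual ordering was for French):

```python
# Illustrative English letter-frequency ordering, most common first.
# The ordering used in the book was for French; this one is only for
# demonstrating the cost model.
FREQ_ORDER = "ETAOINSHRDLCUMWFGYPBVKJXQZ"

def recitations(word: str, order: str = FREQ_ORDER) -> int:
    """Total letters the helper must recite to spell `word`:
    each letter costs its 1-based rank in the ordering."""
    return sum(order.index(ch) + 1 for ch in word.upper())

print(recitations("HELLO"))  # H=8, E=1, L=11, L=11, O=4 -> 35
```

    With an alphabetical ordering instead, the same word would cost far more recitations on average, which is exactly why the nurse used frequency order.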

    Imagine the freedom he would have experienced at being able to talk again.

    I really hope this becomes a reality.

  • Hawking (Score:5, Interesting)

    by dreamchaser (49529) on Friday January 01, 2010 @05:28AM (#30613230) Homepage Journal

    Imagine adapting this type of technology to other forms of input, such as a thought-controlled dictation system. Imagine how much more someone like a Hawking could accomplish.

  • by Richard Kirk (535523) on Friday January 01, 2010 @06:54AM (#30613444)

    That seems an unnecessary piece of anti-science paranoia. The people doing the experiment are not the white-coated demons of science fiction. Even if they were as amoral as you suggest, it would still be practical for them to get the patient's permission before starting an experiment that took over three years to set up.

    On the radio recently, I heard about the difficulties the doctors had with an even more extreme 'locked-in' case that had no eye movement. They got the patient to communicate one bit at a time by imagining tasting milk or lemon juice for minutes at a time. This caused the patient's saliva to change pH. This was not simply "think lemons if it is ok to operate", followed by "oh, bother, best of three?" - they had to establish that the intelligence was present, understanding what was being said, and replying in a reliable manner.
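    The point about "replying in a reliable manner" has a simple statistical reading: a noisy one-bit channel (milk vs. lemon) can be made arbitrarily reliable by repeating the question and taking a majority vote. This is an illustrative sketch of that idea, not the clinicians' actual protocol; the per-trial accuracy `p = 0.8` is an invented number.

```python
from math import comb

def majority_error(p: float, n: int) -> float:
    """Probability that a majority vote over n independent yes/no
    trials is wrong, given per-trial accuracy p (n odd)."""
    q = 1 - p
    return sum(comb(n, k) * q**k * p**(n - k)
               for k in range(n // 2 + 1, n + 1))

# Error rate shrinks quickly as the question is repeated.
for n in (1, 3, 5, 9):
    print(n, round(majority_error(0.8, n), 4))
```

    This is why the doctors needed minutes-long trials repeated many times: one salivary pH swing proves little, but a consistent pattern of correct answers rules out chance.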

    There is a bit about milk-or-lemons and other attempts to communicate in... http://www.newscientist.com/article/mg19526171.500-humans-can-adapt-to-almost-anything-even-paralysis.html?full=true [newscientist.com]

  • by DynaSoar (714234) on Friday January 01, 2010 @07:04AM (#30613460) Journal

    Most implant approaches use electrodes shoved in from the outside, intended to work immediately. That invasive technique leaves the person open to infection, and the neurons contacted tend to die fairly quickly, requiring yet another round of the same. This approach takes a long time, but eliminates the chance of infection (after the obviously necessary implantation) and lets neurons grow into and around the electrodes, so the neurons producing signal are unlikely to die off soon, allowing long-term contact and communication.

    I'm sure there will be improvements on this, but this looks to me to be the first really viable direct neural signal collection technique.

    "Five years ago, when the volunteer was 21 years old, the scientists implanted an electrode near the boundary between the speech-related premotor and primary motor cortex (specifically, the left ventral premotor cortex). Neurites began growing into the electrode and, in three or four months, the neurites produced signaling patterns on the electrode wires that have been maintained indefinitely.

    Three years after implantation, the researchers began testing the brain-machine interface for real-time synthetic speech production. The system is “telemetric” - it requires no wires or connectors passing through the skin, eliminating the risk of infection. Instead, the electrode amplifies and converts neural signals into frequency modulated (FM) radio signals. These signals are wirelessly transmitted across the scalp to two coils, which are attached to the volunteer’s head using a water-soluble paste. The coils act as receiving antenna for the RF signals. The implanted electrode is powered by an induction power supply via a power coil, which is also attached to the head."

    Rather than risking killing off speech center neurons in the implant process, they instead implant the electrode in the pathway through which the speech center communicates outbound. Previous attempts by others went directly for the primary processing centers. This small change shows remarkable foresight. I'd call this the first true hack in neural interfacing.

    The only point of clarification I'd add is to say "through the scalp" instead of "across"; the latter more often implies a lateral vector. And the only point I'd request is: if only the scalp needs to be traversed, is the transmitter between the skull and scalp? It appears so, but isn't stated as such in the paper (the PLoS article's URL is at the bottom of TFA). In any case, the FM transmission through the scalp does away with all the permanent jacks and sockets that SF and Hollywood have always used to signify brain/machine interfacing. With this one implementation, the future image of neural interfacing becomes something like a hair net with buttons sewn into it (we already have EEGs like this). Someone call Larry Niven. Wireheads will be buttonheads.

    A future hack will almost certainly be to connect the signal wires running from the scalp to a second transmitter operating between the person and the machine. This would eliminate the direct connection and allow movement, including ambulatory data collection and processing. That not only makes testing in realistic situations possible, but also neural control of machine-mediated locomotion for the paralyzed, without being restricted by the length of a cable. An obvious addition would be a transmitter at the machine with a receiver on the person, running the signals into the relevant muscle groups. This would also require induced power, perhaps more than the FM systems being used can handle. And are we not on the verge of wireless power induction for operating such devices, the same technology intended to recharge batteries and even run laptops?

    A bit farther in the future will be a switch from spike analysis of neural firing to time/frequency analysis of synchronized activity, such as EEGs examine. The former requires computation that's commonly available. The latter requires continuous wavelet analysis.
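    To make the contrast concrete, here is a minimal continuous wavelet transform (CWT) sketch of the kind the parent comment means: correlating a signal with scaled Morlet wavelets to get a time/frequency picture rather than a spike count. Pure NumPy; the wavelet parameters and toy signal are illustrative, not tuned for real EEG.

```python
import numpy as np

def morlet(n, scale, w0=5.0):
    """Complex Morlet wavelet sampled at n points for a given scale."""
    t = np.arange(-n // 2, n // 2) / scale
    return np.exp(1j * w0 * t) * np.exp(-t**2 / 2) / np.sqrt(scale)

def cwt(signal, scales, w0=5.0):
    """One row per scale, one column per time sample;
    |coefficient| approximates local power at that scale."""
    out = np.empty((len(scales), len(signal)), dtype=complex)
    for i, s in enumerate(scales):
        wav = morlet(min(10 * int(s), len(signal)), s, w0)
        out[i] = np.convolve(signal, np.conj(wav)[::-1], mode="same")
    return out

# Toy signal: a 10 Hz burst between 0.8 s and 1.2 s, sampled at 100 Hz.
fs = 100
t = np.arange(0, 2, 1 / fs)
sig = np.sin(2 * np.pi * 10 * t) * ((t > 0.8) & (t < 1.2))
coeffs = cwt(sig, scales=np.arange(1, 16))
print(coeffs.shape)  # (15, 200)
```

    The magnitude of `coeffs` is large only at the scales and times where the burst lives, which is the kind of localized, synchronized activity spike counting can't show.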
