
Translating Brain Waves Into Words

cortex writes with an excerpt from the L.A. Times: "In a first step toward helping severely paralyzed people communicate more easily, Utah researchers have shown that it is possible to translate recorded brain waves into words, using a grid of electrodes placed directly on the brain. ... The device could benefit people who have been paralyzed by stroke, Lou Gehrig's disease or trauma and are 'locked in' — aware but unable to communicate except, perhaps, by blinking an eyelid or arduously moving a cursor to pick out letters or words from a list. ... Some researchers have been attempting to 'read' speech centers in the brain using electrodes placed on the scalp. But such electrodes 'are so far away from the electrical activity that it gets blurred out,' [University of Utah bioengineer Bradley] Greger said. ... He and his colleagues instead use arrays of tiny microelectrodes that are placed in contact with the brain, but not implanted. In the current study, they used two arrays, each with 16 microelectrodes."
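For readers curious what "translating recorded brain waves into words" involves computationally, it boils down to multi-channel pattern classification: extract a feature vector from the two 16-microelectrode arrays for each spoken (or attempted) word, then train a classifier to pick the most likely word from a small candidate list. The sketch below is purely illustrative and assumes nothing about the Utah group's actual pipeline; the word list, the feature choice (mean band power per channel), and the scikit-learn classifier are all stand-ins.

    # Illustrative decoding sketch: classify one of a few candidate words from
    # 32 channels (two 16-microelectrode arrays) of band-power features.
    # NOT the published method; random data stands in for real recordings.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    WORDS = ["yes", "no", "hot", "cold", "hungry", "thirsty",
             "hello", "goodbye", "more", "less"]          # hypothetical list
    n_trials_per_word, n_channels = 20, 32

    rng = np.random.default_rng(0)
    X = rng.normal(size=(n_trials_per_word * len(WORDS), n_channels))
    y = np.repeat(np.arange(len(WORDS)), n_trials_per_word)

    # Cross-validated accuracy of a simple linear classifier. On random data
    # it should hover near chance (1/10) -- the baseline a real decoder must beat.
    clf = LinearDiscriminantAnalysis()
    scores = cross_val_score(clf, X, y, cv=5)
    print("mean accuracy: %.2f (chance = %.2f)" % (scores.mean(), 1 / len(WORDS)))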
  • How long before this evolves into something that can be used (after training the machine with direct interrogation) to steal secrets from people's minds?

    I mean... the movie just came out this summer.

    • Well, it's hooked up to your speech centre so it won't really be able to read your secrets unless you think them out loud with sensors attached directly to your brain.

    • Eat the Donut. Eat the donut. Eat the Donut. Eat the donut. Eat the Donut.

    • Re: (Score:2, Funny)

      Uh, the intelligence sector is way ahead of what the public gets to read about. They've had it for some time.
  • From TFA: (Score:3, Insightful)

    by kurokame ( 1764228 ) on Wednesday September 08, 2010 @04:32AM (#33506590)

    It uses "not new" technology to select words with 50% accuracy from a list such as "yes" and "no"...really. (Okay, it hits 90% accuracy with only two items and goes down to 48% with 10.)

    In other news, you can use P300 responses picked up with a $300 off-the-shelf over-the-hair EEG receiver to select from a grid of visual stimuli at a pretty good rate and with something like 95%+ accuracy (presumably nearly 100% with the sort of training that goes into touchscreen or voice-activated interfaces). Those items can be letters, words, pictures...whatever. Anything quickly recognizable. Congrats guys, you just invented a crappy version of something I can buy for $300, except yours requires cutting open the person's skull and implanting things on the surface of their brain.

    FYI, to whoever funded this, please give the lab I work at the grant monies next time. We'll make much better use of it.
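    One way to make this comparison concrete is Wolpaw's information-transfer-rate formula, which converts "N choices at accuracy P" into bits per selection. A quick back-of-the-envelope in Python, using the accuracy figures quoted above (the 36-item grid at 95% is an assumed 6x6 P300 letter board, not a number from TFA):

        # Bits per selection (Wolpaw ITR) for the figures discussed above.
        from math import log2

        def bits_per_selection(n, p):
            """Information per selection for n choices at accuracy p."""
            if p >= 1.0:
                return log2(n)
            return log2(n) + p * log2(p) + (1 - p) * log2((1 - p) / (n - 1))

        print(bits_per_selection(2, 0.90))    # ~0.53 bits: two words at 90%
        print(bits_per_selection(10, 0.48))   # ~0.67 bits: ten words at 48%
        print(bits_per_selection(36, 0.95))   # ~4.6 bits: assumed 6x6 grid at 95%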

    • Re: (Score:2, Funny)

      And 95% accuracy with only one word :)
    • What about the speed of recognition? Give them some time; if they can improve the accuracy, this approach could be interesting.
      • Re:From TFA: (Score:5, Insightful)

        by kurokame ( 1764228 ) on Wednesday September 08, 2010 @04:52AM (#33506654)

        P300 is typically 300ms (thus the name), and the technique I was referring to uses two responses to generate a match (it flashes rows and columns so you need an X and Y response). 600ms or thereabouts is thus the time to beat. It's not lightning fast - nothing like typing - but a whole hell of a lot better than the reference methods that they're referring to. They're solving a brain-computer interface problem that was solved 10 years ago, and that was made irrelevant several years ago when cheap neural interfaces started hitting the commercial commodity market.

        Of course this is all relying on TFA, which could be completely misrepresenting their research given the general high quality of modern science journalism.

        Also, earlier kidding aside, the article is probably completely missing the point. It is likely that the actual purpose of the research is NOT to develop the current prototype's functionality. It is more likely that it is exploring the ability to take, reduce, and analyze data of this type. The fact that you can build (buy off-the-shelf for peanuts) a BCI whose functionality is equal to or greater than their prototype using less invasive methods is probably completely beside the point.
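        For anyone unfamiliar with the row/column trick described above, here is a toy Python sketch of just the selection logic. The 6x6 grid, the number of flash repetitions, and the scoring function are placeholders; in a real speller the score would come from an EEG classifier detecting the P300 response to each flash.

            # Toy row/column P300 speller: flash each row and column repeatedly,
            # score the response to each flash, and pick the cell at the
            # intersection of the best-scoring row and column.
            import numpy as np

            GRID = np.array([list("ABCDEF"), list("GHIJKL"), list("MNOPQR"),
                             list("STUVWX"), list("YZ1234"), list("567890")])

            def select_symbol(score_flash, n_repeats=10, rng=None):
                rng = rng or np.random.default_rng()
                row_scores, col_scores = np.zeros(6), np.zeros(6)
                for _ in range(n_repeats):            # average over repeated flashes
                    for i in rng.permutation(6):
                        row_scores[i] += score_flash("row", i)
                    for j in rng.permutation(6):
                        col_scores[j] += score_flash("col", j)
                return GRID[row_scores.argmax(), col_scores.argmax()]

            # Fake scorer: the user attends to "P" (row 2, col 3); attended flashes
            # score slightly higher plus noise, which the averaging overcomes.
            rng = np.random.default_rng(1)
            target = (2, 3)
            def fake_score(kind, idx):
                hit = (kind == "row" and idx == target[0]) or (kind == "col" and idx == target[1])
                return (1.0 if hit else 0.0) + rng.normal(scale=1.5)

            print(select_symbol(fake_score, rng=rng))  # usually prints "P"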

        • They're solving a brain-computer interface problem that was solved 10 years ago, and that was made irrelevant several years ago when cheap neural interfaces started hitting the commercial commodity market.

          A solved problem and neural interfaces on the commodity market - sounds like you're living a hundred years ahead of us :) But seriously - I haven't been following the field - can you provide some references for the current advances?

    • It uses "not new" technology to select words with 50% accuracy from a list such as "yes" and "no"...really. (Okay, it hits 90% accuracy with only two items and goes down to 48% with 10.)

      In other news, you can use P300 responses picked up with a $300 off-the-shelf over-the-hair EEG receiver to select from a grid of visual stimuli at a pretty good rate and with something like 95%+ accuracy (presumably nearly 100% with the sort of training that goes into touchscreen or voice activated interfaces). Those items can be letters, words, pictures...whatever. Anything quickly recognizable. Congrats guys, you just invented a crappy version of something I can buy for $300 which requires cutting open the person's skull and implanting things on the surface of their brain.

      FYI, to whoever funded this, please give the lab I work at the grant monies next time. We'll make much better use of it.

      no yes no yes no yes no no no yes yes no yes no no no no yes yes no no no no yes no yes yes yes no yes no no no no yes no no yes yes yes no yes yes no yes yes no no no yes yes no yes yes no no no no yes no no no no no no yes yes no no yes no no no yes yes no yes yes yes yes

      • It uses "not new" technology to select words with 50% accuracy from a list such as "yes" and "no"...really. (Okay, it hits 90% accuracy with only two items and goes down to 48% with 10.)

        In other news, you can use P300 responses picked up with a $300 off-the-shelf over-the-hair EEG receiver to select from a grid of visual stimuli at a pretty good rate and with something like 95%+ accuracy (presumably nearly 100% with the sort of training that goes into touchscreen or voice activated interfaces). Those items can be letters, words, pictures...whatever. Anything quickly recognizable. Congrats guys, you just invented a crappy version of something I can buy for $300 which requires cutting open the person's skull and implanting things on the surface of their brain.

        FYI, to whoever funded this, please give the lab I work at the grant monies next time. We'll make much better use of it.

        no yes no yes no yes no no no yes yes no yes no no no no yes yes no no no no yes no yes yes yes no yes no no no no yes no no yes yes yes no yes yes no yes yes no no no yes yes no yes yes no no no no yes no no no no no no yes yes no no yes no no no yes yes no yes yes yes yes

        no yes no yes no no yes no no yes yes no no yes no yes no yes yes no no no no yes no yes yes no yes yes no no no yes yes no yes yes no no no yes yes yes yes no no yes no no yes yes yes yes yes yes

        Your comment violated the "postercomment" compression filter. Try less whitespace and/or less repetition.

        • It uses "not new" technology to select words with 50% accuracy from a list such as "yes" and "no"...really. (Okay, it hits 90% accuracy with only two items and goes down to 48% with 10.)

          In other news, you can use P300 responses picked up with a $300 off-the-shelf over-the-hair EEG receiver to select from a grid of visual stimuli at a pretty good rate and with something like 95%+ accuracy (presumably nearly 100% with the sort of training that goes into touchscreen or voice activated interfaces). Those items can be letters, words, pictures...whatever. Anything quickly recognizable. Congrats guys, you just invented a crappy version of something I can buy for $300 which requires cutting open the person's skull and implanting things on the surface of their brain.

          FYI, to whoever funded this, please give the lab I work at the grant monies next time. We'll make much better use of it.

          no yes no yes no yes no no no yes yes no yes no no no no yes yes no no no no yes no yes yes yes no yes no no no no yes no no yes yes yes no yes yes no yes yes no no no yes yes no yes yes no no no no yes no no no no no no yes yes no no yes no no no yes yes no yes yes yes yes

          no yes no yes no no yes no no yes yes no no yes no yes no yes yes no no no no yes no yes yes no yes yes no no no yes yes no yes yes no no no yes yes yes yes no no yes no no yes yes yes yes yes yes

          Your comment violated the "postercomment" compression filter. Try less whitespace and/or less repetition.

          no yes no yes yes no no yes no yes no no no no no yes no no yes no no yes yes yes no yes no yes no no yes no no yes no no yes yes no no no yes no yes yes no no yes no no yes no no no no yes

    • This has nothing to do with an EEG. This is a proof of concept for a micromachined electrode array. The fact that the LA Times article did not understand it should not surprise you.
    • There's no politics like academic politics - and this from supposedly refined and non-confrontational people. Graduate studies are, IMO, heavily burdened with training in passive aggression and backstabbing.

  • Dasher (Score:3, Insightful)

    by Two99Point80 ( 542678 ) on Wednesday September 08, 2010 @04:39AM (#33506614) Homepage
    These folks [cam.ac.uk] have something which is easier to control.
  • The technology is 100+ years old and has been used for 80 years on human brain waves.

    Almost 20 years ago, work at Radford was able to guess with 70 to 80 percent accuracy which of three possibilities within each of three parameters (size, shape, and color) was being looked at or imagined, with and without an attempt to verbalize it. They used a standard 16-channel external EEG and a dozen different subjects.

    Which "speech center(s)"? There's two main regions, neither of which can do the job alone. There'

    • by telomerewhythere ( 1493937 ) on Wednesday September 08, 2010 @06:31AM (#33507010)

      Which "speech center(s)"? There's two main regions, neither of which can do the job alone.

      Both.... From another article: "Each of two grids with 16 microECoGs spaced 1 millimeter (about one-25th of an inch) apart, was placed over one of two speech areas of the brain: First, the facial motor cortex, which controls movements of the mouth, lips, tongue and face -- basically the muscles involved in speaking. Second, Wernicke's area, a little understood part of the human brain tied to language comprehension and understanding."

      "One unexpected finding: When the patient repeated words, the facial motor cortex was most active and Wernicke's area was less active. Yet Wernicke's area "lit up" when the patient was thanked by researchers after repeating words. It shows Wernicke's area is more involved in high-level understanding of language, while the facial motor cortex controls facial muscles that help produce sounds, Greger says."

      As to the scary part, just wait till they get to the next step: 11x11 grids and not just 4x4.

      Source for this info: http://www.sciencedaily.com/releases/2010/09/100907071249.htm [sciencedaily.com]
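      The "unexpected finding" above is, at bottom, a condition-by-array comparison of signal strength. A toy Python illustration of that kind of comparison (the trial counts, the numbers, and the use of mean high-gamma power are all fabricated placeholders, not values from the study):

          # Toy per-array, per-condition activity comparison, in the spirit of
          # the finding quoted above. Every number here is made up.
          import numpy as np

          rng = np.random.default_rng(2)
          conditions = ["repeating words", "being thanked"]
          arrays = ["facial motor cortex", "Wernicke's area"]

          # Fake per-trial mean high-gamma power: shape (condition, array, trial).
          power = rng.normal(loc=[[[1.8], [0.6]],   # repetition: motor >> Wernicke
                                  [[0.7], [1.5]]],  # thanked:    Wernicke >> motor
                             scale=0.3, size=(2, 2, 50))

          for c, cond in enumerate(conditions):
              for a, arr in enumerate(arrays):
                  print(f"{cond:>16s} | {arr:>20s} | mean power {power[c, a].mean():.2f}")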

    • by SiMac ( 409541 )

      The technology is 100+ years old and has been used for 80 years on human brain waves.

      Almost 20 years ago, work at Radford was able to guess with 70 to 80 percent accuracy which of three possibilities within each of three parameters (size, shape, and color) was being looked at or imagined, with and without an attempt to verbalize it. They used a standard 16-channel external EEG and a dozen different subjects.

      I think the point here is that they are NOT using an EEG. They are using what are called "Utah arrays," which are a relatively new technological advance even for monkey electrophysiologists, and this is one of the first times they have ever been implanted in a human brain. This is a new technology, if not a new application.

      Also, being able to distinguish size, shape, and color is not really a substitute for being able to communicate. People have had electrodes implanted in motor cortex to substitute for mis

  • Do WWWWWWW and MMMMMMMMMMMM count as words?
  • Let's say you have the high-resolution electrode grid they talk about, and you control the input to the brain by isolating normal senses and feeding in specific stimuli. Keep it running for a couple of weeks. Might be easy if the patient/subject is elderly and sick.

    Can I build a model of the brain between the stimuli and the EEG? Can I use this to make a copy of the brain at a functional level?
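    Taken at face value, "a model of the brain between the stimuli and the EEG" is a system-identification problem. The simplest possible version - almost certainly far too simple for a real brain, which is nonlinear, state-dependent, and nonstationary - is a linear stimulus-to-response mapping. A toy Python sketch with made-up array sizes:

        # Minimal system-identification sketch: fit a linear map from a stimulus
        # time series to a multi-channel recording by least squares. Purely an
        # illustration of what "building a model" would mean at the lowest level.
        import numpy as np

        rng = np.random.default_rng(0)
        n_samples, n_lags, n_channels = 5000, 20, 16

        stim = rng.normal(size=n_samples)                     # controlled input
        true_filters = rng.normal(size=(n_lags, n_channels))  # unknown "brain"

        # Design matrix of lagged stimulus values.
        X = np.zeros((n_samples, n_lags))
        for lag in range(n_lags):
            X[lag:, lag] = stim[:n_samples - lag]

        # Simulated recording = linear response + noise.
        Y = X @ true_filters + 0.5 * rng.normal(size=(n_samples, n_channels))

        # Recover the filters by ordinary least squares.
        est_filters, *_ = np.linalg.lstsq(X, Y, rcond=None)
        err = np.linalg.norm(est_filters - true_filters) / np.linalg.norm(true_filters)
        print(f"relative filter recovery error: {err:.3f}")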

    • Re:Modelling (Score:5, Insightful)

      by kurokame ( 1764228 ) on Wednesday September 08, 2010 @05:25AM (#33506740)
      To translate your query into a more IT-intelligible analogue, can you build a model of the internet by posting random things to Slashdot and then watching Google Trends?
      • @MichaelSmith, I guess that would be a "no". But what does "copying the brain" have to do with anything?

        However, training the brain and using an EEG apparently allows you to reconstruct images of what the person is thinking/remembering, as seen in a recent episode of House. [hulu.com]

        If I recall correctly the technique was discussed on Slashdot recently, though I couldn't find the article.
      • No! The analogy needs cars! Where are the cars?
        • Can you build a map by supplying a fleet of cars with random control inputs and then observing accident statistics?
  • by sheriff_p ( 138609 ) on Wednesday September 08, 2010 @07:28AM (#33507234)

    This + cellphone technology + in-ear speaker = telepathy

    • by Belial6 ( 794905 )
      While I never believed that telepathy existed in humans, as a youth I always found it odd that it was considered impossible. You are right, although if you were going to build a brain reader, you should also be able to build a writer that doesn't require an in-ear speaker. The real trick, still total sci-fi, would be to splice our genes so that our bodies naturally deposit whatever materials are needed to build the radio reader/writer.
    • Why the speaker? If you're going to implant electrode grids on the cortex, why not go whole-hog and put one on the hearing center?
    • This + cellphone technology + in-ear speaker = telepathy

      Also indistinguishable from schizophrenia.

      Also also indistinguishable from mind control.

      I don't think I could ever trust a cell phone company enough to give them direct neural access.

  • I can see this being used to interrogate POWs!

  • I recall reading about a device that could analyze a combination of brain waves and just-under-the-skin neural impulses to interpret sub-vocalized speech. This new thing does not sound much better and is invasive as well. (Unfortunately, I have had no luck finding an authoritative source about the sub-vocal device.)

  • ...blowjob and telepathic dirty talk at the same time!

"Here's something to think about: How come you never see a headline like `Psychic Wins Lottery.'" -- Comedian Jay Leno

Working...