Translating Brain Waves Into Words
cortex writes with an excerpt from the L.A. Times: "In a first step toward helping severely paralyzed people communicate more easily, Utah researchers have shown that it is possible to translate recorded brain waves into words, using a grid of electrodes placed directly on the brain. ... The device could benefit people who have been paralyzed by stroke, Lou Gehrig's disease or trauma and are 'locked in' — aware but unable to communicate except, perhaps, by blinking an eyelid or arduously moving a cursor to pick out letters or words from a list. ... Some researchers have been attempting to 'read' speech centers in the brain using electrodes placed on the scalp. But such electrodes 'are so far away from the electrical activity that it gets blurred out,' [University of Utah bioengineer Bradley] Greger said. ... He and his colleagues instead use arrays of tiny microelectrodes that are placed in contact with the brain, but not implanted. In the current study, they used two arrays, each with 16 microelectrodes."
Re: (Score:2)
Re: (Score:3, Insightful)
Do you think in English, or do you think in abstract thoughts that your brain then later makes you think were direct English? I think there's a bit of debate on that, and it is something that's difficult to test.
Re: (Score:3, Insightful)
Re: (Score:3, Interesting)
I have a good understanding of 4 languages and speak 3 fluently (English, Dutch, French, German)
I can attest to this to a certain extent: my thoughts are often also in concepts, but the "context" of a language differs greatly, and the way people express themselves in different languages has different nuances. Often the context I'm thinking in determines which language I switch to, if I'm actually
Re: (Score:2, Interesting)
2. What language do you dream in?
Out of pure interest, if you're bilingual as a child, what language do you count in?
Most people who learn a new language as a teen or adult find it easiest to count or do maths in the first language they learnt, even when they've been living in their new country for several years.
I found working with numbers in Japanese next to impossible until I used their money; now it's effortless. Still can't do times tables very well.
Re: (Score:1)
1. What language do you count in? 2. What language do you dream in?
Afrikaans is my mother tongue, so I tend to count in that, but for larger numbers I probably revert to English (the first 10 years of my life were spent speaking Afrikaans exclusively; the second 10 were mostly English, with Afrikaans at home). If you give me a very large number in English I'd be able to visualise it a bit quicker than the same number in Afrikaans, but for lower, more frequently used numbers they'd be exactly the same, just different words for the same concepts. As far as dreams go, it dep
Re: (Score:1)
Most people who learn a new language as a teen or adult find it easiest to count or do maths in the first language learnt.
I'm an example of that assertion. I'd also point out that numbers written out with digits are always English unless I'm really thinking about it. Reading a book or something in my head, I can read Spanish directly, but the numbers come out in English. Something like Cristobal Colón viajó en fourteen ninety-two. It definitely takes a little extra conscious nudge to translate 1492 into mil cuatrocientos noventa y dos if I'm reading the passage aloud, for example.
I had never thought about whether
Re: (Score:1)
As the others, when talking in a specific language I think it, too. Recently I started forcing myself
Re: (Score:2)
I deal with scientific stuff daily and thus I'm always thinking in a visualized concept first, words and numbers come afterwards.
Re: (Score:2)
I'm a native English speaker who used to be pretty good at Spanish (I wouldn't say fluent) but haven't used it in years, and speak a little Thai (I spent a year there in the USAF), and I don't usually think in words, either. But then, I never did, not even before learning Spanish or Thai. And like I said, I haven't used it in years and would probably be completely lost if I woke up in Mexico or somewhere.
I think mostly in pictures. I think all brains are different; some work in words, some in concepts, some
Re: (Score:2)
I am bilingual too. (French & English)
In my head I actually do both:
-Talk to myself in English or French when I'm alone (e.g. programming, planning, etc.)
-Think in concepts and then translate to French or English when I verbalize to someone.
The latter is very obvious when you know what you mean but cannot find the right word in the language you happen to be speaking. I'm sure monolinguals experience the same, but the effect is not as obvious.
Re: (Score:1)
Ha! Here we go. I was looking at all the posts of people who thought only in concepts and not understanding. Your post makes much more sense to me. I think mostly in words, but not entirely. Sometimes I have a moment where I have an incomplete thought, because I can't find the word I'm looking for. Right now, as I type, it feels like I'm thinking 'in my fingers.'
I am not a visual person at all. I am almost entirely an auditory person. I can remember what just about anything sounds like. I cannot remem
Re: (Score:2)
Re: (Score:3, Interesting)
Do you think in English, or do you think in abstract thoughts that your brain then later makes you think were direct English? I think there's a bit of debate on that, and it is something that's difficult to test.
A couple of things: when my wife switches between English and Cantonese her personality changes to suit the relevant culture. I can tell if she has been speaking Cantonese because she gets very aggressive. I think the behaviour is independent of language because sometimes she forgets to switch.
Sometimes I can have an epileptic seizure which causes me to remember spoken words in English, but this is kind of a replay from memory. I can also experience feelings which have no associated words because they were
Re: (Score:2)
It would be interesting to see if the brain waves for 'yes', 'no' etc. were similar in speakers of the same language, b/c the basic mouth movements are the same...
Re: (Score:1)
Re: (Score:3, Informative)
Technically, this is not reading, as in understanding, the speech centers. It's simply pattern matching. The speech center has a certain pattern of signals right before enunciating. A computer is trained to recognize that pattern and choose the appropriate word from a list.
Such a system would not be able to speak words that are not in its training dictionary.
Moreover, the real flaw that I see is that this implementation requires that the subject actually be able to speak so that the system can be trai
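The pattern matching the parent describes can be sketched as simple template matching: average the neural pattern recorded during training repetitions of each word, then label a new pattern with its nearest template. Everything below (the array shapes, the cosine metric, the toy data) is an assumption for illustration, not the study's actual pipeline:

```python
import numpy as np

def train_templates(recordings, labels):
    """Average the neural patterns recorded for each trained word.

    recordings: (n_trials, n_features) array of flattened neural patterns
                (assumed shape, e.g. channels x timepoints)
    labels: list of word strings, one per trial
    """
    templates = {}
    for word in set(labels):
        idx = [i for i, w in enumerate(labels) if w == word]
        templates[word] = recordings[idx].mean(axis=0)
    return templates

def classify(pattern, templates):
    """Pick the trained word whose template best matches (cosine similarity)."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(templates, key=lambda w: cos(pattern, templates[w]))

# Toy demo: two fake "words" with distinct prototype patterns plus noise.
rng = np.random.default_rng(0)
yes_proto, no_proto = rng.normal(size=64), rng.normal(size=64)
X = np.vstack([yes_proto + 0.1 * rng.normal(size=64) for _ in range(5)] +
              [no_proto + 0.1 * rng.normal(size=64) for _ in range(5)])
y = ["yes"] * 5 + ["no"] * 5
templates = train_templates(X, y)
print(classify(yes_proto + 0.1 * rng.normal(size=64), templates))  # prints "yes"
```

Note that a word outside the training set can only ever be mislabeled as one of the trained words, which is exactly the limitation the parent points out.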
The real philosophical issue.... (Score:5, Insightful)
is when you read their words, and they just say "kill me"
Re: (Score:2)
...and, after pulling the plug, their last brainwaves scream "That was just a pop-cultural reference meant as a joke, you stup..."
Re:The real philosophical issue.... (Score:4, Funny)
is when you read their words, and they just say "kill me"
"Brad, take a look at this...there's an 86% chance Mr. Pike's first word here is 'blow,' and a 14% chance it's 'kill'. Now what?"
*long pause*
*frantic beeping*
Re: (Score:2)
Re: (Score:2)
Supposedly this would be equivalent to a magical Babelfish translator, since brain waves cannot be language specific. However, the existence of a meta-language behind all the many different human languages of the world has never been conclusively proven. Therefore I think something is fishy with the claim.
But if the claim is true, the possibilities are staggering. Not just for stroke patients, but for anyone. Imagine being able to travel to any country and speak in their native language. It may still be a few years away, but I think it's really cool. And would it be possible to transmit thoughts that aren't even expressible in any human language? This really does sound like an exciting beginning. I remember attending a lecture by Freeman Dyson many years ago where he proposed something similar.
scary (Score:1)
How long before this evolves into something that can be used (after training the machine with direct interrogation) to steal secrets from people's minds?
I mean... the movie just came out this summer.
Re: (Score:2)
Well, it's hooked up to your speech centre so it won't really be able to read your secrets unless you think them out loud with sensors attached directly to your brain.
Re: (Score:2)
Eat the Donut. Eat the donut. Eat the Donut. Eat the donut. Eat the Donut.
Re: (Score:2, Funny)
From TFA: (Score:3, Insightful)
It uses "not new" technology to select words with 50% accuracy from a list such as "yes" and "no"...really. (Okay, it hits 90% accuracy with only two items and goes down to 48% with 10.)
In other news, you can use P300 responses picked up with a $300 off-the-shelf over-the-hair EEG receiver to select from a grid of visual stimuli at a pretty good rate and with something like 95%+ accuracy (presumably nearly 100% with the sort of training that goes into touchscreen or voice-activated interfaces). Those items can be letters, words, pictures...whatever. Anything quickly recognizable. Congrats guys, you just invented a crappy version of something I can buy for $300, except yours requires cutting open the person's skull and implanting things on the surface of their brain.
FYI, to whoever funded this, please give the lab I work at the grant monies next time. We'll make much better use of it.
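The grid-speller approach the parent describes flashes rows and columns of a grid, detects which flashes evoke a P300, and takes the intersection. A minimal sketch of that selection logic (the grid contents and the per-flash detector scores are made up for illustration; a real system gets the scores from an EEG classifier):

```python
import numpy as np

GRID = [["yes", "no", "help"],
        ["pain", "water", "tv"],
        ["hot", "cold", "stop"]]

def pick(row_scores, col_scores):
    """row_scores/col_scores: one P300-detector score per row/column flash.
    The attended item sits at the intersection of the highest-scoring
    row flash and the highest-scoring column flash."""
    return GRID[int(np.argmax(row_scores))][int(np.argmax(col_scores))]

# Toy demo: the user attends to "water" (row 1, col 1); flashes of its
# row and column get an elevated score, the rest are background noise.
rng = np.random.default_rng(1)
rows = rng.normal(0.0, 0.1, 3); rows[1] += 1.0
cols = rng.normal(0.0, 0.1, 3); cols[1] += 1.0
print(pick(rows, cols))  # prints "water"
```

Two detections per selection is why this scheme needs roughly two P300 latencies per item, which is the timing argument made further down the thread.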
Re: (Score:2, Funny)
Re: (Score:2)
Yes!
Re: (Score:2)
or "Gummy Bear"..but that is two words.
Re: (Score:1)
Re:From TFA: (Score:5, Insightful)
P300 is typically 300ms (thus the name), and the technique I was referring to uses two responses to generate a match (it flashes rows and columns so you need an X and Y response). 600ms or thereabouts is thus the time to beat. It's not lightning fast - nothing like typing - but a whole hell of a lot better than the reference methods that they're referring to. They're solving a brain-computer interface problem that was solved 10 years ago, and that was made irrelevant several years ago when cheap neural interfaces started hitting the commercial commodity market.
Of course this is all relying on TFA, which could be completely misrepresenting their research given the general high quality of modern science journalism.
Also, earlier kidding aside, the article is probably completely missing the point. It is likely that the actual purpose of the research is NOT to develop the current prototype's functionality. It is more likely that it is exploring the ability to take, reduce, and analyze data of this type. The fact that you can build (buy off-the-shelf for peanuts) a BCI whose functionality is equal to or greater than their prototype using less invasive methods is probably completely beside the point.
Re: (Score:2)
solved problem, neural interfaces in the commodity market - sounds like you're living a hundred years ahead of us :) But seriously - I haven't been following the field - can you provide some references for the current advances?
Re: (Score:2)
no yes no yes no yes no no no yes yes no yes no no no no yes yes no no no no yes no yes yes yes no yes no no no no yes no no yes yes yes no yes yes no yes yes no no no yes yes no yes yes no no no no yes no no no no no no yes yes no no yes no no no yes yes no yes yes yes yes
Re: (Score:2)
no yes no yes no no yes no no yes yes no no yes no yes no yes yes no no no no yes no yes yes no yes yes no no no yes yes no yes yes no no no yes yes yes yes no no yes no no yes yes yes yes yes yes
Your comment violated the "postercomment" compression filter. Try less whitespace and/or less repetition.
Re: (Score:2)
no yes no yes yes no no yes no yes no no no no no yes no no yes no no yes yes yes no yes no yes no no yes no no yes no no yes yes no no no yes no yes yes no no yes no no yes no no no no yes
Re: (Score:1)
Re: (Score:1)
There's no politics like academic politics, and from supposedly refined and non-confrontational people at that. Graduate studies are, IMO, heavily burdened with training in passive aggression and backstabbing.
Dasher (Score:3, Insightful)
No News is... A Waste of Space (Score:2)
The technology is 100+ years old and has been used on human brain waves for 80.
Almost 20 years ago, work at Radford was able to guess with 70 to 80 percent accuracy which of three possibilities within three parameters (size, shape and color) was being looked at, or imagined, with and without an attempt to verbalize it. They used a standard 16-channel external EEG and a dozen different subjects.
Which "speech center(s)"? There are two main regions, neither of which can do the job alone. There'
Re:No News is... A Waste of Space (Score:4, Interesting)
Which "speech center(s)"? There are two main regions, neither of which can do the job alone.
Both.... From another article: "Each of two grids with 16 microECoGs spaced 1 millimeter (about one-25th of an inch) apart, was placed over one of two speech areas of the brain: First, the facial motor cortex, which controls movements of the mouth, lips, tongue and face -- basically the muscles involved in speaking. Second, Wernicke's area, a little understood part of the human brain tied to language comprehension and understanding."
"One unexpected finding: When the patient repeated words, the facial motor cortex was most active and Wernicke's area was less active. Yet Wernicke's area "lit up" when the patient was thanked by researchers after repeating words. It shows Wernicke's area is more involved in high-level understanding of language, while the facial motor cortex controls facial muscles that help produce sounds, Greger says."
As to the scary part, just wait till they get to the next step: 11x11 grids, not just 4x4.
Source for this info: http://www.sciencedaily.com/releases/2010/09/100907071249.htm [sciencedaily.com]
Re: (Score:1)
The technology is 100+ years old and has been used for 80 on human brain waves.
Almost 20 years ago, work at Radford was able to guess with 70 to 80 percent accuracy which of three possibilities within three parameters (size, shape and color) was being looked at, or being imagined with and without there being an attempt to verbalize it. They used a standard 16 channel external EEG. And a dozen different subjects.
I think the point here is that they are NOT using an EEG. They are using what are called "Utah arrays," which are a relatively new technological advance even for monkey electrophysiologists, and this is one of the first times they have ever been implanted in a human brain. This is a new technology, if not a new application.
Also, being able to distinguish size, shape, and color is not really a substitute for being able to communicate. People have had electrodes implanted in motor cortex to substitute for mis
Waves into words (Score:2)
Modelling (Score:2)
Let's say you have the high-resolution EEG grid they talk about, and you control the input to the brain by isolating the normal senses and feeding in specific stimuli. Keep it running for a couple of weeks. It might be easy if the patient/subject is elderly and sick.
Can I build a model of the brain between the stimuli and the EEG? Can I use this to make a copy of the brain at a functional level?
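At the signal level, the parent's question is a system-identification problem: fit a model of the mapping from controlled stimuli to recorded responses. A minimal linear version, sketched as ridge regression from a stimulus encoding to multichannel EEG (all shapes, noise levels, and the linearity assumption itself are illustrative only):

```python
import numpy as np

def fit_linear_model(S, R, alpha=1.0):
    """Ridge regression from stimuli to responses.

    S: (n_trials, n_stimulus_features) controlled stimulus encodings
    R: (n_trials, n_channels) recorded responses (e.g. EEG features)
    Returns W such that R is approximately S @ W.
    """
    n = S.shape[1]
    return np.linalg.solve(S.T @ S + alpha * np.eye(n), S.T @ R)

# Toy demo: responses really are a (noisy) linear function of the stimuli,
# so the fitted map recovers the hidden one.
rng = np.random.default_rng(2)
W_true = rng.normal(size=(8, 16))          # hidden stimulus -> channel map
S = rng.normal(size=(500, 8))              # 500 presented stimuli
R = S @ W_true + 0.05 * rng.normal(size=(500, 16))
W = fit_linear_model(S, R, alpha=0.1)
print(np.allclose(W, W_true, atol=0.05))   # prints True
```

Even when the fit is good, such a model only captures the stimulus-response mapping you actually probed; whether that amounts to "a copy of the brain at a functional level" is a much bigger question.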
Re:Modelling (Score:5, Insightful)
Re: (Score:2)
However, training the brain and using an EEG apparently allows you to reconstruct images of what the person is thinking/remembering, as seen in a recent episode of House. [hulu.com]
If I recall correctly the technique was discussed on slashdot recently, though I couldn't find the article.
Re: (Score:1)
Re: (Score:1)
Indistinguishable from magic... (Score:5, Insightful)
This + cellphone technology + in-ear speaker = telepathy
Re: (Score:2)
Re: (Score:1)
Re: (Score:2)
This + cellphone technology + in-ear speaker = telepathy
Also indistinguishable from schizophrenia.
Also also indistinguishable from mind control.
I don't think I could ever trust a cell phone company enough to give them direct neural access.
Great for interrogation (Score:2)
I can see this being used to interrogate POWs!
Related device (Score:2)
I recall reading about a device that could analyze a combination of brain waves and just-under-the-skin neural impulses to interpret sub-vocalized speech. This new thing does not sound much better and is invasive as well. (Unfortunately, I have had no luck finding an authoritative source about the sub-vocal device.)
And (Score:1)
http://en.wikipedia.org/wiki/Fmri [wikipedia.org]
soon... (Score:1)
...blowjob and telepathic dirty talk at the same time!