Japanese Scientists Claim To Reconstruct Images From Brain Data

In a world first, a research group in Kyoto, Japan has succeeded in processing and displaying optically received images directly from the human brain. Here's the Japanese press release for good measure. One step closer to broadcasting your dreams? The research is due to be published today in the US scientific journal Neuron.
  • This is NOT new (Score:4, Informative)

    by oroborous ( 800136 ) on Thursday December 11, 2008 @03:11PM (#26078961)
    A Berkeley group has already reported this in Nature using similar methods: http://www.nature.com/nature/journal/v452/n7185/abs/nature06713.html [nature.com]
  • For the last time... (Score:5, Informative)

    by philspear ( 1142299 ) on Thursday December 11, 2008 @03:16PM (#26079059)

    People read blurby summaries, which don't include the results, the full reasoning, the methods, and so on, and then act as if the shortcomings are the researchers' fault. That's absurd: the summary is neither the paper nor the direct work of the researchers; it's written by a non-scientist working for a news outlet. Read the actual paper; TFA in these cases is rarely any better than TFS.

    http://download.cell.com/neuron/pdf/PIIS0896627308009586.pdf [cell.com]

    There's the PDF. It does have the very pixelated images. I haven't had time to read through it.

    As always, don't complain to me if you don't happen to have a subscription, and not having a subscription is no reason to act as if the results aren't real.

  • Re:No pictures? (Score:5, Informative)

    by Peeet ( 730301 ) on Thursday December 11, 2008 @03:18PM (#26079097)
    It's right there on the Japanese press release page: at the bottom of the image at the top left of the article, they have the before and after of the word "neuron". Here, I'll make it even easier for ya: http://www2.asahi.com/kansai/news/image/OSK200812100099.jpg [asahi.com]
  • by Dr. Manhattan ( 29720 ) <(moc.liamg) (ta) (171rorecros)> on Thursday December 11, 2008 @03:21PM (#26079161) Homepage
    From what I can gather, they're pulling in rather low-level data - essentially 'listening in' on the very lowest level of pattern recognition applied to the data coming in from the optic nerve. That's certainly interesting, but a whole lot more processing happens at higher levels before you 'see' anything. (Cf. people who lose sight early on due to eye problems and have sight restored later - their brains can't do much with the information at first [newschool.edu].)

    Dreams appear to be based on the 'noise' coming in, but a lot of interpretation is applied (and without imposed constraints of consistency or logic). A common game/prank [ultimatecampresource.com] involves people asking yes/no questions about an alleged dream, but the answers they get are based on some simple scheme like "yes if the last word in the question they ask ends in a consonant". Surprisingly detailed 'stories' get constructed... by the person asking the questions. (Here's what appears to be an online version [callenish.com].) Actual dreams seem to be built in an analogous way, with the subconscious 'asking questions' of the senses (which are just feeding in 'static') and weaving an experience out of them.

    I'd guess that 'eavesdropping' on dreams via this means would only get the kind of swirling colors and such you 'see' when you close your eyes.

  • by philspear ( 1142299 ) on Thursday December 11, 2008 @03:25PM (#26079235)

    Maybe this image will not require a subscription, although I suspect it will.

    http://www.sciencedirect.com/cache/MiamiImageURL/B6WSS-4V4113M-P-7/0?wchp=dGLbVtz-zSkzk [sciencedirect.com]

    On the off chance it does load for you, keep in mind this is not the full article. Critiques along the lines of "this doesn't prove anything" or "they should have done X" are premature if you haven't read the full journal article. If you thought of it, they probably covered it in the article you're not willing to pay for.

  • by Pedrito ( 94783 ) on Thursday December 11, 2008 @03:35PM (#26079413)

    "The visual cortex is one of the more understood areas of the brain, and decoding V1/V2 is low-hanging fruit."

    Low-hanging fruit? I agree, it's fairly well understood, but given the pre-processing that happens in the retinal ganglion cells, and the kind of data that actually ends up reaching V1 (after being relayed from the LGN), I'm surprised an actual image can be reconstructed from that information. After all, the RGCs tend to pass on things like movement, edges, contrast, and color rather than anything even remotely pixel-by-pixel, and that is precisely the information that gets passed on to V1.

    Since it's coming from an fMRI, there's no way the image can be very detailed. I suspect it will be very low resolution.
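
    For anyone wondering what "reconstructing" an image from voxel data could even mean, here's a toy sketch (my own illustration, not the Kyoto group's pipeline; the voxel count, the image size, and the simple least-squares decoder are all assumptions) of training a per-pixel linear decoder on simulated fMRI responses and then reconstructing a held-out 10x10 binary image:

        import numpy as np

        # Toy dimensions (assumed, not the experiment's): 100 voxels, 10x10 binary images.
        rng = np.random.default_rng(0)
        n_voxels, n_pixels, n_train = 100, 10 * 10, 400

        # Pretend each voxel is a noisy linear mixture of pixel intensities.
        mixing = rng.normal(size=(n_voxels, n_pixels))
        train_images = rng.integers(0, 2, size=(n_train, n_pixels)).astype(float)
        train_voxels = train_images @ mixing.T + rng.normal(scale=0.5, size=(n_train, n_voxels))

        # "Train" the decoder: a least-squares map from voxel responses back to pixels.
        decoder, *_ = np.linalg.lstsq(train_voxels, train_images, rcond=None)

        # Reconstruct a held-out image from its simulated brain response.
        test_image = rng.integers(0, 2, size=n_pixels).astype(float)
        test_voxels = test_image @ mixing.T + rng.normal(scale=0.5, size=n_voxels)
        reconstruction = (test_voxels @ decoder).reshape(10, 10)

        print((reconstruction > 0.5).astype(int))       # decoded 10x10 pattern
        print(test_image.reshape(10, 10).astype(int))   # what was actually "shown"

    The real paper combines decoders at multiple scales and has to cope with actual measurement noise, but the basic idea of a learned mapping from voxel activity back to pixel space is the same flavor.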

  • Yes, it is (Score:5, Informative)

    by philspear ( 1142299 ) on Thursday December 11, 2008 @03:41PM (#26079515)

    I didn't read the full article, but from the abstract:

    We show that these receptive-field models make it possible to identify, from a large set of completely novel natural images, which specific image was seen by an observer...

    Our results suggest that it may soon be possible to reconstruct a picture of a person's visual experience from measurements of brain activity alone.

    The article you linked to seems to only be able to tell which object a person saw from their fMRI. I believe it required established measurements too, i.e., "this part of the brain lights up when they see a face; in the blind studies, that part of the brain lit up, so they must have seen a face."

    Whether it required a calibration for each individual or not, no image reconstruction was done: it's not the same thing at all.
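
    To make the distinction concrete: identification only has to pick which image, out of a known candidate set, best matches the measured activity. A toy version (my own sketch, not either group's method; the "forward model" here is just random stand-in patterns) looks like this:

        import numpy as np

        rng = np.random.default_rng(1)
        n_voxels, n_candidates = 200, 50

        # Assume a forward model already predicts each candidate image's voxel pattern
        # (random vectors here, standing in for real predictions).
        predicted = rng.normal(size=(n_candidates, n_voxels))

        # Measured activity: one candidate's predicted pattern plus noise.
        true_index = 17
        measured = predicted[true_index] + rng.normal(scale=1.0, size=n_voxels)

        # Identification: pick the candidate whose prediction correlates best.
        scores = [np.corrcoef(measured, p)[0, 1] for p in predicted]
        print("identified:", int(np.argmax(scores)), "actual:", true_index)

    Reconstruction has to produce the pixels themselves, without any candidate list to choose from, which is a much harder problem.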

  • by DynaSoar ( 714234 ) on Thursday December 11, 2008 @03:45PM (#26079579) Journal

    The primary visual cortex (V1) has already been shown to be retinotopic. What's being seen can be mapped directly from the cortex. It's crude and low-res, but it works.

    20 years ago a researcher working with Karl Pribram at Radford University was able to detect signals from small cellular assemblies of the visual cortex that represented a particular shape being viewed without mapping the entire shape from V1.

    In both of these, the images were received directly from the brain, and in both they were digitally processed and presented. In all three cases, what was retrieved was not an image but a pattern of neural electrical activity that had already been determined to represent a particular visual field. They could not, for instance (in keeping with the /. tendency to represent reality with fiction), retrieve the third frame of a series of images that had been briefly presented. They would have had to show the image for some time, then record EEG from the appropriate areas for long enough to get a good correlation when showing it a second time.

  • by mcgrew ( 92797 ) * on Thursday December 11, 2008 @03:52PM (#26079703) Homepage Journal

    I'm still waiting for Sally [wikipedia.org]; that story was set in the year 2120. I'm still waiting for R. Daneel as well.

    As to writing about stuff that never happened, THIS never happened - until now. The "hyperdrive" (what Roddenberry renamed "warp drive") still hasn't been invented - yet. Roddenberry and his writers were prescient, too. I remember a world without cell phones, flat-screen talking computers, self-opening doors, and space shuttles (I remember a world without space travel at all).

    I merely mention Asimov because I thought of that story when I read TFA. He wasn't the only sci-fi author to predict advances, but he's the only one I can think of that predicted this one.

    The one thing I can think of that his crystal balls got wrong was Multivac, yet with its "terminals in every home and business" it was the closest of any pre-internet science fiction story I ever read to predicting the internet.

  • by Anonymous Coward on Thursday December 11, 2008 @04:18PM (#26080171)

    That's 10 x 10 pixels.

    Even the best dreams won't look like much at that resolution... - j
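
    If you want to see how little survives at that scale, downscale any photo to 10x10 and blow it back up (a quick sketch using Pillow; the filename is just a placeholder):

        from PIL import Image

        # Placeholder filename: substitute any photo you have handy.
        img = Image.open("some_photo.jpg").convert("L")
        tiny = img.resize((10, 10), Image.NEAREST)

        # Scale back up without smoothing so the 10x10 blocks stay visible.
        tiny.resize((200, 200), Image.NEAREST).show()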

  • Re:No pictures? (Score:1, Informative)

    by Anonymous Coward on Thursday December 11, 2008 @04:27PM (#26080367)

    I can't read Japanese; however, that seems an awful lot like just an artist's rendition to me.

  • by ceoyoyo ( 59147 ) on Thursday December 11, 2008 @05:12PM (#26081257)

    The images they used were very high contrast (black and white) and quite simple - squares, lines, crosses, etc. It's not really surprising you can reconstruct that sort of data from the visual cortex. Still, very cool. The paper is worth a look.

    I'll be really impressed when someone reconstructs a recognizable non-synthetic image. Oh, and figures out how colour is represented.

  • by mcgrew ( 92797 ) * on Thursday December 11, 2008 @05:27PM (#26081545) Homepage Journal

    I looked it up on Wikipedia [wikipedia.org] and found that A Logic Named Joe [baen.com] is posted on the internet, linked from the Wikipedia article.

    The tank is a big buildin' full of all the facts in creation an' all the recorded telecasts that ever was made--an' it's hooked in with all the other tanks all over the country--an' everything you wanna know or see or hear, you punch for it an' you get it. Very convenient. Also it does math for you, an' keeps books, an' acts as consultin' chemist, physicist, astronomer, an' tea-leaf reader, with a "Advice to the Lovelorn" thrown in. The only thing it won't do is tell you exactly what your wife meant when she said, "Oh, you think so, do you?" in that peculiar kinda voice. Logics don't work good on women. Only on things that make sense.

    It appears that Leinster beat Asimov to the punch; it's possible that Asimov, being a science fiction fan before he was a science fiction writer, even read "Joe". Wikipedia puts "Joe" at 1946, but Multivac at 1955 [wikipedia.org].

  • Re:Predictably (Score:3, Informative)

    by linhares ( 1241614 ) on Friday December 12, 2008 @12:34AM (#26086471)
    Funny comment, but the paper is just bad science. The brain's signals are, in all probability, address-space decoders, and the very idea that one could see images in anything higher than V1 is as loony as thinking the moon is a large cheese. These kinds of imbecilic attempts at fame will continue, but decades before we can read the porn in your brain we will be able to build some (very) intelligent shit.
