
Brain Imaging Reveals the Movies In Our Mind

wisebabo sends word that scientists from UC Berkeley have developed a method for scanning brain activity and then constructing video clips that represent what took place in a person's visual cortex (abstract). The technology is obviously quite limited, and "decades" away from any kind of sci-fi-esque thought reading, but it's impressive nonetheless. From the news release: "[Subjects] watched two separate sets of Hollywood movie trailers, while fMRI was used to measure blood flow through the visual cortex, the part of the brain that processes visual information. On the computer, the brain was divided into small, three-dimensional cubes known as volumetric pixels, or 'voxels.' ... The brain activity recorded while subjects viewed the first set of clips was fed into a computer program that learned, second by second, to associate visual patterns in the movie with the corresponding brain activity. Brain activity evoked by the second set of clips was used to test the movie reconstruction algorithm. This was done by feeding 18 million seconds of random YouTube videos into the computer program so that it could predict the brain activity that each film clip would most likely evoke in each subject. Finally, the 100 clips that the computer program decided were most similar to the clip that the subject had probably seen were merged to produce a blurry yet continuous reconstruction of the original movie."
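
The release describes, in effect, a three-stage pipeline: fit an encoding model from movie features to voxel responses, predict the response every library clip would evoke, then average the best-matching clips. The sketch below is a toy illustration of that pipeline, not the Berkeley group's actual code: the array shapes, the ridge regression, and the correlation scoring are all assumptions, and random arrays stand in for the real visual features and fMRI data.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
T, F, V = 3600, 256, 500            # seconds, visual features, voxels (made-up sizes)

# Stand-ins for the training movie's visual features and measured voxel responses.
X_train = rng.standard_normal((T, F))
Y_train = X_train @ rng.standard_normal((F, V)) + 0.1 * rng.standard_normal((T, V))

# 1. Learn, second by second, to associate visual patterns with brain activity.
encoder = Ridge(alpha=1.0).fit(X_train, Y_train)

# 2. Predict the brain activity each library clip would most likely evoke
#    (the study used ~18 million seconds of YouTube video; 20,000 here).
library_features = rng.standard_normal((20_000, F))
predicted = encoder.predict(library_features)        # shape (clips, voxels)

# 3. Score each library clip against one second of measured activity by
#    correlating predicted and observed voxel patterns, then keep the top 100.
observed = Y_train[0]
zp = (predicted - predicted.mean(1, keepdims=True)) / predicted.std(1, keepdims=True)
zo = (observed - observed.mean()) / observed.std()
scores = zp @ zo / V                                 # Pearson r per clip
top100 = np.argsort(scores)[-100:]

# Averaging the pixel frames of these 100 clips (not shown here, since this toy
# has no frames) is what yields the blurry but continuous reconstruction.
```

Averaging a hundred merely similar clips is also why the output is blurry: the composite keeps what the top matches share and washes out everything they don't.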

  • Re:Recording (Score:4, Interesting)

    by Doc Ruby ( 173196 ) on Friday September 23, 2011 @10:40AM (#37491430) Homepage Journal

    Well, what this system does is select, from a library of video clips, the 100 clips whose brain signatures when viewed were closest to the brain signature sensed later. So the video is lo-fi because of the averaging of the images. More signal processing, particularly a stats model for excluding red herrings, will give this system higher specificity in selecting the video to match the sensed signature (see the sketch after this comment).

    It will take another breakthrough to generate a 3D image (more likely than a 2D image, which is really just a derived artifact of the 3D brain activity) directly from the sensed brain activity, rather than selecting a correlating video. But that breakthrough is at least as likely to occur in the software/data video realm as in the brain/sensor realm, because we might be able to use large video libraries, and swatches within their 2D images, as primitives from which to synthesize the 3D visual image rather than building the recreated images from raw voxels. That is, I believe, precisely how the brain does it.

    What will also come along is reading and stimulating other brain sensory (apperceptory, really) regions that are active when we think we're having a hi-fi memory. Often the details are not remembered, but we "remember" that we are remembering the details, in the "metadata". That level of resynthesis will require the breakthrough of stimulating those brain regions, not just reading mappable sensory regions. But with this research at UC Berkeley, I think we are now in the (very long and difficult) phase of "working out the details". We are over the watershed into the new age.
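
One purely illustrative reading of the parent's "stats model for excluding red herrings" (my construction, not anything from the paper): rather than always averaging a fixed top 100, z-score the match scores and keep only clips that stand clearly above the bulk of the distribution. The function name, cutoff, and fallback are all hypothetical.

```python
import numpy as np

def select_matches(scores, k=100, z_cutoff=2.0):
    """Keep at most k clips, preferring those whose match score is a clear
    statistical outlier above the rest (the hypothetical red-herring filter)."""
    z = (scores - scores.mean()) / scores.std()
    strong = np.flatnonzero(z > z_cutoff)
    if strong.size == 0:                      # nothing distinctive: fall back to top k
        return np.argsort(scores)[::-1][:k]
    return strong[np.argsort(scores[strong])[::-1]][:k]
```

Fewer, better-matched clips means less averaging, and hence sharper composites, which is the higher specificity the parent predicts.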
