Researchers Claim Their AI Algorithm Can Recreate What People See Using Brain Scans (science.org)

Slashdot readers madsh, Ellis Haney, and sciencehabit all submitted this report from Science: A recent study, scheduled to be presented at an upcoming computer vision conference, demonstrates that AI can read brain scans and re-create largely realistic versions of images a person has seen....

Many labs have used AI to read brain scans and re-create images a subject has recently seen, such as human faces and photos of landscapes. The new study marks the first time an AI algorithm called Stable Diffusion, developed by a German group and publicly released in 2022, has been used to do this.... For the new study, a group in Japan added training to the standard Stable Diffusion system, linking text descriptions of thousands of photos to the brain patterns elicited when participants in brain-scan studies viewed those photos. Unlike previous efforts that used AI algorithms to decipher brain scans, which had to be trained on large data sets, Stable Diffusion was able to get more out of less training for each participant by incorporating the photo captions into the algorithm....

The AI algorithm makes use of information gathered from different regions of the brain involved in image perception, such as the occipital and temporal lobes, according to Yu Takagi, a systems neuroscientist at Osaka University who worked on the experiment. The system interpreted information from functional magnetic resonance imaging (fMRI) brain scans, which detect changes in blood flow to active regions of the brain. When people look at a photo, the temporal lobes predominantly register information about the contents of the image (people, objects, or scenery), whereas the occipital lobe predominantly registers information about layout and perspective, such as the scale and position of the contents. All of this information is recorded by the fMRI as it captures peaks in brain activity, and these patterns can then be reconverted into an imitation image using AI. In the new study, the researchers added additional training to the Stable Diffusion algorithm using an online data set provided by the University of Minnesota, which consisted of brain scans from four participants as they each viewed a set of 10,000 photos.
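
As a rough illustration of that per-region decoding step, the sketch below fits one linear decoder per brain area, mapping voxel responses to the two kinds of conditioning signal an image generator needs. The array names, sizes, and the choice of ridge regression are assumptions made for illustration, not the study's published pipeline.

    # Illustrative sketch only: one linear decoder per brain region, mapping fMRI
    # voxel responses to the two conditioning signals described above. Array names,
    # sizes, and the use of ridge regression are assumptions, not the study's code.
    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)

    # Hypothetical preprocessed training data for ONE participant:
    #   occipital voxels carry layout/low-level information,
    #   temporal voxels carry content/semantic information.
    occipital_train = rng.standard_normal((800, 2000))   # (images, occipital voxels)
    temporal_train = rng.standard_normal((800, 3000))    # (images, temporal voxels)
    latent_train = rng.standard_normal((800, 1024))      # target: image-latent features
    caption_emb_train = rng.standard_normal((800, 768))  # target: caption embeddings

    # Fit one ridge decoder per region (multi-output regression).
    latent_decoder = Ridge(alpha=1.0).fit(occipital_train, latent_train)
    caption_decoder = Ridge(alpha=1.0).fit(temporal_train, caption_emb_train)

    # A new scan from the same participant is decoded into both signals,
    # which would then condition the image generator.
    new_occipital = rng.standard_normal((1, 2000))
    new_temporal = rng.standard_normal((1, 3000))
    pred_latent = latent_decoder.predict(new_occipital)
    pred_caption_emb = caption_decoder.predict(new_temporal)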

If a study participant showed the same brain pattern, the algorithm sent words from that photo's caption to Stable Diffusion's text-to-image generator.
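
A minimal sketch of that last hand-off, assuming the Hugging Face diffusers library and the publicly released 2022 checkpoint, might look like the following; the caption string is a placeholder rather than a decoded example from the study.

    # Illustrative only: sending a recovered caption to the publicly released
    # Stable Diffusion model via the Hugging Face diffusers library. The checkpoint
    # name and the caption string are placeholders, not details from the study.
    # Requires a GPU with sufficient memory plus the diffusers/transformers packages.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "CompVis/stable-diffusion-v1-4",   # the 2022 public release mentioned above
        torch_dtype=torch.float16,
    ).to("cuda")

    decoded_caption = "a clock tower against a blue sky"  # hypothetical decoded caption
    image = pipe(decoded_caption).images[0]               # text-to-image generation
    image.save("reconstruction.png")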

Iris Groen, a neuroscientist at the University of Amsterdam who was not involved with the work, told Science that "The accuracy of this new method is impressive."

Comments Filter:
  • by Drethon ( 1445051 ) on Saturday March 11, 2023 @04:50PM (#63362371)

    I'm not sure if I'm impressed with the AI, or with the sensors that allow enough detail for pattern recognition to recognize the images from brainwaves.

  • Dream Recorder (Score:5, Interesting)

    by Jamu ( 852752 ) on Saturday March 11, 2023 @05:07PM (#63362407)
    I wonder if this could be used to record dreams.
    • If the visual cortex is busy when dreaming, for sure. However, it could be that dreams activate parts of the brain that the visual components feed into, so no. Crazy that we're at a point where we could actually do the experiment: you just need a large dose of hallucinogens and an AI or human to give descriptions, so the tripper can give a thumbs up or down on whether the description is right, or some other feedback mechanism.

      • Just asked ChatGPT, for what it's worth:

        "Yes, the visual cortex is active during both hallucinations and dreams. When we hallucinate, the brain creates perceptions of something that is not actually present, often due to a sensory or perceptual error. During this process, the visual cortex can become activated, leading to the perception of visual images.

        Similarly, during dreaming, the brain generates a range of vivid sensory experiences, including visual images. Studies using neuroimaging techniques such as..."

      • How about when you are thinking of something you saw?

        Maybe you witnessed a crime, and you sit through this scanning while thinking about / recalling what you saw, so that the investigators have a very good image of what you actually saw?

        If thinking of something you saw activates the visual cortex to "let you see it in your mind", this could be the next lie detector, sort of.

        Slippery slope, I guess, especially if it also shows things you recalled accidentally during the scan.

    • by narcc ( 412956 )

      Why not? Imaginary technology can do anything you want.

    • Came here to ask this very thing. If it worked, that'd be a technological feat that's the stuff of....well, dreams.
    • by RobinH ( 124750 )
      This is what I was thinking. There's actually a movie called "Until the End of the World" where this is a primary plot point.
    • As they die
    • Or the imagination.
      What if you could open the field of visual artistic expression to (almost) everyone, without the need to learn to use a paintbrush or Photoshop or the rules of perspective and anatomy?

  • Everyone knows the retinal terminus theory [youtube.com] says the last image a dying person sees is burned into their retina. All you have to do is mount their head and shine a bright light through their eyes to see that image.

    Granted, you can only do this once, but still . . .

  • by ceoyoyo ( 59147 ) on Saturday March 11, 2023 @06:10PM (#63362509)

    What they can do is record you looking at an image and choose which one it is out of the ones they've recorded you looking at previously.

    You still only get a very rough image, but if you take the text description from the training set and feed it to Stable Diffusion, it will give you back a nice picture.

    The previous work reconstructing images from fMRI of the visual system is impressive, although it's still not arbitrary images. As far as I can tell, this is really more just messing around with Stable Diffusion.

    • Indeed. They're just matching patterns/signals generated from their own fixed dataset.

      First they record the patterns whilst the subject looks at a known image, then they show that image again & try to make a blind match.

      Each image has a text description, and when they make a match, they throw that text description to Stable Diffusion.

      They could have just as well given each image a number & reported that number when they got a correlated signal (a minimal sketch of this matching idea appears at the end of the thread).

      Not to diminish their work. It's still all very impressive.

      • by ceoyoyo ( 59147 )

        It isn't really that scientifically impressive when you've done something that's pretty common but, instead of reporting "image 1", put the description of that image into Stable Diffusion. It's press-release impressive, though.

          • It's not necessarily new or groundbreaking; however, I definitely wouldn't call it "common". Especially being able to reliably identify 10,000 different complex neurological patterns. I'm also surprised that different people generate similar patterns for the same image. Given our non-homogeneous biological nature, I would have expected more variation between individuals.

          • by ceoyoyo ( 59147 )

            "I definitely wouldn't call it 'common'."

            It's an open source dataset (they didn't collect it). Anybody can do the same thing.

            "I'm also surprised that different people generate similar patterns for the same image."

            They trained specific models for each of the four participants.

    • At least it shows that there is some similar organization in the brain among individuals at more than a gross level. They seemed to handwave away the differences as a measurement problem, though, rather than a self-organization problem.
      • by ceoyoyo ( 59147 )

        I wouldn't conclude that. They say all their models were built on a per-subject basis. I don't see anywhere where they compare the consistency of the activation patterns. They only have a brief discussion of how their (subject-specific) models produce varying quality of results.

        The human visual system has been mapped with fMRI and other means, and the lower levels are fairly consistent.
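
For the "report the number of the matched image" point raised in this thread, a minimal sketch of that matching interpretation, with entirely hypothetical data and names, could look like this:

    # Illustrative only: the "matching" reading of the result. Correlate a new scan
    # against patterns previously recorded for the same subject and report the
    # caption (or just the index) of the best match. All data here are hypothetical.
    import numpy as np

    def best_match(new_scan: np.ndarray, library: np.ndarray) -> int:
        """Index of the recorded pattern most correlated with new_scan."""
        lib = library - library.mean(axis=1, keepdims=True)
        lib /= np.linalg.norm(lib, axis=1, keepdims=True)
        new = new_scan - new_scan.mean()
        new /= np.linalg.norm(new)
        return int(np.argmax(lib @ new))

    rng = np.random.default_rng(0)
    library = rng.standard_normal((10_000, 3000))   # one recorded pattern per viewed photo
    captions = [f"caption {i}" for i in range(10_000)]

    idx = best_match(rng.standard_normal(3000), library)
    print(idx, captions[idx])   # reporting "image idx" or its caption carries the same information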

  • Go watch the movie "Futureworld" to see what I mean.
  • What would be even better would be if they could lift images from the brain of someone recently murdered.
    It seems like cool tech with good intentions, but, as is human nature, some of us always seem to find ways of turning these things to great evil.
