
A Brain Scanner Combined With an AI Language Model Can Provide a Glimpse Into Your Thoughts

An anonymous reader quotes a report from Scientific American: Functional magnetic resonance imaging (fMRI) captures coarse, colorful snapshots of the brain in action. While this specialized type of magnetic resonance imaging has transformed cognitive neuroscience, it isn't a mind-reading machine: neuroscientists can't look at a brain scan and tell what someone was seeing, hearing or thinking in the scanner. But gradually scientists are pushing against that fundamental barrier to translate internal experiences into words using brain imaging. This technology could help people who can't speak or otherwise outwardly communicate, such as those who have suffered strokes or are living with amyotrophic lateral sclerosis. Current brain-computer interfaces require the implantation of devices in the brain, but neuroscientists hope to use non-invasive techniques such as fMRI to decipher internal speech without the need for surgery.

Now researchers have taken a step forward by combining fMRI's ability to monitor neural activity with the predictive power of artificial intelligence language models. The hybrid technology has resulted in a decoder that can reproduce, with a surprising level of accuracy, the stories that a person listened to or imagined telling in the scanner. The decoder could even guess the story behind a short film that someone watched in the scanner, though with less accuracy. "There's a lot more information in brain data than we initially thought," said Jerry Tang, a computational neuroscientist at the University of Texas at Austin and the study's lead author, during a press briefing. The research, published on Monday in Nature Communications, is what Tang describes as "a proof of concept that language can be decoded from noninvasive recordings of brain activity."

The decoder technology is in its infancy. It must be trained extensively for each person who uses it, and it doesn't construct an exact transcript of the words they heard or imagined. But it is still a notable advance. Researchers now know that the AI language system, an early relative of the model behind ChatGPT, can help make informed guesses about the words that evoked brain activity just by looking at fMRI brain scans. While current technological limitations prevent the decoder from being widely used, for good or ill, the authors emphasize the need to enact proactive policies that protect the privacy of one's internal mental processes. [...] The model misses a lot about the stories it decodes. It struggles with grammatical features such as pronouns. It can't decipher proper nouns such as names and places, and sometimes it just gets things wrong altogether. But it achieves a high level of accuracy compared with past methods: between 72 and 82 percent of the time, the decoder captured the meaning of the stories more accurately than would be expected by random chance.
Here's an example of what one study participant heard, as transcribed in the paper: "i got up from the air mattress and pressed my face against the glass of the bedroom window expecting to see eyes staring back at me but instead finding only darkness." The model went on to decode: "i just continued to walk up to the window and open the glass i stood on my toes and peered out i didn't see anything and looked up again i saw nothing."
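The decoding loop the summary describes — a language model proposes candidate words, and the decoder keeps the candidate whose predicted brain response best matches the observed fMRI activity — can be sketched as a toy in Python. Everything below (the tiny vocabulary, the fake `predict_response` encoding model, the 4-voxel "scans") is invented for illustration; it is a minimal sketch of the idea, not the paper's actual code or data.

```python
# Toy sketch: decode a word from a simulated brain scan by checking
# which candidate's predicted response best explains the observation.
# The "encoding model" here is fake and deterministic per word.

import random

VOCAB = ["i", "walked", "to", "the", "window", "glass", "darkness"]

def predict_response(word):
    """Hypothetical encoding model: map a word to a fake 4-voxel response."""
    rng = random.Random(word)  # deterministic per word (str seeding is stable)
    return [rng.random() for _ in range(4)]

def similarity(a, b):
    """Negative squared error: higher means a closer match."""
    return -sum((x - y) ** 2 for x, y in zip(a, b))

def decode_step(observed, candidates):
    """Pick the candidate whose predicted response best matches the scan."""
    return max(candidates,
               key=lambda w: similarity(predict_response(w), observed))

# Simulate a scan evoked by the true word "window", then decode it back.
observed = predict_response("window")
guess = decode_step(observed, VOCAB)
print(guess)
```

In the real system the candidate proposals come from a neural language model and the encoding model is fit per participant on hours of their scans — which is why, as the summary notes, the decoder must be retrained for each user.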

  • ... for you to periodically prove that you are not one of those "dissenters"?
    • by gweihir ( 88907 )

      With the current course being steered? Not long. And the corrupted US Supreme Court is going to do nothing.

    • To moderate your argument: though there are inexpensive, automated, made-in-China polygraphs, they don't appear to be routinely used to find dissidents (in your sense of it soon becoming mandatory to periodically prove something). An fMRI machine costs millions, needs specialized personnel and injections, and takes hours to complete a single exam; this is going to have a hard time taking off.

    • You're worrying too much. Those who want to know "what you think" have known it for ages from your facebook, twitter and slashdot shitposting; they don't need "AI models". You're not so hard to crack.

      • It's not the sheep that are the problem, it's the wolves. The ones who will curate a sheep's social media presence and go undetected.

        Or there are people like me, who have a minimal social media presence - currently I only read and post on Slashdot, and so far as I'm aware there are no obvious links to my real world identity in my post history. So somebody researching me would probably classify me as a potential risk because I'm "obviously hiding something".

        Not that you want to live in a world with brain s

  • ... welcome our new AI Powered Brain Scanning overlords
  • Because learning a 2nd language and becoming fluent in it usually teaches you how to think non-verbally as well. Cannot have the slaves being able to hide their thoughts in fascist America, now can we?

    • I know 3 languages, for some value of "know", and I can't think nonverbally for more than a few seconds before the words start pouring in.

      Maybe you're just special and are projecting your own experience on others?

      Feynman claimed he could count in his head while reading but not writing. And his college roommate could only do the opposite. Or something.

      People are all kinds of unique between the ears. Maybe that's why this gizmo needs to be retrained for each user.

      • by gweihir ( 88907 )

        Yes, some people just acquire the ability to think in that foreign language as well. I have no idea what percentage of learners this works for, and I'm unable to find a reference online, but learning a 2nd language is generally said to be the main trigger for developing this skill. As far as I remember, I discussed it with my language teacher (English and French) back in school. Admittedly, it may have been a hype back then, and there may be far less substance to it in general than I thought.

        May also be connect

    • LOL, multiple languages are hardly a barrier. My phone's scanner already has an OCR module that can parse a wall of scanned text in three or four languages quite well.

      • by gweihir ( 88907 )

        I am not talking about thinking in a different language. I am talking about nonverbal thinking. Although thinking in a different language may give some protection as the patterns may be different than for your first language. I have no idea, really. Looks like more research is needed.

  • I wouldn't be surprised if 20 years from now, "Black Mirror" is regarded as a documentary.

  • The model went on to decode ...

    This suggests the brain encodes information in more than one modality. That's not a difficult assumption: since people can speak multiple languages, there must be some way for the brain to convert noun/verb/adverb/emotion thoughts into multiple words within a language (synonyms) and between languages.

    I suspect brain decoding won't be accurate until a whole brain can be simulated and customized to match the target brain.

    Just like witch-pricking, phrenology, early psychiatry, graphology, polygraphs and state
