AI Science

AI Hints at How the Brain Processes Language (axios.com)

Predicting the next word someone might say -- like AI algorithms now do when you search the internet or text a friend -- may be a key part of the human brain's ability to process language, new research suggests. From a report: How the brain makes sense of language is a long-standing question in neuroscience. The new study demonstrates how AI algorithms that aren't designed to mimic the brain can help to understand it. "No one has been able to make the full pipeline from word input to neural mechanism to behavioral output," says Martin Schrimpf, a Ph.D. student at MIT and an author of the new paper published this week in PNAS.

The researchers compared 43 machine-learning language models, including OpenAI's GPT-2 model, which is optimized to predict the next words in a text, to brain-scan data of how neurons respond when someone reads or hears language. They gave each model words and measured the response of nodes in the artificial neural networks that, like the brain's neurons, transmit information. Those responses were then compared to the activity of neurons -- measured with functional magnetic resonance imaging (fMRI) or electrocorticography -- when people performed different language tasks. The activity of nodes in the AI models that are best at next-word prediction was similar to the patterns of neurons in the human brain. These models were also better at predicting how long it took someone to read a text -- a behavioral response. Models that excelled at other language tasks -- like filling in a blank word in a sentence -- didn't predict the brain responses as well.
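The comparison described above -- fitting a mapping from a model's internal activations to recorded neural responses and scoring held-out predictions -- can be sketched in a few lines. This is a simplified illustration, not the paper's actual pipeline: the data here are synthetic (random activations and a simulated "brain" that is a noisy linear readout of them), and all array names and sizes are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: activations of a model's nodes and neural responses
# for the same 100 stimuli. Sizes are arbitrary for illustration.
n_stimuli, n_nodes, n_voxels = 100, 32, 16
model_acts = rng.normal(size=(n_stimuli, n_nodes))

# Simulate brain recordings as a noisy linear readout of the model's
# activations, standing in for fMRI/ECoG measurements.
true_map = rng.normal(size=(n_nodes, n_voxels))
brain = model_acts @ true_map + 0.5 * rng.normal(size=(n_stimuli, n_voxels))

# Fit a linear mapping from model activations to brain responses on a
# training split, then predict responses for held-out stimuli.
train, test = slice(0, 80), slice(80, 100)
weights, *_ = np.linalg.lstsq(model_acts[train], brain[train], rcond=None)
pred = model_acts[test] @ weights

def pearson(a, b):
    # Column-wise Pearson correlation between predictions and recordings.
    a = a - a.mean(axis=0)
    b = b - b.mean(axis=0)
    return (a * b).sum(axis=0) / np.sqrt((a**2).sum(axis=0) * (b**2).sum(axis=0))

scores = pearson(pred, brain[test])
print(round(float(scores.mean()), 2))  # higher score = closer match to the "brain"
```

In this framing, a model whose activations better linearly predict held-out neural responses gets a higher score -- the rough sense in which the study ranks next-word-prediction models as more "brain-like" than models trained on other tasks.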



Comments:
  • Maybe AI can explain the process of dupes in the brain.

    • [FP was natural stupidity. Only an in-law meta-relation to AI and not a constructive contribution to dialog.]

      Story is of interest to me, but I always want to wait for the book. So I'll go ahead and throw out a couple of books related to the topic. The best may have been Descartes' Error by Antonio Damasio. Mechanical approach? Maybe I Am a Strange Loop by Douglas Hofstadter, though his earlier Gödel, Escher, Bach is better. Of recent relevance is The Enigma of Reason by Mercier and Sperber. Maybe so

      • You cannot understand the brain. Ever.
        • by shanen ( 462549 )

          Unclear, but I think your comment is some flavor of "perfection is the enemy of the good".

          On the one hand, I insist we can understand anything, even something as complex as the human brain, better. On the other hand, I doubt we understand anything, even something as trivial as your reply, perfectly.

          In more worldly and concrete terms, I think it is obvious that some corporate cancers understand human brains quite well enough to manipulate them with extremely negative results. Consider the Meta-Facebook or th

  • , including OpenAI's GPT-2 model that is optimized to predict the next words in a text,

    I've seen some headlines (can't get to them now) where the next word is anyone's guess. They were not the straightforward "Man robs home" or "Car crashes into tree." They were more the "Florida man robs drug dealer to pay for dog's surgery" type.

    Start feeding those types into the AI and see what it comes up with.
  • Anytime we have a new technology -- clocks, computing -- there are always analogies to the brain.

    We know we fill in the blanks. This is why so many people get convicted for crimes they did not commit. This is why we can have innuendo

    I met a guy who drives a truck

    He can't tell time but he sure can ____

    But analogy does not lead to process.

    • There are theories suggesting that intent affects reasoning, so the original intent must be considered part of the logic, whatever that intent is. And the more inputs echo that intent, the more likely that intent is the most logical path to a conclusion.

  • If you build a model that seems to predict results that happen in real life, you can't just turn around and say, "And therefore, that's how it works in real life." It's still just a model.

    A 12-inch-long remote-controlled car might look exactly like a Tesla, down to exact scale parts, and it might run on an electric motor. But the glaring, obvious differences between that and a real Tesla only begin with the fact that I can't get inside and drive it.

    • But at the same time, knowledge is measured by how much of what we know can be applied to actual tasks. Nobody rewards a math major simply for knowing math, but what if that math major creates a practical way to predict how matter will behave with that math knowledge? All while knowing that applying that knowledge is still far more practical than merely observing the event itself.
