Your Brain Uses 'Autocorrect' To Decipher Language and AI Just Helped Us Prove It, New Study Says (thedebrief.org) 52
An anonymous reader quotes a report from The Debrief: How do we how know to speak and to read? These essential questions led to new research from the Massachusetts Institute of Technology that uses AI models to examine how and why our brains understand language. Oddly enough, your brain may work just like your smartphone's autocorrect feature. The new study, published in the Proceedings of the National Academy of Sciences, shows that the function of these AI language models resembles the method of language processing in the human brain, suggesting that the human brain may use next-word prediction to drive language processing.
In this new study, a team of researchers at MIT analyzed 43 different language models, many of which were optimized for next-word prediction. These models include GPT-3 (Generative Pre-trained Transformer 3), which can generate realistic text when given a prompt, as well as others designed to fill in blanks. Researchers presented each model with a string of words to measure the activity of its neural nodes. They then compared these patterns to activity in the human brain, measured while test subjects performed language tasks such as listening, reading full sentences, and reading one word at a time. The study showed that the best-performing next-word prediction models had activity patterns that bore the most resemblance to those of the human brain. In addition, activity in those same models correlated with human behavioral measures, such as how fast people could read the text.
The new study results suggest that next-word prediction is one of the key functions in language processing, supporting a previously proposed hypothesis that had yet to be confirmed. Scientists have not yet found any brain circuits or mechanisms that perform that type of processing. Moving forward, the researchers plan to build variants of the next-word prediction models to see how small changes between each model affect their processing ability. They also plan to combine these language models with computer models developed to perform other brain-like tasks, such as perception of the physical world.
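To make the comparison concrete, here is a minimal sketch of the kind of model-to-brain mapping the summary describes, assuming GPT-2 via the `transformers` library as a stand-in for the study's 43 models, and an invented `brain_responses` array in place of real recordings; the actual MIT pipeline differs in its details:

```python
# Minimal sketch of the model-to-brain comparison described above.
# Assumptions: GPT-2 stands in for the models in the study, and
# `brain_responses` is a made-up (n_tokens, n_voxels) array of recordings.
import numpy as np
import torch
from transformers import GPT2Model, GPT2Tokenizer
from sklearn.linear_model import RidgeCV

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2")

sentence = "The quick brown fox jumps over the lazy dog"
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    # One activation vector per token position: (n_tokens, 768)
    hidden = model(**inputs).last_hidden_state.squeeze(0).numpy()

# Hypothetical brain data: one response vector per token position.
rng = np.random.default_rng(0)
brain_responses = rng.standard_normal((hidden.shape[0], 100))

# Fit a regularized linear map from model activations to brain responses
# and report the in-sample fit (a real analysis would cross-validate).
reg = RidgeCV(alphas=np.logspace(-2, 4, 7)).fit(hidden, brain_responses)
print("fit score:", reg.score(hidden, brain_responses))
```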
Mad libs (Score:2)
Don't need AI (Score:3)
It dons't tkae AI to porve taht hmuans use atuorcrocet to dcefier lagnugae. If yuo udnertsnad tihs snetecne tehn cognrautlaitons, yuo jsut porevd it yorueslf.
Re:Don't need AI (Score:4, Interesting)
It dons't tkae AI to porve taht hmuans use atuorcrocet to dcefier lagnugae. If yuo udnertsnad tihs snetecne tehn cognrautlaitons, yuo jsut porevd it yorueslf.
TFS proved that as soon as you read it - and I'm betting by your failure to mention it that you weren't even aware of the fact at the time. The first sentence of the summary reads "How do we how know to speak and to read?"
BTW, this built-in auto-correct is why copy editors read articles backwards on the first pass. Doing that lets them concentrate on individual words and word order rather than the corrected version that their brains automatically present to their conscious thought streams.
Re: (Score:2)
consider.
the human brain uses a percentage of acceptance for what something is.
a simple application is identifying an ambush predator
Re: (Score:2)
consider. the human brain uses a percentage of acceptance for what something is. a simple application is identifying an ambush predator
Good point. I was thinking of our built-in 'autocorrect' exclusively in the domain of language, but you're right - it's more universal than that.
One of the more interesting areas where this might come into play is in memory. Memories are far, far less reliable and factual, and much more labile, than most people realize. I suspect we autocorrect memories based both on desire, and in accordance with external information that we judge to be more authoritative.
Re: (Score:2)
And the "autocorrect" thing sometimes goes completely awry - especially when you listen to music texts.
Re: (Score:2)
And the "autocorrect" thing sometimes goes completely awry - especially when you listen to music texts.
You mean music lyrics? Like Hendrix singing "Excuse me while I kiss this guy" or the gospel hymn, "Gladly The Cross-eyed Bear"?
Re: (Score:2)
Re: (Score:2)
That's complete nonsense. The brain mostly fills in what you don't look at. When reading at speed, there's a lot on the page we don't look at. But when I slow down enough to read word by word in the forward direction, I get almost no interference from my predictive system unless my mind wanders.
Cook-book misprint costs Australian pub [bbc.co.uk]
ECC (Score:3)
Languages that have gender baked into their grammars have extra (redundant) information encoded into every sentence. Our brains use this extra redundant information like error correcting codes to fill in corrupted or noisy sentences (e.g. misspellings, words overheard in a noisy room, etc.)
https://www.sciencedaily.com/r... [sciencedaily.com]
All languages have some sort of redundant information built in.
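As a toy illustration of the error-correcting-code idea, here's a sketch that uses an article's gender to recover a corrupted noun; the tiny lexicon and the garbled input are invented for the example:

```python
# Toy illustration: gender agreement as redundancy that corrects a
# corrupted noun. The lexicon and the noisy input are made up.
LEXICON = {"carro": "m", "troca": "f", "casa": "f", "caso": "m"}

def correct(article, noisy_noun, lexicon=LEXICON):
    """Pick the closest lexicon word whose gender agrees with the article."""
    wanted = {"el": "m", "la": "f"}[article]
    def distance(a, b):  # crude distance: mismatched letters + length gap
        return sum(x != y for x, y in zip(a, b)) + abs(len(a) - len(b))
    candidates = [w for w, g in lexicon.items() if g == wanted]
    return min(candidates, key=lambda w: distance(noisy_noun, w))

# "casx" overheard in a noisy room: the article disambiguates it.
print(correct("la", "casx"))  # -> casa
print(correct("el", "casx"))  # -> caso
```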
Re: ECC (Score:2)
Re: (Score:2)
Yeah, that's the joke.
Latin people are overwhelmingly not adopting the word "Latinx," which is frankly demeaning. You can't put all those nationalities and cultures under one umbrella. Hispanic was a pretty good descriptor because it means descended from people who speak/spoke Spanish; it didn't try to lump people together in any way that wasn't factual. I'm not Latinx, I'm Mexican-American.
Re: (Score:3)
Haha, yeah, I'm Mexican American too and I agree. In fact a month or so ago during a meeting I learned that some idiot has coined yet another new term for us: Latiné. I laughed out loud and told everyone that I'd rather just be called "That Mexican guy." This is another example of how people try to use language in order to exonerate themselves of racism, but often end up being racist anyway. We are now officially required to use Latine, even though I'm not even done laughing at Latinx, which came on th
Re: ECC (Score:2)
La troca es bonita.
El carro es bonito.
For those who can understand a BIND config but can't manage a foreign language, here's an example of the gender "parity" the OP is referring to, and it has nothing to do with people. Not the driver, occupants, owner, etc.; it's just that one noun is feminine and the other masculine. The iOS word suggestions and spell checker definitely figured out that some words follow other words, but I doubt it really cares about grammar, just statistics? You can mess the gender up
Re: (Score:2)
Languages that have gender baked into their grammars have extra (redundant) information encoded into every sentence.
Gendered words aside, it does help to have some redundancy in a language. I have noticed this in experiments with designing computer languages. If you chop out all the redundant declarations and get a clever compiler to infer what the code means, a typo might result in something that the compiler will accept but is certainly not what you meant.
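A minimal illustration of that failure mode, using Python precisely because it has no redundant declarations to catch the typo:

```python
# Without redundant declarations, a typo produces code the
# interpreter happily accepts instead of flagging an error.
def total_price(items):
    total = 0
    for price in items:
        totl = total + price   # typo: silently binds a brand-new variable
    return total               # always returns 0

print(total_price([1, 2, 3]))  # 0, not 6 -- the typo compiled just fine
```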
Re: (Score:1)
Did someone say "gender"? Well I identify as an attack helicopter! LOLOL!!
I hope not (Score:2)
Re: (Score:2)
I think so. The article said "autocorrect", but you seem to have read it as "autocorrect." To put it in other words that might get past your autocorrect filter, the article is actually about squids.
Re: (Score:2)
Christ I hope my brain is not as bad as the autocorrect on phones.
Our brains kind of are that bad, actually. But "bad" is good, when it's the best you can do.
I'm hearing impaired, and my brain makes some pretty hilarious word guesses sometimes. It puts together a best guess based on the input it can get.
Computer speech understanding is impossible (Score:2)
If you just look at the sound and try to make it into words, there is just not enough information there. That is why we saw speech understanding plateau after a while. Not so much weak algorithms as an impossible task.
But it is not just prediction. It is taking sounds that could be several different words and figuring out which is the most likely. That involves grammar analysis and common word-combination analysis, but also understanding the context of what is being discussed. And it is phrases, not ju
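A sketch of that kind of disambiguation, with invented probabilities standing in for a real acoustic model and language model:

```python
# Ambiguous acoustics resolved by a language prior. All numbers are
# made up for illustration.
acoustic = {"ship": 0.51, "sheep": 0.49}   # the sound fits either word

prior = {                                   # P(word | preceding context)
    "the cargo ":  {"ship": 0.90, "sheep": 0.10},
    "the woolly ": {"ship": 0.05, "sheep": 0.95},
}

def decode(context):
    # Bayes-style combination: maximize P(sound|word) * P(word|context)
    return max(acoustic, key=lambda w: acoustic[w] * prior[context][w])

print(decode("the cargo "))   # -> ship
print(decode("the woolly "))  # -> sheep
```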
Preprint link (Score:1)
Similar to numbers, then? (Score:2)
a new title for the fact that you have (Score:2)
Yes, your brain is bayesian in nature (Score:3)
They'll find that human brains essentially load X number of high-probability next-word combinations continuously. That's what jokes do: they take unexpected paths or use unexpected combinations of words.
Wasn't this settled in the 80s or 90s?
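One way to see those "high-probability next word combinations" is to ask an actual next-word predictor; here's a sketch using GPT-2 from the `transformers` library, standing in for whatever the brain really does:

```python
# Print the top continuations a language model expects; a joke
# takes one of the paths that isn't on this list.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2")

ids = tok("Take my wife,", return_tensors="pt").input_ids
with torch.no_grad():
    logits = lm(ids).logits[0, -1]        # scores for the next token
probs = torch.softmax(logits, dim=-1)

top = torch.topk(probs, 5)
for p, i in zip(top.values, top.indices):
    print(f"{tok.decode([int(i)])!r}: {p.item():.3f}")
```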
Re: (Score:2)
Does this in any way (Score:2)
explain the guy in school who would repeat (mutter) what he just said under his breath every time he talked?
Re: (Score:2)
Not "prove" - "suggest" (Score:4, Insightful)
The title is wrong. The article says
“It’s amazing that the models fit so well, and it very indirectly suggests that maybe what the human language system is doing is predicting what’s going to happen next.”
Re: (Score:2)
The editor autocorrected it, thus proving the model. Ish.
Re: (Score:2)
I thought this was the opposite o
More tales of the bleeding obvious (Score:2)
in other news, sky reportedly blue; water may be wet but we need a grant to check.
Fuck's sake.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Wait, why'd that post anonymously? I'm signed in and didn't check that silly box...
Well obviously you didn't check the box - if you had you would have realized that it was checked and you would have un-checked it!
Ain't English grand?
Re: (Score:2)
Re: (Score:2)
Yeah, the previous story was about Google's MetaVerse, but my autocorrect read Malware.
Too bad your autocorrect didn't change "Google's" to "Facebook's" for you.
Oh gees... (Score:2)
> How do we how know to speak and to read?
Some of us can, those that can't become /. editors.
Back on topic... There was a thing a few years back about how you can read words with the mdldie leterts jmbelud up. Autocorrect at its best, I'd say ;-)
Re: Oh gees... (Score:2)
Some of us can
Some of us were brought up reading The Grauniad.
Sounds similar to aural processing (Score:2)
Welcome to everything (Score:2)
It's got nothing to do with language. I'll be stunned if they find any language processing like that. That's just what brains do with absolutely everything. The next turn, the next word, the next weather pattern, the next job task, the next key-press, the next opinion, the next behaviour, the next taste, the next trade, the next trend. It's why we have stereotypes and politeness. It's why we have forecasts and fortune tellers. It's why we have astronomers and astrologers. Psychologists and psychics.
Re: (Score:2)
Re: (Score:2)
I can attest to that -- more times per day the older I get. The one that caught me when I started driving decades ago was having my eyes tell me that the stop sign was near when it was actually still many seconds away. Felt like an idiot stopping 100 yards from it.
I'll also add that there can be a half-second delay every time you turn your gaze to something new -- most obvious with a fast-moving clock that seems to freeze for a very long "second" until you realize it's a clock, and then you can't repeat
Y dnt sy (Score:1)
I hope not (Score:2)
Oddly enough, your brain may work just like your smartphone's autocorrect feature
I really hope no one's brain is as bad as a smartphone's autocorrect.
So those ads in the comic books: (Score:2)
were really onto something.
Autocorrect worth having (Score:2)
Any smartphone autocorrect feature worth having would have toasted that use of the phrase "just like" with a full-on lithium battery fire.