AI Science

Your Brain Uses 'Autocorrect' To Decipher Language and AI Just Helped Us Prove It, New Study Says (thedebrief.org) 52

An anonymous reader quotes a report from The Debrief: How do we how know to speak and to read? These essential questions led to new research from the Massachusetts Institute of Technology that uses AI models to examine how and why our brains understand language. Oddly enough, your brain may work just like your smartphone's autocorrect feature. The new study, published in the Proceedings of the National Academy of Sciences, shows that the function of these AI language models resembles the method of language processing in the human brain, suggesting that the human brain may use next-word prediction to drive language processing.

In this new study, a team of researchers at MIT analyzed 43 different language models, many of which were optimized for next-word prediction. These models included GPT-3 (Generative Pre-trained Transformer 3), which can generate realistic text when given a prompt, as well as others designed to fill in blanks. Researchers presented each model with a string of words to measure the activity of its neural nodes. They then compared these patterns to activity in the human brain, measured when test subjects performed language tasks like listening, reading full sentences, and reading one word at a time. The study showed that the best-performing next-word prediction models had activity patterns that bore the most resemblance to those of the human brain. In addition, activity in those same models also correlated with human behavioral measures, such as how fast people could read the text.

The new study results suggest that next-word prediction is one of the key functions in language processing, supporting a previously proposed hypothesis that had yet to be confirmed; scientists have not identified any brain circuits or mechanisms that conduct that type of processing. Moving forward, the researchers plan to build variants of the next-word prediction models to see how small changes between each model affect their processing ability. They also plan to combine these language models with computer models developed to perform other brain-like tasks, such as perception of the physical world.
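The core idea in the summary, next-word prediction, can be sketched with a toy bigram model. This is only an illustration of the concept (the corpus and function names here are made up); the study itself compared large transformer models such as GPT-3 against brain recordings.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: a bigram model over a tiny corpus.
corpus = "the brain predicts the next word and the brain reads".split()

# Count how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the most frequently observed word after `word`."""
    counts = bigrams[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "brain" (seen twice after "the", vs. "next" once)
```

A real next-word model assigns probabilities over an entire vocabulary conditioned on long contexts, but the principle, ranking candidate continuations by likelihood, is the same.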

This discussion has been archived. No new comments can be posted.


Comments Filter:
  • So basically we are just walking mad libs generators? That is really ( )!
    • It dons't tkae AI to porve taht hmuans use atuorcrocet to dcefier lagnugae. If yuo udnertsnad tihs snetecne tehn cognrautlaitons, yuo jsut porevd it yorueslf.

      • Re:Don't need AI (Score:4, Interesting)

        by jenningsthecat ( 1525947 ) on Tuesday October 26, 2021 @09:07AM (#61927949)

        It dons't tkae AI to porve taht hmuans use atuorcrocet to dcefier lagnugae. If yuo udnertsnad tihs snetecne tehn cognrautlaitons, yuo jsut porevd it yorueslf.

        TFS proved that as soon as you read it - and I'm betting by your failure to mention it that you weren't even aware of the fact at the time. The first sentence of the summary reads "How do we how know to speak and to read?"

        BTW, this built-in auto-correct is why copy editors read articles backwards on the first pass. Doing that lets them concentrate on individual words and word order rather than the corrected version that their brains automatically present to their conscious thought streams.

        • consider.
          the human brain uses a percentage of acceptance for what something is.
          a simple application is identifying an ambush predator

          • consider. the human brain uses a percentage of acceptance for what something is. a simple application is identifying an ambush predator

            Good point. I was thinking of our built-in 'autocorrect' exclusively in the domain of language, but you're right - it's more universal than that.

            One of the more interesting areas where this might come into play is in memory. Memories are far, far less reliable and factual, and much more labile, than most people realize. I suspect we autocorrect memories based both on desire, and in accordance with external information that we judge to be more authoritative.

        • by Z00L00K ( 682162 )

          And the "autocorrect" thing sometimes goes completely awry - especially when you listen to music texts.

          • by tsqr ( 808554 )

            And the "autocorrect" thing sometimes goes completely awry - especially when you listen to music texts.

            You mean music lyrics? Like Hendrix singing "Excuse me while I kiss this guy" or the gospel hymn, "Gladly The Cross-eyed Bear"?

        • by epine ( 68316 )

          Doing that lets them concentrate on individual words and word order rather than the corrected version that their brains automatically present to their conscious thought streams.

          That's complete nonsense. The brain mostly fills in what you don't look at. When reading at speed, there's a lot on the page we don't look at. But when I slow down enough to read word by word in the forward direction, I get almost no interference from my predictive system unless my mind wanders.

          Cook-book misprint costs Australian pub [bbc.co.uk]

  • by nyet ( 19118 ) on Monday October 25, 2021 @11:41PM (#61927013) Homepage

    Languages that have gender baked into their grammars have extra (redundant) information encoded into every sentence. Our brains use this extra redundant information like error correcting codes to fill in corrupted or noisy sentences (e.g. misspellings, words overheard in a noisy room, etc.)

    https://www.sciencedaily.com/r... [sciencedaily.com]

    All languages have some sort of redundant information built in.

    • Languages that have gender baked into their grammars have extra (redundant) information encoded into every sentence.

      Regardless of gender-sensitive words, it does help to have some redundancy in a language. I have noticed this in experiments with designing computer languages. If you chop out all the redundant declarations, and get a clever compiler to infer what the code means, a typo might result in something that the compiler will accept, but is certainly not what you meant.
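The parent's point about redundant declarations can be sketched in Python (a hypothetical example, not from the comment): without declared fields, a typo silently creates a new attribute, while redundantly declaring the fields up front lets the runtime reject the misspelling, much like an error-correcting code.

```python
class Loose:
    # No declared fields: a typo just creates a new attribute.
    def __init__(self):
        self.counter = 0

class Strict:
    # Redundantly declaring the fields up front means
    # misspellings are rejected instead of silently accepted.
    __slots__ = ("counter",)
    def __init__(self):
        self.counter = 0

loose = Loose()
loose.countre = 1        # typo silently accepted; `counter` is untouched

strict = Strict()
try:
    strict.countre = 1   # typo rejected immediately
except AttributeError as e:
    print("caught typo:", e)
```

Statically typed languages catch the same class of error at compile time; the redundancy is the declaration itself.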

    • by Merk42 ( 1906718 )
      *busts through the door*
      Did someone say "gender"? Well I identify as an attack helicopter! LOLOL!!
  • Christ I hope my brain is not as bad as the autocorrect on phones.
    • I think so. The article said "autocorrect", but you seem to have read it as "autocorrect." To put it in other words that might get past your autocorrect filter, the article is actually about squids.

    • Christ I hope my brain is not as bad as the autocorrect on phones.

      Our brains kind of are that bad, actually. But "bad" is good, when it's the best you can do.

      I'm hearing impaired, and my brain makes some pretty hilarious word guesses sometimes. It puts together a best guess based on the input it can get.

      • If you just look at the sound and try to make it into words, there is just not enough information there. That is why we saw speech understanding plateau after a while. Not so much weak algorithms as an impossible task.

But it is not just prediction. It is taking sounds that could be several different words and figuring out which is the most likely. That involves grammar analysis and common-word-combination analysis, but also understanding the context of what is being discussed. And it is phrases, not ju

  • You can't add more than 2 numbers at a time. You can't language a word "correctly" without the next word, or in a type of "context". Seems right.
  • I got the subject by entering "a new" and randomly hitting words. This sentence is a proposed one for the latest edition of the currents that researchers will be able to use .... lol
  • by mveloso ( 325617 ) on Tuesday October 26, 2021 @12:34AM (#61927135)

They'll find that human brains essentially load X number of high-probability next-word combinations continuously. That's what jokes do: they take unexpected paths or use unexpected combinations of words.

    Wasn't this settled in the 80s or 90s?

This is basically what I meant with my numbers post, but with a difference - jokes and comedy are a specific context that doesn't generalize to all language. What I think they are zeroing in on is a general ruleset of context for all language. Which is analogous to the ruleset of numbers - your brain can't add more than 2 numbers together, as a rule of operations. That's the context of how your brain approaches words.
  • explain the guy in school who would repeat (mutter) what he just said under his breath every time he talked?

I mean, I'm not prone to pure speculation, but I guess stuttering, Tourette's, and other language handling bugs might be tangentially addressed by this avenue of research? Who knows?
  • by AntisocialNetworker ( 5443888 ) on Tuesday October 26, 2021 @03:52AM (#61927399)

    The title is wrong. The article says
    “It’s amazing that the models fit so well, and it very indirectly suggests that maybe what the human language system is doing is predicting what’s going to happen next.”

    • by jd ( 1658 )

      The editor autocorrected it, thus proving the model. Ish.

    • I recall in one of James Burke's CONNECTIONS video, he demonstrated this autocorrect feature. He had 2 people, one would speak from a script quickly and the second would repeat what they just heard. The script had a word missing from a sentence to make it sound odd. That was unknown to the listener or viewer the first time he played back the video of the experimental results. The second time, he slowed the video playback to show the listener had inserted the missing word.

      I thought this was the opposite o
  • in other news, sky reportedly blue; water may be wet but we need a grant to check.

    Fuck's sake.

  • > How do we how know to speak and to read?

    Some of us can, those that can't become /. editors.

    Back on topic... There was a thing a few years back about how you can read words with the mdldie leterts jmbelud up. Autocorrect at its best, I'd say ;-)
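The jumbled-middle-letters effect the parent mentions is easy to reproduce. Here is a quick sketch (word boundaries and punctuation handling deliberately simplified): keep each word's first and last letters fixed and shuffle the interior.

```python
import random

def jumble(word, rng):
    """Shuffle a word's interior letters, keeping first and last fixed."""
    if len(word) <= 3:
        return word  # nothing to shuffle
    middle = list(word[1:-1])
    rng.shuffle(middle)
    return word[0] + "".join(middle) + word[-1]

rng = random.Random(0)
text = "reading scrambled sentences demonstrates autocorrect"
print(" ".join(jumble(w, rng) for w in text.split()))
```

The output is still surprisingly readable, which is the whole point of the demonstration.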

  • I recall listening to an audio recording decades ago (I think AES made it) that removed longer and longer sections of a voice recording. The point of the recording was the initial muted sections were not really noticed by users with the brain just filling in the gaps. The initial gaps would be a single syllable. As the gaps got longer, people would notice, but still fill in the gaps which could be as long as a few words.
  • It's got nothing to do with language. I'll be stunned if they find any language processing like that. That's just what brains do with absolutely everything. The next turn, the next word, the next weather pattern, the next job task, the next key-press, the next opinion, the next behaviour, the next taste, the next trade, the next trend. It's why we have stereotypes and politeness. It's why we have forecasts and fortune tellers. It's why we have astronomers and astrologers. Psychologists and psychics.

    • Good point. Language processing is only a subset of the general organizational processing of the human mind. Nevertheless, I don't think it's misleading to identify this specific kind of processing as language processing. Moreover, it's not merely a matter of the brain. A Slashdot article some years ago demonstrated that some preprocessing is done by the nervous system prior to the signal actually reaching the brain. This is partly why one can misread a sign at a distance and honestly see it as saying one t
I can attest to that -- more times per day the older I get. The one that caught me when I started driving decades ago was having my eyes tell me that the stop sign was near, when it was actually still many seconds away. Felt like an idiot stopping 100 yards from it.

        I'll also add that there can be a half-second delay every time you turn your gaze to something new -- most obvious with a fast-moving clock that seems to freeze for a very long "second" until you realize it's a clock, and then you can't repeat

  • I nvr wld hv gssd. Ths s rly xctng.
  • Oddly enough, your brain may work just like your smartphone's autocorrect feature

I really hope no one's brain is as bad as a smartphone's autocorrect.

  • ``If u cn rd ths u cld gt a gud job''

Those ads were really onto something.

  • Oddly enough, your brain may work just like your smartphone's autocorrect feature.

    Any smartphone autocorrect feature worth having would have toasted that use of the phrase "just like" with a full-on lithium battery fire.

Always draw your curves, then plot your reading.
