Study Urges Caution When Comparing Neural Networks To the Brain (mit.edu)

Anne Trafton writes via MIT News: Neural networks, a type of computing system loosely modeled on the organization of the human brain, form the basis of many artificial intelligence systems for applications such as speech recognition, computer vision, and medical image analysis. In the field of neuroscience, researchers often use neural networks to try to model the same kinds of tasks that the brain performs, in hopes that the models could suggest new hypotheses regarding how the brain itself performs those tasks. However, a group of researchers at MIT is urging that more caution should be taken when interpreting these models.

In an analysis of more than 11,000 neural networks that were trained to simulate the function of grid cells -- key components of the brain's navigation system -- the researchers found that neural networks only produced grid-cell-like activity when they were given very specific constraints that are not found in biological systems. "What this suggests is that in order to obtain a result with grid cells, the researchers training the models needed to bake in those results with specific, biologically implausible implementation choices," says Rylan Schaeffer, a former senior research associate at MIT. Without those constraints, the MIT team found that very few neural networks generated grid-cell-like activity, suggesting that these models do not necessarily generate useful predictions of how the brain works.
"When you use deep learning models, they can be a powerful tool, but one has to be very circumspect in interpreting them and in determining whether they are truly making de novo predictions, or even shedding light on what it is that the brain is optimizing," says Ila Fiete, the senior author of the paper and a professor of brain and cognitive sciences at MIT.

"Deep learning models will give us insight about the brain, but only after you inject a lot of biological knowledge into the model," adds Mikail Khona, an MIT graduate student in physics who is also an author. "If you use the correct constraints, then the models can give you a brain-like solution."
  • Back propagation (Score:5, Insightful)

    by Viol8 ( 599362 ) on Friday November 04, 2022 @05:14AM (#63023651) Homepage

    This is the fundamental low level operating principle of ANNs. However, as far as anyone can tell, the human brain doesn't use it at all, so comparing ANNs to the human brain is like comparing a digital computer to an analogue one. They may produce similar results now and then, but otherwise they have little in common beyond superficially analogous basic components: neurons and transistors.

    • Re:Back propagation (Score:5, Informative)

      by DamnOregonian ( 963763 ) on Friday November 04, 2022 @05:36AM (#63023677)
      Backpropagation is not the fundamental low level operating principle of ANNs.
      It is how you train ANNs.
      Obviously, neuroplasticity is one area with very few applicable analogues between brain matter and ANNs.
      Evolution of course also has its own training regime that's a bit wasteful for ANN training purposes.

      This article isn't about that.
      This article is about unconscious bias in researchers, and warning them against it.
      Researchers, being unconsciously biased by the knowledge of how grid cells worked, inadvertently steered ANNs to form grid cell structures, claiming that they were a natural outcome of all path integration training.

      MIT showed that they're actually the rarest outcome unless you steer training to produce other structures with no biological analogue (single-location-sensitive place cells).
      I.e., path-integration training without bias injection normally creates a path integration network that works nothing like the human brain's.

      The results, however, are the same.
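      To make the train-time/run-time split concrete, here's a minimal sketch (a toy illustration of mine, nothing from the paper): backpropagation, here a hand-derived gradient for a single sigmoid neuron, appears only in the training loop; once the weights are frozen, inference is a plain forward pass.

      ```python
      import math, random

      def forward(w, b, x):
          # a single sigmoid neuron: inference is only this forward pass
          return 1.0 / (1.0 + math.exp(-(w * x + b)))

      # training: backpropagation (a hand-derived gradient) adjusts weights
      w, b, lr = random.random(), 0.0, 0.5
      data = [(0.0, 0.0), (1.0, 1.0)]          # toy input -> target pairs
      for _ in range(2000):
          for x, y in data:
              p = forward(w, b, x)
              grad = (p - y) * p * (1 - p)     # error signal pushed backwards
              w -= lr * grad * x
              b -= lr * grad

      # deployment: forward passes only; backprop never runs again
      print(round(forward(w, b, 0.0), 2), round(forward(w, b, 1.0), 2))
      ```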
      • Re: (Score:3, Interesting)

        by Viol8 ( 599362 )

        "Backpropagation is not the fundamental low level operating principle of ANNs.
        It is how you train ANNs."

        That's like saying the ability to learn isn't a fundamental property of a brain when it's probably its most important ability.

        But if you must split hairs then another difference between a brain and an ANN is that a brain can continue to learn while it operates normally; ANNs in general cannot.

        • Re:Back propagation (Score:5, Informative)

          by DamnOregonian ( 963763 ) on Friday November 04, 2022 @05:50AM (#63023691)

          That's like saying the ability to learn isn't a fundamental property of a brain when it's probably its most important ability.

          It's not like that at all.

          That, in fact, is probably the most fundamental difference between NNs and ANNs- the lack of neuroplasticity. But that just further underlines the fact that backpropagation is not a fundamental low level operating principle of ANNs.

          Once an ANN is "trained" (which in an NN does not necessarily mean trained from information, though it can be), no further backpropagation occurs.

          When training an ANN, you're not just trying to give it "life experience" that a NN would experience. You're also giving it the training of several hundred million years of evolution. Backpropagation is just how that's done in a sane fashion. The neurons don't operate on that principle at all. The selection of their weights does.

          But if you must split hairs then another difference between a brain and an ANN is that a brain can continue to learn while it operates normally; ANNs in general cannot.

          Absolutely.
          But the brain's ability to learn is not a fundamental operating principle of it.
          Different animals have evolved differing levels of learning capacity and neuroplasticity, and brains operate across the entire imaginable spectrum.
          Plenty of NNs within your brain have no learned input whatsoever. That's because neuroplasticity is in fact itself trained by evolution.

        • by Improv ( 2467 )

          There are animal brains out there that might not ever learn or form memories, being prewired for nearly all, perhaps all behaviour.

          (simpler insects are an example)

          • There are animal brains out there that might not ever learn or form memories, being prewired for nearly all, perhaps all behaviour.

            (simpler insects are an example)

            That's quite likely true, but, given that unicellular life has the ability to learn, it's very possible that these insects evolved from ancestors that could learn and then lost that ability in return for a much more specific and specialised set of behaviours arising from their genetic "programming".

            If true, it wouldn't take away from the importance of your suggestion that learning is not always the most important thing, but, just as flight is fundamental to birds but flightless birds

            • by Improv ( 2467 )

              Perhaps it will at least get people thinking about what the term "fundamental" means in this context.

    • by 1s44c ( 552956 )

      That's exactly what I wanted to say. Back propagation is used because it works far better than anything else on modern electronic hardware. It's not biologically plausible at all. There were attempts to use Hebbian learning but it's just not practical on modern electrical computers or GPUs.
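      For contrast, a minimal sketch of a Hebbian update (my own toy example, not anything from a real framework): the rule is purely local - each weight changes based only on its own pre- and post-synaptic activity, with no global error signal propagated backwards, which is part of why it maps so poorly onto GPU-style training.

      ```python
      def hebbian_step(weights, pre, post, lr=0.01):
          # "fire together, wire together": w[i][j] grows only when
          # input j and output i are active at the same time
          return [[w + lr * post[i] * pre[j] for j, w in enumerate(row)]
                  for i, row in enumerate(weights)]

      w = [[0.0, 0.0], [0.0, 0.0]]
      w = hebbian_step(w, pre=[1.0, 0.0], post=[1.0, 1.0])
      print(w)   # only the weights from the active input changed
      ```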

      There are a few other blatant differences between deep learning and real, evolution-produced biological brains:

      Digital electronics uses discrete time steps so all changes happen at the same time. Animal brains definitely don't have coordinated time steps.

      • That's exactly what I wanted to say. Back propagation is used because it works far better than anything else on modern electronic hardware. It's not biologically plausible at all. There were attempts to use Hebbian learning but it's just not practical on modern electrical computers or GPUs.

        Absolutely. But that's also not relevant to the article.
        How a network is trained is not an underlying principle of the network's operation.

        There are a few other blatant differences between deep learning and real, evolution-produced biological brains:

        Oh, there are practically infinite differences. That's not terribly relevant.
        A predictive model rarely exactly mimics that which it models ;)

        Digital electronics uses discrete time steps so all changes happen at the same time. Animal brains definitely don't have coordinated time steps.

        That's not a problem at all.
        Physics in general operates without quantized time, and yet we can simulate the universe to ridiculous precision.
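        A toy sketch of that point (mine, assuming plain Euler integration): discretize the continuous law dx/dt = -x with fixed steps, and the discrete answer converges on the exact continuous one, exp(-1), as the step shrinks.

        ```python
        import math

        def simulate(dt, t_end=1.0):
            x = 1.0
            for _ in range(int(round(t_end / dt))):
                x += dt * (-x)     # discrete step applied to a continuous law
            return x

        exact = math.exp(-1.0)
        for dt in (0.1, 0.01, 0.001):
            print(dt, abs(simulate(dt) - exact))   # error shrinks with the step
        ```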

        Deep learning loves one-hot encoding and having neurons actually mean something. Animal brains are much more sub-symbolic and tolerant of individual neuron failure. Neurons don't mean things in animal brains; patterns of firing do.

        Nonsense.
        Animal brains are more tolerant of individual neuron failure because they have

        • by Viol8 ( 599362 )

          "The largest ANN in existence is GPT-3, and it's still 3 orders of magnitude away from a human brain. 1,000 times smaller."

          Want to have a guess how many neurons a bee has? You know, those little insects that can visually communicate with each other and navigate miles to and from their hive?

          Biological brains are far more powerful not just in practice but in principle than ANNs.

          • Want to have a guess how many neurons a bee has? You know, those little insects that can visually communicate with each other and navigate miles to and from their hive?

            Are you going to try to claim that GPT-3 isn't smarter than a bee?
            How many bees are you aware of that can compose language that passes the Turing test?

            A bee uses somewhere around 14 billion parameters to do its job.
            An ANN tuned to do its job could do it with 1-2 orders of magnitude less.
            ANNs are nearly always more efficient at their job than an NN, because they've been specifically trained to do that job.

            Nature has no training mechanism as efficient as backpropagation.

            • by Viol8 ( 599362 )

              "Are you going to try to claim that GPT-3 isn't smarter than a bee?"

              GPT-3 isn't smart at all and anyone who claims otherwise needs to lay off the Kool-Aid. It's a statistical regurgitator merged with a clever parser.

              "ANNs are nearly always more efficient at their job than an NN, because they've been specifically trained to do that job."

              Define efficient. Run them on biological hardware and they'd crawl.

              "Nature has no training mechanism as efficient as backpropagation"

              You truly are full of shit. Do check out the power requirements for training an ANN.

              • GPT-3 isn't smart at all and anyone who claims otherwise needs to lay off the Kool-Aid. It's a statistical regurgitator merged with a clever parser.

                I suspected this was the root of this discussion for you.
                You have a magical definition of smart that no one else has. How exciting!

                Define efficient. Run them on biological hardware and they'd crawl.

                Work done on a per-neuron-analogue basis.

                You truly are full of shit. Do check out the power requirements for training an ANN.

                And what, pray tell, do you think the power requirements are of 100,000,000 years of evolution?

                You truly are a dumb motherfucker.

                • by Viol8 ( 599362 )

                  "You have a magical definition of smart that no one else has"

                  What's yours then?

                  "And what, pray tell, do you think the power requirements are of 100,000,000 years of evolution."

                  You mean the 100M years (actually more like 1B, but hey, you're not that clued up) that gave rise to humans who designed ANNs? Or did they just pop into existence from nowhere?

                  How much power do you think a bee requires to learn its abilities?

                  • smart (adj) : having or showing a quick-witted intelligence.
                    intelligence (n) : the ability to acquire and apply knowledge and skills.

                    Now, we have already agreed on the fact that an ANN (generally) lacks the ability to modify its network during runtime.
                    However, there is no reasonable argument to be made that the training process and subsequent operation do not demonstrate the ANN "acquiring and applying knowledge and skills".

                    You mean the 100M years (actually more like 1B, but hey, you're not that clued up) that gave rise to humans who designed ANNs? Or did they just pop into existence from nowhere?

                    1 Bya is before the Cambrian Explosion.
                    It was before there was any kind of ne

                    • by Viol8 ( 599362 )

                      "You think that bee learns to bee after it's born. This is nonsensical."

                      Humans don't learn to "human" either. Some stuff is hard coded in the womb/egg/whatever, but bees DO learn where flowers are, and their language skills aren't innate either:

                      https://blogs.illinois.edu/vie... [illinois.edu]

                    • Humans don't learn to "human" either. Some stuff is hard coded in the womb/egg/whatever, but bees DO learn where flowers are

                      A bee doesn't learn how to find flowers. A bee learns where flowers are. How they find the flowers is built into their evolutionary NN training.
                      An ANN can accomplish the same feat (indeed, the article we're discussing is literally about that)

                      and their language skills aren't innate either:

                      A bee doesn't learn to wiggle its ass. It learns to adapt its ass wiggling NN circuitry until it starts working.
                      An ANN can precisely reproduce this behavior as well.

                      When you can get a bee and an aphid to communicate, then we'll be having a real discussion.

                      You've r

                    • How the fuck does that have anything to do with what we're discussing, you mentally defective shit-for-brains?

                      How long did it take the universe to form the atoms in neurons? Hrr Hrr.
                • > You have a magical definition of smart that no one else has. How exciting!

                  You are not discussing; you are insulting.

                  • Both, actually.
                    Try again.

                    This is how I handle a hostile person who tries to press unsubstantiated claims with, "You truly are full of shit."
        • "Every neuron in an animal brain can connect to thousands of other neurons."

          Synaptic connections are not the only way that the brain communicates internally. Neurons communicate with electric fields as well. Physical connections and proximity do not limit connectivity in the human brain the way one would think from just observing synapses.

          Furthermore, the basic processing component of the human brain is not the neuron. The basic processing component of the brain is repeated in each dendrite arm. There are signal processing gates that communicate along the pathway of the dendrite, and they are analog, transferring information in a range of voltages.

          • Synaptic connections are not the only way that the brain communicates internally. Neurons communicate with electric fields as well. Physical connections and proximity do not limit connectivity in the human brain the way one would think from just observing synapses.

            I'm highly skeptical of this claim.
            The voltages in the brain pretty much preclude such behavior.
            They definitely communicate electrically via charge carriers (ions) though.

            It sounds to me like you're claiming that disparate parts of the brain communicate via electric fields.
            I would need to see some very solid evidence to believe that. Have you any citations?

            Furthermore, the basic processing component of the human brain is not the neuron. The basic processing component of the brain is repeated in each dendrite arm. There are signal processing gates that communicate along the pathway of the dendrite, and they are analog, transferring information in a range of voltages.

            Indeed. That's why I used connections for all of my scale comparisons, not neurons.

            • Skepticism is good.

              https://cordis.europa.eu/artic... [europa.eu]

              https://www.sciencealert.com/s... [sciencealert.com]

              There are many other articles and observations, and it has been observed in one form or another for more than a decade.

              • https://cordis.europa.eu/artic... [europa.eu]

                The claim here is that the aggregate field (which they measure to be ~mV/mm - considerably larger than I thought it was) could create a background that altered the firing potential of neurons within that field. I can buy that. And while I'd judge your claim as technically true, I'd argue that "communication" is a strong word for that phenomenon. Rather, I'd classify it as a general sensitivity of neurons to the aggregate neuronal activity in their general area.

                https://www.sciencealert.com/s... [sciencealert.com]

                This one claims 2-6 mV/mm, even more impressive.

      • However, I don't mean to imply that evolution-created biological brains are the only way to get to intelligence, only that serious differences between these and our artificial creations exist.

        There's nothing wrong with that implication. It's a clear scientific theory which could be proved wrong ("is falsifiable") by creating an electronic brain which was intelligent. The key problem being that we don't have a proper working definition of what being "intelligent" means much beyond "I know it if I see it". There's clearly something pretty deep missing in current "deep learning" compared to actual biological brains and I think the need for back propagation looks like a good hint for the kinds of are

        • There's nothing wrong with that implication.

          Yes, there is.
          There's no scientific evidence to back it up.

          It's a clear scientific theory which could be proved wrong ("is falsifiable") by creating an electronic brain which was intelligent.

          This sentence is confusing.
          The Theory of Inability to Make Artificial Intelligence? I look forward to your citations on that one.
          There is no such theory.

          Some have hypothesized for various reasons, but none with any sound reasoning.

          Right now, the most obvious difference between natural neural networks and their artificial counterparts is a matter of scale.
          Our most advanced system, which is capable of some pretty stunning emergent behavior, li

          • by Viol8 ( 599362 )

            "some pretty stunning emergent behavior"

            What, like GPT-3? Put most of the text on the internet through a Markov model and it would produce some impressive-sounding text.

            "without them knowing they're talking to an artificial neural network"

            It's pretty easy to tell with these systems. Continue a single thread that builds on what's already been said - i.e., requires historical context of the conversation so far - and watch all these systems fall down hard.

            They're not as smart as they appear, and ironically you're not as smart as you think you are.

            • What, like GPT-3? Put most of the text on the internet through a Markov model and it would produce some impressive-sounding text.

              Of course it can. Human brain cognition can be modeled very well with Hidden Markov Models.

              Here's where I educate you about the meaning of the word emergent, though.
              There is no explicit programming for a Markov model in an ANN.
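              For anyone who hasn't met one, here's a word-level Markov chain in a few lines (a toy example of my own): it can only recombine transitions it has actually seen, which is exactly the "statistical regurgitator" property at issue.

              ```python
              import random
              from collections import defaultdict

              corpus = ("the brain is a network the network is not "
                        "a brain the model is a network").split()

              chain = defaultdict(list)
              for a, b in zip(corpus, corpus[1:]):
                  chain[a].append(b)       # record observed word transitions

              word, out = "the", ["the"]
              for _ in range(8):
                  word = random.choice(chain[word])   # sample the next word
                  out.append(word)
              print(" ".join(out))
              ```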

              It's pretty easy to tell with these systems. Continue a single thread that builds on what's already been said - i.e., requires historical context of the conversation so far - and watch all these systems fall down hard.

              Indeed. There will always be a test to look for a specific weakness in the system, just as there are tests to identify certain weaknesses in human cognition as well.

              They're not as smart as they appear, and ironically you're not as smart as you think you are.

              You again with your magical definition of smart.

              As for my intelligence? I'm pretty sure it's clear to anyone follow

              • by Viol8 ( 599362 )

                "Of course it can. Human brain cognition can be modeled very well with Hidden Markov Models."

                HMMs can't extrapolate and/or come up with totally original ideas; they just work with the data they've got and mash it up a bit. Ditto ANNs.

                • HMMs can't extrapolate and/or come up with totally original ideas; they just work with the data they've got and mash it up a bit. Ditto ANNs.

                  There is no evidence that the human brain can come up with a "totally original idea".
                  You seem to be implying that brains do something more than just work with the data they've got and mash it up a bit. There is also no evidence of this.

                  • by Viol8 ( 599362 )

                    You can argue the works of Shakespeare, music and similar are derived in some way from previous works, but certain mathematical theories people have come up with have little prior art; they're genuine originals. When an ANN can do that rather than variations on a theme, THEN it'll have intelligence.

                    • Mathematics cannot be a genuine original. What mathematics is modeling is the genuine original. The language of math just provides a way to describe what is observed in a reproducible way.

                    • by Viol8 ( 599362 )

                      "Citation needed.
                      All available evidence is to the contrary."

                      Start with Pythagoras and work from there.

                    • What specifically about Pythagoras was non-evolutionary?
          • You leave me a bit in the position of Devil's advocate, which is difficult since I have argued against the correctness of exactly these theories on here. I'll do my best.

            The Theory of Inability to Make Artificial Intelligence? I look forward to your citations on that one.
            There is no such theory.

            The exact theory most often expounded is the inability to make a computational model of the mind, which is almost, but not quite, the same thing.

            Let's start here. [stanford.edu]

            There are books and books and books about this, including some of the most famous texts in cognitive science, such as "The Emperor's New Mind" [wikipedia.org] by Roger Penrose, who has actual definite claims to be a scientist and presents considerable volumes of evidence in the claim that this is a theory rather than a hypothesis.

            • There are books and books and books about this, including some of the most famous texts in cognitive science, such as "The Emperor's New Mind" [wikipedia.org] by Roger Penrose, who has actual definite claims to be a scientist and presents considerable volumes of evidence in the claim that this is a theory rather than a hypothesis.

              There is only one thing that can be inferred from the Chinese Room argument- and that is the fact that the Turing Test can't tell if something is intelligent.
              Something I think all people generally agree with (particularly since the Turing test is long since beaten at this point)

              The broader interpretation by some philosophers- that the Chinese Room argument somehow refutes the idea that the brain is more than programmatic- is based on fallacious reasoning.
              In particular, it literally makes the base

              • A good working definition of free will is the ability to use recursion. It fits the emergent phenomena we see in the universe, in that it has scale-independent symmetry. By shifting perspective we can use the same computational power we have always used to look at smaller or larger viewpoints of the same concept. And we can do so infinitely in either direction.

                Furthermore, humans, at least, can use their own cognition to "re-wire" their own brain, creating our own stimulus internally, from which to learn and remodel our own brain structure.

                • A good working definition of free will is the ability to use recursion.

                  Is it? Because I can write software that does that...

                  Furthermore, humans, at least, can use their own cognition to "re-wire" their own brain, creating our own stimulus internally, from which to learn and remodel our own brain structure.

                  Indeed, they can. But conceptually speaking- this isn't difficult, even for the artificial variety. What's difficult there is getting a structure that rewires itself in a way that's helpful.
                  That's the origin of my claim that it appears that even neuroplasticity is an emergent quality of biological neural networks (i.e., we evolved the ability to cognitively rewire), since we are not the only creatures that do it, and not all creatures with "brains" do it.

              • There is only one thing that can be inferred from the Chinese Room argument- and that is the fact that the Turing Test can't tell if something is intelligent.

                I personally don't think that the Chinese Room argument has any real scientific basis. It's an angels on pinheads kind of thing, which basically starts from the (hidden) supposition that there is such a thing as a soul, machines don't have it and so machines can't be intelligent. There's a deep sophistry in the statement that the intelligence can't be in the system.

                the Turing Test can't tell if something is intelligent.

                The Turing test has some useful ideas. The original version does fail since there's not enough specificity. For the Turing test to work nowadays, you need to have a trained tester who knows how to test for key. I would argue that that is still useful and its failings are different from the ones people think they are. Systems like GPT-3 can still be called out by someone who understands them and in searching for that way of calling them out, we are finding key new things that differentiate real intelligence from simulacrum.

                • The Turing test has some useful ideas. The original version does fail since there's not enough specificity. For the Turing test to work nowadays, you need to have a trained tester who knows how to test for key. I would argue that that is still useful and its failings are different from the ones people think they are. Systems like GPT-3 can still be called out by someone who understands them and in searching for that way of calling them out, we are finding key new things that differentiate real intelligence from simulacrum.

                  Oh I agree entirely on its usefulness.
                  But it suffers from the one conclusion you can really glean from the Chinese Room argument- no Turing test can prove intelligence. You cannot disprove simulation.

                  Knowing a system, you can design more and more sophisticated tests to find deviations from a human, but you can, at the same time, devise more and more sophisticated tests to prove humans aren't intelligent using the same fallacious reasoning.

                  The Turing test is useful. A test of intelligence it is not.

                  I

          • by jvkjvk ( 102057 )

            >Right now, the most obvious difference between natural neural networks and their artificial counterparts is a matter of scale.

            Lol, no. Right now, the MOST obvious difference is that they don't work ANYTHING alike. You can claim numbers all you want but if the basic unit doesn't even represent the same thing in each system, THAT'S the biggest difference.

            • Lol, no. Right now, the MOST obvious difference is that they don't work ANYTHING alike. You can claim numbers all you want but if the basic unit doesn't even represent the same thing in each system, THAT'S the biggest difference.

              The basic unit does not need to represent the same thing in each system.
              One is an alternative implementation of the other.

              When discussing the disparity between the two implementations, the details of unit implementation are not important.
              So, as said, the primary difference in the implementations is scale.
              When scale parity is reached, then we can discuss whether or not the individual calculating apparatuses matter.

            • If I had not already commented I would mod you up.

              I think the discussion of how the two models converge and are similar and endlessly comparing them with misleading metrics and definitions is a masturbatory and egocentric way of completely missing the opportunity they present. It is where the two systems diverge, and how they differ that can present us with something novel.

              They are another set of eyes that see differently, and in doing so can pick up things we cannot. But they are not so different that their descriptions are completely inscrutable.

              • I think the discussion of how the two models converge and are similar and endlessly comparing them with misleading metrics and definitions is a masturbatory and egocentric way of completely missing the opportunity they present. It is where the two systems diverge, and how they differ that can present us with something novel.

                I don't implicitly disagree with this statement.
                The differences between the systems are as much a part of their power as their similarity.
                Ultimately, artificial systems have the potential to form superior networks with superior neuronal functionality. They're not at that point yet, of course. Not even close. Neurons are still vastly more functional than the simplified parameterization of artificial networks.

                They are another set of eyes that see differently, and in doing so can pick up things we cannot. But they are not so different that their descriptions are completely inscrutable.

                I think I'd agree with this assessment entirely.

                Artificial networks do not seek to be a genuine r

    • Re:Back propagation (Score:4, Informative)

      by narcc ( 412956 ) on Friday November 04, 2022 @08:09AM (#63023931) Journal

      No.

      Back propagation is absolutely not necessary. The only reason we use it is that it's often faster than other methods, but it is in no way essential. As far as the actual operation of the NN after training, back propagation is completely irrelevant.

      NNs are absolutely nothing like the human brain. The comparison was stretched from the very beginning and it's absolutely absurd that it has survived this long. It comes really close to outright fraud.

      Here's an interesting fact about ordinary feed-forward neural networks that you probably don't know: they're not Turing complete. They have very little computational power. In fact, any such network can be conceptually reduced to a lookup table as all they can do is map input states to output states. Not very exciting, is it? Still, this is the same kind of network used in those text-to-drawing programs! You can get a lot of mileage out of them, but they're not magic.
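      The lookup-table reduction is easy to see in code (a toy sketch of mine, assuming binary inputs): a fixed feed-forward net over n binary inputs defines a function on just 2^n points, so exhaustive tabulation replaces it exactly.

      ```python
      from itertools import product

      def relu(v):
          return max(0.0, v)

      def forward(x):
          # a fixed 2-2-1 ReLU network that happens to compute XOR
          h1 = relu(x[0] + x[1])
          h2 = relu(x[0] + x[1] - 1.0)
          return h1 - 2.0 * h2

      # enumerate every possible input once; the dict now replaces the net
      table = {bits: forward(bits) for bits in product((0, 1), repeat=2)}
      print(table)   # {(0, 0): 0.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 0.0}
      ```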

      As far as the NN analogy goes, it's trivial to come up with an equivalent structure that doesn't look anything like a poor man's idea of a brain. Give it a try and be impressed with yourself. Odds are good that you'll even accidentally come up with something with more computational power.

      As for the rest, you might be interested in Lokhorst's somewhat famous paper Why I Am Not a Super Turing Machine [gjclokhorst.nl]. It's a short and easy read that seems to be exactly the sort of thing you'd be interested in.

      • NNs are absolutely nothing like the human brain.

        They're approximations of biological neural networks, to varying degrees of faithfulness. To say, "absolutely nothing like" is, to quote someone, "really close to outright fraud."

        Here's an interesting fact about ordinary feed-forward neural networks that you probably don't know: they're not Turing complete.

        Unsure how that's relevant.
        There's no evidence that the human brain is, either.

        Can a sequence of neurons be made Turing complete? Maybe? Almost certainly?
        Can an artificial neural network be made Turing complete? Absolutely. Trivially, in fact.
        In the strictest sense, a feed-forward network isn't, but that's simply because "feed

        • Turing completeness isn't provable for a human brain, nor is it needed to model one.

          Errrrr.... what? It's actually pretty trivial to prove that a human brain is Turing Complete. All you have to do is explain to someone how Turing Machines work and ask them how they'd simulate the running of an arbitrary TM with a given input. If they come up with an answer then they have proven that their brain is Turing Complete.
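          For concreteness, here's roughly what "simulate an arbitrary TM from its transition table" means (a toy sketch of my own; the bit-flipping machine is a made-up example program): a transition table, a tape, and a loop.

          ```python
          def run_tm(rules, tape, state="s", pos=0, max_steps=100):
              tape = dict(enumerate(tape))      # unbounded tape as a dict
              for _ in range(max_steps):
                  if state == "halt":
                      break
                  sym = tape.get(pos, "_")
                  state, write, move = rules[(state, sym)]
                  tape[pos] = write
                  pos += 1 if move == "R" else -1
              return "".join(tape[i] for i in sorted(tape) if tape[i] != "_")

          rules = {              # (state, symbol) -> (new state, write, move)
              ("s", "0"): ("s", "1", "R"),
              ("s", "1"): ("s", "0", "R"),
              ("s", "_"): ("halt", "_", "R"),
          }
          print(run_tm(rules, "0110"))   # -> 1001
          ```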

          • Errrrr.... what? It's actually pretty trivial to prove that a human brain is Turing Complete. All you have to do is explain to someone how Turing Machines work and ask them how they'd simulate the running of an arbitrary TM with a given input.

            This is untrue. Describing the function of something is not simulating it.
            They must be able to actually simulate it. Further, they must be able to simulate all Turing-computable functions, in any arbitrary arrangement.

            At first glance, it's easy to say, "of course a human could do that", and a human with a pen and a piece of paper certainly could.
            But there's no evidence I can think of that suggests that the human brain can faithfully simulate such a thing. It simply doesn't store arbitrarily symbolic info

            • At first glance, it's easy to say, "of course a human could do that", and a human with a pen and a piece of paper certainly could.

              That's all you need to establish that a brain is Turing Complete really. For something to be Turing Complete it just has to be able to emulate the actions of some description of a TM with some input. That's it. The brain figures out how to do the emulation, and in this example the brain calls upon the body to control pen and paper to keep track of the state. Whether or not the act

              • That's all you need to establish that a brain is Turing Complete really. For something to be Turing Complete it just has to be able to emulate the actions of some description of a TM with some input. That's it.

                To be Turing-complete, it must be able to simulate the function of every possible Turing computation to infinite scale, in principle.
                The fact that the human brain performs, at best, as an unbounded non-deterministic Turing machine strongly indicates it cannot be Turing complete.

                To prove either way is probably impossible, though.

                It's also very easy to demonstrate that the brain is not Turing complete in all conditions.
                For example, was a brain Turing complete before writing was discovered? Was it turin

                • To be Turing-complete, it must be able to simulate the function of every possible Turing computation to infinite scale, in principle.

                  And other than a person getting bored, or forgetting something, or losing track of where they are, or running out of space or life, they can. Again, this is all that's required.

                  Sorry, but this is a math problem that you're thinking of like an engineering problem. It's just. Not. The point.

                  I've heard other people claim that if you can simulate any Turing machine, you're a t

  • by davide marney ( 231845 ) on Friday November 04, 2022 @05:34AM (#63023675) Journal

    Some are just more usefully wrong than others.

    • Only a tiny fraction of ANNs are created as brain models in the first place. The vast majority are models of language, or models of imagery, etc. - and of course they are wrong models of those things to a degree, just as the human brain is a wrong model of the world we live in. But nobody thinks of AlphaZero or GPT-3 etc. as brain models in the first place.
      • But nobody thinks of AlphaZero or GPT-3 etc. as brain models in the first place.

        I think you'll find a thread just above where a poster is positing more or less exactly that.

        • Well, objectively the VAST majority of money spent on developing neural nets is devoted to improving performance on tasks of interest such as recognizing or generating imagery or text, and the benchmarks for those tasks make no specific reference to brain similarity. The niche of neuroscience within NNs is small. NNs were not of great interest until they started outperforming the previous incumbents at these tasks (e.g. SVM, HMM, and so on). To me this is the strongest argument of what ANNs are "su
  • Every technological innovation has led to theories of how the brain works. First, it was hydraulics because of aquifer technology, then there were the telegraph models of how the brain works, and now computer models. But the CNS and nervous system are like all organ systems, they are a part of a living organism and not just some box in the corner of the lab running calculations. When you study in detail how the brain works it becomes obvious that being a "living organism" is something very different than a

    • Based on what I've seen and read, the closest I can come to would be something like an FPGA, but that's totally analog (like the analog computers of old, using op-amps and other analog components), and reconfigurable on-the-fly. It would also of course have to be on a massive scale.
    • by cstacy ( 534252 )

      Computers can model things, but they aren't those things and never will be like them. They work on different principles entirely.

      Taken literally, that's obviously false, since a brain is a "computer" of some sort. And software is infinitely malleable, even when running on digital computers. We can "simulate" or "model" anything, including the same exact processes of a brain. In which case there is no functional difference between them.

      But I assume you mean: "The software we know how to create today will never be like a (e.g. human) brain".
      That's true, since we don't know nearly enough about the brain to simulate or even model it. And

  • We have been describing the brain using the technology of the day. Consider how the word "multitasking" came into prominence as a description for human activity after we started using it to describe computer operations. Even though humans do not multitask well at all, we still use the word. It is drawing from the technology of the day.

    In older 18th- and 19th-century books, the brain is illustrated with images of steam engines and gears. Again, the dominant technology of the day.

    Y
    • by lurcher ( 88082 )

      "Even humans do not multitask well at all"

      Not true. I am successfully typing this post while breathing, regulating my body temperature, hearing sounds in the background, and any number of other tasks.

      • by Hasaf ( 3744357 )
        Yes, I should have added, ". . . while performing high cognition activities." The obvious one is talking on the phone while driving. The research has shown that using hands-free does not significantly reduce the impairment. The reason is simple: in most cases, the very conversation is a high cognition activity.

        Another example closer to where I am, a student just walked up and asked me a question about a variable in a program. I clearly needed to stop typing and answer his question. Not to try to perform
  • “Researchers urge caution when comparing vanillin to actual vanilla extract.”
  • ...counts for computation in the brain. Add hormones. Add various organ interconnections. Add gut flora.

    Human cognition ultimately will have to be modeled at the atomic level, for most of an entire body (you can probably simulate the body of a quadriplegic, and skip arms and legs).

    Biology is doing something way more complex than we imagine.

  • AI is missing some crucial part of how the human mind works, and has been for decades.
    Making neural networks and other simulations run ever faster or trying to throw a database of facts at an AI was never going to substitute for that missing part.

    Do AI researchers recognise this, or do they just think they need to train their models better?
  • In other words: we have no idea how brains really work, and the so-called 'AI' they keep trotting out really is, overall, crap. Just like I've been saying.
    Try again, humans. Maybe you'll get it right one of these centuries.
    • That's not totally fair. We may not get to general intelligence, but we have already got all kinds of useful things out of AI research. DALL-E does some really amazing stuff that people believed was impossible a few years ago. (Almost) self-driving Teslas are pretty amazing. Reading hand-written letters is pretty amazing.

  • "Neural networks" have nothing to do with the brain. Just throw caution to the wind, and reject any claims to the contrary by PR flacks and so-called know-nothing "journalists."
