Study Urges Caution When Comparing Neural Networks To the Brain (mit.edu) 167
Anne Trafton writes via MIT News: Neural networks, a type of computing system loosely modeled on the organization of the human brain, form the basis of many artificial intelligence systems for applications such as speech recognition, computer vision, and medical image analysis. In the field of neuroscience, researchers often use neural networks to try to model the same kinds of tasks that the brain performs, in the hope that the models could suggest new hypotheses about how the brain itself performs those tasks. However, a group of researchers at MIT is urging that more caution be taken when interpreting these models.
In an analysis of more than 11,000 neural networks that were trained to simulate the function of grid cells -- key components of the brain's navigation system -- the researchers found that neural networks only produced grid-cell-like activity when they were given very specific constraints that are not found in biological systems. "What this suggests is that in order to obtain a result with grid cells, the researchers training the models needed to bake in those results with specific, biologically implausible implementation choices," says Rylan Schaeffer, a former senior research associate at MIT. Without those constraints, the MIT team found that very few neural networks generated grid-cell-like activity, suggesting that these models do not necessarily generate useful predictions of how the brain works. "When you use deep learning models, they can be a powerful tool, but one has to be very circumspect in interpreting them and in determining whether they are truly making de novo predictions, or even shedding light on what it is that the brain is optimizing," says Ila Fiete, the senior author of the paper and a professor of brain and cognitive sciences at MIT.
"Deep learning models will give us insight about the brain, but only after you inject a lot of biological knowledge into the model," adds Mikail Khona, an MIT graduate student in physics who is also an author. "If you use the correct constraints, then the models can give you a brain-like solution."
Back propagation (Score:5, Insightful)
This is the fundamental low level operating principle of ANNs. However, as far as anyone can tell, the human brain doesn't use it at all, so comparing ANNs to the human brain is like comparing a digital computer to an analogue one. They may produce similar results now and then, but they have little else in common beyond their respective basic components: neurons and transistors.
Re:Back propagation (Score:5, Informative)
It is how you train ANNs.
Obviously, neuroplasticity has very few applicable analogues between brain matter and ANNs.
Evolution of course also has its own training regime that's a bit wasteful for ANN training purposes.
This article isn't about that.
This article is about unconscious bias in researchers, and warning them against it.
Researchers, being unconsciously biased by the knowledge of how grid cells worked, inadvertently steered ANNs to form grid cell structures, claiming that they were a natural outcome of all path integration training.
MIT showed that they're actually the rarest outcome, unless you steer training to produce other structures with no biological analogues (single-location sensitive place cells).
I.e., path-integration training without bias injection normally creates a path integration network that works nothing like the human brain's.
The results, however, are the same.
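The path-integration task the grid-cell papers train on can be sketched in a few lines. This is a hypothetical toy of my own, not the papers' actual architecture: it only shows that the task itself (turning a stream of velocities into positions) is solvable by plain accumulation, with nothing grid-cell-like forced by the task.

```python
# Hypothetical sketch of a path-integration task like the one grid-cell
# models are trained on: given 2-D velocities, recover positions.
# Plain accumulation solves it; nothing in the task demands grid cells.
import numpy as np

rng = np.random.default_rng(0)

def path_integration_batch(steps=50, batch=8):
    """Random 2-D velocities and the positions they integrate to."""
    v = rng.normal(scale=0.1, size=(batch, steps, 2))  # velocity per step
    pos = np.cumsum(v, axis=1)                         # ground-truth path
    return v, pos

def integrate(v):
    """A trivial 'network': identity recurrence that sums its input."""
    state = np.zeros((v.shape[0], 2))
    out = []
    for t in range(v.shape[1]):
        state = state + v[:, t]   # h_t = h_{t-1} + v_t
        out.append(state.copy())
    return np.stack(out, axis=1)

v, pos = path_integration_batch()
pred = integrate(v)
print(np.allclose(pred, pos))  # True: the task alone doesn't force grid codes
```

Whether a trained network develops grid-like units on top of this is exactly the question the MIT analysis probes.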
Re: (Score:3, Interesting)
"Backpropagation is not the fundamental low level operating principle of ANNs.
It is how you train ANNs."
That's like saying the ability to learn isn't a fundamental property of a brain, when it's probably its most important ability.
But if you must split hairs, then another difference between a brain and an ANN is that a brain can continue to learn while it operates normally; ANNs in general cannot.
Re:Back propagation (Score:5, Informative)
That's like saying the ability to learn isn't a fundamental property of a brain, when it's probably its most important ability.
It's not like that at all.
That, in fact, is probably the most fundamental difference between NNs and ANNs: the lack of neuroplasticity. But that just further underlines the fact that backpropagation is not a fundamental low level operating principle of ANNs.
Once an ANN is "trained" (which in an NN does not necessarily mean trained from information, though it can be), no further backpropagation occurs.
When training an ANN, you're not just trying to give it "life experience" that a NN would experience. You're also giving it the training of several hundred million years of evolution. Backpropagation is just how that's done in a sane fashion. The neurons don't operate on that principle at all. The selection of their weights does.
But if you must split hairs then another difference between a brain and an ANN is a brain can continue learn while it operates normally, ANNs in general cannot.
Absolutely.
But the brain's ability to learn is not a fundamental operating principle of it.
Different animals have evolved differing levels of learning capacity and neuroplasticity, and brains operate across the entire imaginable spectrum.
Plenty of NNs within your brain have no learned input whatsoever. That's because neuroplasticity is in fact itself trained by evolution.
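The split the parent describes (forward pass as operation, backprop as weight selection) is easy to see in a toy example. This is a generic illustration I'm adding, assuming nothing beyond a single linear neuron:

```python
# One weight, trained by gradient descent on squared error. The forward
# pass y = w*x is the neuron's "operation"; the gradient step is the
# "training". After training, inference never touches the gradient again.
w = 0.0
lr = 0.1
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # targets follow y = 2x

for _ in range(200):
    for x, target in data:
        y = w * x                     # forward pass
        grad = 2 * (y - target) * x   # dLoss/dw, the backprop part
        w -= lr * grad                # weight update

print(round(w, 3))  # converges to 2.0
```

Delete the training loop and the forward pass still runs unchanged, which is the sense in which backprop is not an operating principle.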
Re: (Score:2)
"You've gotten rude. You're starting to feel embarrassed."
Don't get patronising, sonny; it just makes you sound like a little oik. I'm not wrong, you are. Learning is a fundamental function of biological brains, and your saying otherwise doesn't change that fact.
"Learning is part of the emergent functionality of the brain,"
No more so than learning in ANNs, because that's how they're both built.
"Claiming a thing does not make it true."
You'd know all about that.
"What? lol."
It's plain English; try reading it again.
Re: (Score:2)
Don't get patronising, sonny; it just makes you sound like a little oik. I'm not wrong, you are. Learning is a fundamental function of biological brains, and your saying otherwise doesn't change that fact.
You can't back up that assertion.
No more so than learning in ANNs, because that's how they're both built.
You're conflating different kinds of learning.
The training of an ANN more closely mimics evolution than neuroplasticity.
That's a pretty elementary mistake.
You'd know all about that.
Uh, "good one?"
It's plain English; try reading it again.
It's plain English alright, but it's devoid of any substance.
"Go learn biology, bro"
Given it has to follow the laws of physics, then yes. But ALL neurons can change their weightings; none are fixed.
Another stupid statement devoid of substance.
Neurons do not have simple weightings.
The capacity of any individual neuron to change the characteristics of their polarization potentials is based upon the epigenetics of tha
Re: (Score:2)
"You can't back up that assertion."
Any biologist can, and so can basic scientific fact: without learning, biological brains are useless lumps of matter.
"Some are highly plastic, some aren't at all."
They're all plastic to some degree.
"You have lost this argument utterly and completely."
No son. I called you out on your BS and you can't handle it.
Re: (Score:2)
Any biologist can, and so can basic scientific fact: without learning, biological brains are useless lumps of matter.
You think... that brains learn how to beat your heart?
They're all plastic to some degree.
You cannot back up this assertion.
No son. I called you out on your BS and you can't handle it.
You have made assertions that cannot be supported, and you have conflated very basic concepts in neurocognition to draw predictably bad conclusions. That is not calling someone out on their bullshit.
Re:Back propagation (Score:4, Insightful)
Get a room you two.
It's an informative argument to watch, although it would be better without the vitriol.
Re: (Score:2)
although it would be better without the vitriol.
Agreed. I was unnecessarily triggered by the "No shit, sherlock" comment.
We pretty much failed to reel it back in from there.
Re: (Score:2)
It's not a controversial claim. I think most people would agree, at least for humans.
Now that's a pretty controversial claim. A single human's DNA is about 725 MB. Current NN language models are around 1T p
Re: (Score:2)
This is definitely true for any father of toddlers!
Re: (Score:2)
Lots of people talk about shit they don't know anything about.
You are one of them. You've embarrassed yourself in more than one NN thread.
Actually, neither one of you has a clue. It's like watching a couple middle school kids argue about quantum mechanics.
Re: (Score:3)
Given I've studied real neurons, I have more of a clue than he does, and I suspect more than you as well.
Re: (Score:2)
Whatever you know about neurons doesn't really matter, does it? The topic here is neural networks, and it's pretty obvious that you don't know shit about those.
Oh, and you really don't want to play the credential game with me. I will easily win. But good on you for trying to bully me into thinking that your mindless ranting was in some way authoritative. It lets me know exactly how much weight to give your "opinions" in the future.
Re: (Score:2)
Otherwise, your trolling is unimpressive.
Re: (Score:2)
Just one? Okay.
They're [neural networks] approximations of biological neural networks, to varying degrees of faithfulness.
Now go be completely ignorant somewhere else. Some idiot might repeat your nonsense.
Re: (Score:2)
Totally agree about trying to keep this debate civil. That's very difficult for some though because this stuff gets pretty close to peoples
Neuroplasticity in the brain is limited.
What life experience do you think trains your medulla oblongata?
Unicellular organisms, and not just slime moulds, are able to learn. It's pretty clear that animals that have brains that haven't really extended beyond the common ancestor that came up with the medulla oblongata (reptiles e.g.) are also able to learn. Even if we just accepted the theory that the human medulla oblongata cannot learn, for the sake of argument, could that not be a specific evolution in humans where the ability to learn that used to exist there has been lost and replaced with learning in the higher parts of the brain?
Re: (Score:2)
That's very difficult for some though because this stuff gets pretty close to peoples ..
Sorry, didn't finish that sentence. That's very difficult for some though, because this stuff gets pretty close to people's feelings about self, especially religious feelings. It might be good to try to be a bit tolerant and calm, and help the debate move towards civility by not overreacting to every slight.
Re: (Score:2)
Unicellular organisms, and not just slime moulds, are able to learn.
That's a bit of a stretch.
What they can do is habituate, which is something biology has had a handle on for the last 3 billion years.
Receptors all over the body regulate automatically (for better or worse) to account for the amount of signaling chemicals in their environment.
I'm not sure it's appropriate to call that learning.
I will grant you that it's debatable though.
It's pretty clear that animals that have brains that haven't really extended beyond the common ancestor that came up with the medulla oblongata (reptiles e.g.) are also able to learn.
Of course they are.
The discussion wasn't about whether or not animals containing non-learning structures can learn.
Even if we just accepted the theory that the human medulla oblongata cannot learn, for the sake of argument, could that not be a specific evolution in humans where the ability to learn that used to exist there has been lost and replaced with learning in the higher parts of the brain?
Sure could.
However,
Re: (Score:2)
There are animal brains out there that might not ever learn or form memories, being prewired for nearly all, perhaps all behaviour.
(simpler insects are an example)
Re: (Score:2)
There are animal brains out there that might not ever learn or form memories, being prewired for nearly all, perhaps all behaviour.
(simpler insects are an example)
That's quite likely true, but, given that unicellular life has the ability to learn, isn't it very possible that these insects have evolved from ancestors that had the ability to learn and have since lost it, in return for a much more specific and specialised set of behaviours arising from their genetic "programming"?
If true, it wouldn't take away from the importance of your suggestion that learning is not always the most important thing, but, just as flight is fundamental to birds but flightless birds
Re: (Score:2)
Perhaps it will at least get people thinking about what the term "fundamental" means in this context.
Re: (Score:3)
That's exactly what I wanted to say. Back propagation is used because it works far better than anything else on modern electronic hardware. It's not biologically plausible at all. There were attempts to use Hebbian learning but it's just not practical on modern electrical computers or GPUs.
There are a few other blatant differences between deep learning and real evolution produced biological brains:
Digital electronics uses discrete time steps so all changes happen at the same time. Animal brains definitely don't have coordinated time steps.
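For contrast with backprop, the Hebbian rule mentioned above is purely local: a weight grows with the product of pre- and postsynaptic activity, with no error signal propagated backwards. A toy sketch of my own, not taken from any of the papers under discussion:

```python
# Hebbian update: dw = eta * pre * post. With the postsynaptic cell driven
# by input 0, only the weight for input 0 grows systematically; the others
# accumulate zero-mean noise. No backward pass, no labels.
import numpy as np

rng = np.random.default_rng(1)
samples = rng.normal(size=(500, 3))  # presynaptic activity over time
w = np.zeros(3)
eta = 0.01

for pre in samples:
    post = pre[0]          # postsynaptic response, driven by input 0
    w += eta * post * pre  # local pre*post update

print(np.argmax(np.abs(w)))  # weight 0 dominates
```

The rule works from correlations alone, which is also why it scales so poorly on GPUs compared to batched backprop.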
Re: (Score:2)
That's exactly what I wanted to say. Back propagation is used because it works far better than anything else on modern electronic hardware. It's not biologically plausible at all. There were attempts to use Hebbian learning but it's just not practical on modern electrical computers or GPUs.
Absolutely. But that's also not relevant to the article.
How a network is trained is not an underlying principle of the network's operation.
There are a few other blatant differences between deep learning and real evolution produced biological brains:
Oh, there are practically infinite differences. That's not terribly relevant. ;)
A predictive model rarely exactly mimics that which it models
Digital electronics uses discrete time steps so all changes happen at the same time. Animal brains definitely don't have coordinated time steps.
That's not a problem at all.
Physics in general operates without quantized time, and yet we can simulate the universe to ridiculous precision.
Deep learning loves one-hot encoding and having neurons actually mean something. Animal brains are much more sub-symbolic and tolerant of individual neuron failure. Neurons don't mean things in animal brains; patterns of firing do.
Nonsense.
Animal brains are more tolerant of individual neuron failure because they have
Re: (Score:2)
"The largest ANN in existence is GPT-3, and it's still 3 orders of magnitude away from a human brain. 1,000 times smaller."
Want to have a guess how many neurons a bee has? You know, those little insects that can visually communicate with each other and navigate miles to and from their hive?
Biological brains are far more powerful not just in practice but in principle than ANNs.
Re: (Score:2)
Want to have a guess how many neurons a bee has? You know, those little insects that can visually communicate with each other and navigate miles to and from their hive?
Are you going to try to claim that GPT-3 isn't smarter than a bee?
How many bees are you aware of that can compose language that passes the Turing test?
A bee uses somewhere around 14 billion parameters to do its job.
An ANN tuned to do its job could do it with 1-2 orders of magnitude less.
ANNs are nearly always more efficient at their job than an NN, because they've been specifically trained to do that job.
Nature has no training mechanism as efficient as backpropagation.
Re: (Score:2)
"Are you going to try to claim that GPT-3 isn't smarter than a bee?"
GPT-3 isn't smart at all, and anyone who claims otherwise needs to lay off the Kool-Aid. It's a statistical regurgitator merged with a clever parser.
"ANNs are nearly always more efficient at their job than an NN, because they've been specifically trained to do that job."
Define efficient. Run them on biological hardware and they'd crawl.
"Nature has no training mechanism as efficient as backpropagation"
You truly are full of shit. Do check out the power requirements for training an ANN.
Re: (Score:2)
GPT-3 isn't smart at all, and anyone who claims otherwise needs to lay off the Kool-Aid. It's a statistical regurgitator merged with a clever parser.
I suspected this was the root of this discussion for you.
You have a magical definition of smart that no one else has. How exciting!
Define efficient. Run them on biological hardware and they'd crawl.
Work done on a per-neuron-analogue basis.
You truly are full of shit. Do check out the power requirements for training an ANN.
And what, pray tell, do you think the power requirements are of 100,000,000 years of evolution?
You truly are a dumb motherfucker.
Re: (Score:3)
"You have a magical definition of smart that no one else has"
What's yours then?
"And what, pray tell, do you think the power requirements are of 100,000,000 years of evolution?"
You mean the 100M (actually more like 1B but hey, you're not that clued up) that gave rise to humans who designed ANNs? Or did they just pop into existence from nowhere?
How much power do you think a bee requires to learn its abilities?
Re: (Score:2)
intelligence (n) : the ability to acquire and apply knowledge and skills.
Now, we have already agreed on the fact that an ANN (generally) lacks the ability to modify its network during runtime.
However, there is no reasonable argument to be made that the training process and subsequent operation do not demonstrate the ANN "acquiring and applying knowledge and skills".
You mean the 100M (actually more like 1B but hey, you're not that clued up) that gave rise to humans who designed ANNs? Or did they just pop into existence from nowhere?
1 Bya is before the Cambrian Explosion.
It was before there was any kind of ne
Re: (Score:2)
"You think that bee learns to bee after it's born. This is nonsensical."
Humans don't learn to "human" either. Some stuff is hard coded in the womb/egg/whatever, but bees DO learn where flowers are, and their language skills aren't innate either:
https://blogs.illinois.edu/vie... [illinois.edu]
Re: (Score:2)
Humans don't learn to "human" either. Some stuff is hard coded in the womb/egg/whatever, but bees DO learn where flowers are
A bee doesn't learn how to find flowers. A bee learns where flowers are. How they find the flowers is built into their evolutionary NN training.
An ANN can accomplish the same feat (indeed, the article we're discussing is literally about that)
and their language skills aren't innate either:
A bee doesn't learn to wiggle its ass. It learns to adapt its ass wiggling NN circuitry until it starts working.
An ANN can precisely reproduce this behavior as well.
When you can get a bee and an aphid to communicate, then we'll be having a real discussion.
You've r
Re: (Score:2)
How long did it take the universe to form the atoms in neurons? Hrr Hrr.
Re: Back propagation (Score:2)
> You have a magical definition of smart that no one else has. How exciting!
You are not discussing, you are insulting.
Re: (Score:2)
Try again.
This is how I deal with a hostile person who tries to press unsubstantiated claims with "You truly are full of shit."
Re: (Score:2)
"Every neuron in an animal brain can connect to thousands of other neurons."
Synaptic connections are not the only way that the brain communicates internally. Neurons communicate with electric fields as well. Physical connections and proximity do not limit connectivity in the human brain the way one would think from just observing synapses.
Furthermore, the basic processing component of the human brain is not the neuron. The basic processing component of the brain is repeated in each dendrite arm. There are signal processing gates that communicate along the pathway of the dendrite, and they are analog, transferring information in a range of voltages.
Re: (Score:2)
Synaptic connections are not the only way that the brain communicates internally. Neurons communicate with electric fields as well. Physical connections and proximity do not limit connectivity in the human brain the way one would think from just observing synapses.
I'm highly skeptical of this claim.
The voltages in the brain pretty much preclude such behavior.
They definitely communicate electrically via charge carriers (ions) though.
It sounds to me like you're claiming that disparate parts of the brain communicate via electric fields.
I would need to see some very solid evidence to believe that. Have you any citations?
Furthermore, the basic processing component of the human brain is not the neuron. The basic processing component of the brain is repeated in each dendrite arm. There are signal processing gates that communicate along the pathway of the dendrite, and they are analog, transferring information in a range of voltages.
Indeed. That's why I used connections for all of my scale comparisons, not neurons.
Re: Back propagation (Score:2)
Skepticism is good.
https://cordis.europa.eu/artic... [europa.eu]
https://www.sciencealert.com/s... [sciencealert.com]
There are many other articles and observations, and it has been observed in one form or another for more than a decade.
Re: (Score:2)
https://cordis.europa.eu/artic... [europa.eu]
The claim here is that the aggregate field (which they measure to be ~mV/mm - considerably larger than I thought it was) could create a background that altered the firing potential of neurons within that field. I can buy that. And while I'd judge your claim as technically true, I'd argue that "communication" is a strong word for that phenomenon. Rather, I'd classify it as a general sensitivity of neurons to the aggregate neuronal activity in their general area.
https://www.sciencealert.com/s... [sciencealert.com]
This one claims 2-6 mV/mm, which is even more impressive.
Re: Back propagation (Score:2)
I lost a whole set of links to peer reviewed papers when I changed phones. Been trying to get them back by searching, but the keywords are obscured by a cloud of foo.
Here's another one that points out some interesting effects:
https://www.sciencealert.com/n... [sciencealert.com]
Re: (Score:2)
However I don't mean to imply that evolution created biological brains are the only way to get to intelligence, only that serious differences between these and our artificial creations exist.
There's nothing wrong with that implication. It's a clear scientific theory which could be proved wrong ("is falsifiable") by creating an electronic brain which was intelligent. The key problem being that we don't have a proper working definition of what being "intelligent" means much beyond "I know it when I see it". There's clearly something pretty deep missing in current "deep learning" compared to actual biological brains, and I think the need for back propagation looks like a good hint for the kinds of are
Re: (Score:2)
There's nothing wrong with that implication.
Yes, there is.
There's no scientific evidence to back it up.
It's a clear scientific theory which could be proved wrong ("is falsifiable") by creating an electronic brain which was intelligent.
This sentence is confusing.
The Theory of Inability to Make Artificial Intelligence? I look forward to your citations on that one.
There is no such theory.
Some have hypothesized for various reasons, but none with any sound reasoning.
Right now, the most obvious difference between natural neural networks and their artificial counterparts are matters of scale.
Our most advanced system, which is capable of some pretty stunning emergent behavior, li
Re: (Score:2)
"some pretty stunning emergent behavior"
What, like GPT-3? Put most of the text on the internet through a Markov model and it would produce some impressive sounding text.
"without them knowing they're talking to an artificial neural network"
It's pretty easy to tell with these systems. Continue a single thread that builds on what's already been said -- i.e. requires historical context of the conversation so far -- and watch all these systems fall down hard.
They're not as smart as they appear, and ironically you're not as smart as you think you are.
Re: (Score:2)
What, like GPT-3? Put most of the text on the internet through a Markov model and it would produce some impressive sounding text.
Of course it can. Human brain cognition can be modeled very well with Hidden Markov Models.
Here's where I educate you about the meaning of the word emergent, though.
There is no explicit programming for a markov model in an ANN.
It's pretty easy to tell with these systems. Continue a single thread that builds on what's already been said -- i.e. requires historical context of the conversation so far -- and watch all these systems fall down hard.
Indeed. There will always be a test to look for a specific weakness in the system, just as there are tests to identify certain weaknesses in human cognition as well.
They're not as smart as they appear, and ironically you're not as smart as you think you are.
You again with your magical definition of smart.
As for my intelligence? I'm pretty sure it's clear to anyone follow
Re: (Score:2)
"Of course it can. Human brain cognition can be modeled very well with Hidden Markov Models."
HMMs can't extrapolate and/or come up with totally original ideas; they just work with the data they've got and mash it up a bit. Ditto ANNs.
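For what it's worth, the "mash it up" behaviour is easy to demonstrate with a plain (non-hidden) Markov chain. This toy sketch of mine can, by construction, only emit word transitions it has already seen:

```python
# A minimal word-level Markov chain: it recombines observed bigram
# transitions and can never produce an unseen transition, which is the
# sense in which it "mashes up" its data rather than extrapolating.
import random
from collections import defaultdict

corpus = "the brain is a network the network is not a brain".split()

chain = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    chain[a].append(b)            # record every observed bigram transition

def generate(start, n, seed=0):
    random.seed(seed)
    word, out = start, [start]
    for _ in range(n):
        if word not in chain:     # dead end: no observed continuation
            break
        word = random.choice(chain[word])
        out.append(word)
    return " ".join(out)

text = generate("the", 6)
words = text.split()
# Every adjacent pair in the output occurred in the corpus.
print(all(b in chain[a] for a, b in zip(words, words[1:])))
```

Whether large ANNs are doing anything qualitatively beyond this is exactly the point under dispute in this thread.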
Re: (Score:2)
HMMs can't extrapolate and/or come up with totally original ideas; they just work with the data they've got and mash it up a bit. Ditto ANNs.
There is no evidence that the human brain can come up with a "totally original idea".
You seem to be implying that brains do something more than just work with the data they've got and mash it up a bit. There is also no evidence of this.
Re: (Score:2)
You can argue the works of Shakespeare, music and similar are derived in some way from previous works, but certain mathematical theories people have come up with have little prior art; they're genuine originals. When an ANN can do that, rather than variations on a theme, THEN it'll have intelligence.
Re: (Score:2)
Mathematics cannot be a genuine original. What mathematics is modeling is the genuine original. The language of math just provides a way to describe what is observed in a reproducible way.
Re: (Score:2)
"Citation needed.
All available evidence is to the contrary."
Start with Pythagoras and work from there.
Re: (Score:2)
You leave me a bit in the position of Devil's advocate, a bit difficult since I have argued against the correctness of exactly these theories on here. I'll do my best.
The Theory of Inability to Make Artificial Intelligence? I look forward to your citations on that one.
There is no such theory.
The exact theory most often expounded is the inability to make a computational model of the mind, which is almost, but not quite, the same thing.
let's start here. [stanford.edu]
There are books and books and books about this, including some of the most famous texts in cognitive science, such as "The Emperor's New Mind" [wikipedia.org] by Roger Penrose, who has actual definite claims to be a scientist and presents considerable volumes of evidence in the claim that this is a theory rather than a hypothesis.
Re: (Score:2)
There are books and books and books about this, including some of the most famous texts in cognitive science, such as "The Emperor's New Mind" [wikipedia.org] by Roger Penrose, who has actual definite claims to be a scientist and presents considerable volumes of evidence in the claim that this is a theory rather than a hypothesis.
There is only one thing that can be inferred from the Chinese Room argument- and that is the fact that the Turing Test can't tell if something is intelligent.
Something I think all people generally agree with (particularly since the Turing test is long since beaten at this point)
The broader interpretation by some philosophers, that the Chinese Room argument somehow refutes the idea that the brain is no more than programmatic, is based on fallacious reasoning.
In particular, it literally makes the base
Re: (Score:2)
A good working definition of free will is the ability to use recursion. It fits the emergent phenomena we see in the universe, in that it has scale independent symmetry. By shifting perspective we can use the same computational power we have always used to look at smaller or larger viewpoints of the same concept. And we can do so infinitely in either direction.
Furthermore, humans, at least, can use their own cognition to "re-wire" their own brain, creating our own stimulus internally, from which to learn and remodel our own brain structure.
Re: (Score:2)
A good working definition of free will is the ability to use recursion.
Is it? Because I can write software that does that...
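To make that concrete, here is the sort of trivial counterexample being gestured at; any program can invoke itself:

```python
def factorial(n: int) -> int:
    """A function that calls itself: recursion, with no will involved."""
    return 1 if n <= 1 else n * factorial(n - 1)

print(factorial(5))  # 120
```

Mechanical self-reference is available to any Turing-complete system, so it can't by itself distinguish free will.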
Furthermore, humans, at least, can use their own cognition to "re-wire" their own brain, creating our own stimulus internally, from which to learn and remodel our own brain structure.
Indeed, they can. But conceptually speaking, this isn't difficult, even for the artificial variety. What's difficult there is getting a structure that rewires itself in a way that's helpful.
That's the origin of my claim that it appears that even neuroplasticity is an emergent quality of biological neural networks (i.e., we evolved the ability to cognitively rewire), since we are not the only creatures that do it, and not all creatures with "brains" do it
Re: (Score:2)
There is only one thing that can be inferred from the Chinese Room argument- and that is the fact that the Turing Test can't tell if something is intelligent.
I personally don't think that the Chinese Room argument has any real scientific basis. It's an angels on pinheads kind of thing, which basically starts from the (hidden) supposition that there is such a thing as a soul, machines don't have it and so machines can't be intelligent. There's a deep sophistry in the statement that the intelligence can't be in the system.
the Turing Test can't tell if something is intelligent.
The Turing test has some useful ideas. The original version does fail since there's not enough specificity. For the Turing test to work nowadays, you need to have a trained tester who knows how to test for key. I would argue that that is still useful and its failings are different from the ones people think they are. Systems like GPT-3 can still be called out by someone who understands them and, in searching for that way of calling them out, we are finding key new things that differentiate real intelligence from simulacrum.
Re: (Score:2)
The Turing test has some useful ideas. The original version does fail since there's not enough specificity. For the Turing test to work nowadays, you need to have a trained tester who knows how to test for key. I would argue that that is still useful and its failings are different from the ones people think they are. Systems like GPT-3 can still be called out by someone who understands them and, in searching for that way of calling them out, we are finding key new things that differentiate real intelligence from simulacrum.
Oh I agree entirely on its usefulness.
But it suffers from the one conclusion you can really glean from the Chinese Room argument: no Turing test can prove intelligence. You cannot disprove simulation.
Knowing a system, you can design more and more sophisticated tests to find deviations from a human, but you can, at the same time, devise more and more sophisticated tests to prove humans aren't intelligent using the same fallacious reasoning.
The turing test is useful. A test of intelligence it is not.
I
Re: (Score:2)
>Right now, the most obvious difference between natural neural networks and their artificial counterparts are matters of scale.
Lol, no. Right now, the MOST obvious difference is that they don't work ANYTHING alike. You can claim numbers all you want, but if the basic unit doesn't even represent the same thing in each system, THAT'S the biggest difference.
Re: (Score:2)
Lol, no. Right now, the MOST obvious difference is that they don't work ANYTHING alike. You can claim numbers all you want, but if the basic unit doesn't even represent the same thing in each system, THAT'S the biggest difference.
The basic unit does not need to represent the same thing in each system.
One is an alternative implementation of the other.
When discussing the disparity between the two implementations, the details of unit implementation are not important.
So, as said, the primary difference in the implementations is scale.
When scale parity is reached, then we can discuss whether or not the individual calculating apparatuses matter.
Re: (Score:2)
If I had not already commented I would mod you up.
I think the discussion of how the two models converge and are similar and endlessly comparing them with misleading metrics and definitions is a masturbatory and egocentric way of completely missing the opportunity they present. It is where the two systems diverge, and how they differ that can present us with something novel.
They are another set of eyes that see differently, and in doing so can pick up things we cannot. But they are not so different that th
Re: (Score:2)
I think the discussion of how the two models converge and are similar and endlessly comparing them with misleading metrics and definitions is a masturbatory and egocentric way of completely missing the opportunity they present. It is where the two systems diverge, and how they differ that can present us with something novel.
I don't implicitly disagree with this statement.
The differences between the systems are as much a part of their power as their similarity.
Ultimately, artificial systems have the potential to form superior networks with superior neuronal functionality. They're not at that point yet, of course. Not even close. Neurons are still vastly more functional than the simplified parameterization of artificial networks.
They are another set of eyes that see differently, and in doing so can pick up things we cannot. But they are not so different that their descriptions are completely inscrutable.
I think I'd agree with this assessment entirely.
Artificial networks do not seek to be a genuine r
Re:Back propagation (Score:4, Informative)
No.
Back propagation is absolutely not necessary. The only reason we use it is that it's often faster than other methods, but it is in no way essential. As far as the actual operation of the NN after training goes, back propagation is completely irrelevant.
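To make the point concrete, here's a minimal sketch (my own toy illustration, not from the thread) of training a tiny sigmoid unit on logical AND with random-perturbation hill climbing — no gradients and no back propagation anywhere:

```python
import math
import random

random.seed(0)

# Toy dataset: logical AND.
DATA = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

def forward(w, x):
    # Single sigmoid unit: w = (w1, w2, bias).
    s = w[0] * x[0] + w[1] * x[1] + w[2]
    return 1.0 / (1.0 + math.exp(-s))

def loss(w):
    return sum((forward(w, x) - y) ** 2 for x, y in DATA)

# Random-perturbation hill climbing: propose a Gaussian nudge to the
# weights and keep it only if the loss improves. No derivatives needed.
w = [0.0, 0.0, 0.0]
best = loss(w)
for _ in range(20000):
    cand = [wi + random.gauss(0, 0.5) for wi in w]
    c = loss(cand)
    if c < best:
        w, best = cand, c

preds = [round(forward(w, x)) for x, _ in DATA]
print(preds)  # should classify AND correctly: [0, 0, 0, 1]
```

It's wildly inefficient compared to backprop, which is exactly the point: backprop is an optimization convenience, not part of what a trained network *is*.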
NNs are absolutely nothing like the human brain. The comparison was stretched from the very beginning and it's absolutely absurd that it has survived this long. It comes really close to outright fraud.
Here's an interesting fact about ordinary feed-forward neural networks that you probably don't know: they're not Turing complete. They have very little computational power. In fact, any such network can be conceptually reduced to a lookup table as all they can do is map input states to output states. Not very exciting, is it? Still, this is the same kind of network used in those text-to-drawing programs! You can get a lot of mileage out of them, but they're not magic.
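The lookup-table reduction is easy to demonstrate directly. A sketch (weights arbitrary, chosen only for illustration): a fixed feed-forward net over a finite input domain is a pure function, so enumerating it gives a table that reproduces the network exactly:

```python
import itertools
import math

# A fixed two-layer feed-forward net on 3 binary inputs (weights arbitrary).
W1 = [[0.7, -1.2, 0.4], [1.5, 0.3, -0.8]]   # hidden layer, 2 tanh units
B1 = [0.1, -0.2]
W2 = [0.9, -1.1]                             # linear output unit
B2 = 0.05

def net(x):
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, B1)]
    return sum(w * hi for w, hi in zip(W2, h)) + B2

# The network's entire behavior over its finite input domain collapses
# into a dictionary: 8 inputs, 8 outputs, nothing more.
table = {x: net(x) for x in itertools.product((0, 1), repeat=3)}

assert all(table[x] == net(x) for x in table)  # the table *is* the network
```

Real inputs (images, token embeddings) make the domain astronomically large, but the principle is the same: no loops, no memory, just input-to-output mapping.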
As far as the NN analogy goes, it's trivial to come up with an equivalent structure that doesn't look anything like a poor man's idea of a brain. Give it a try and be impressed with yourself. Odds are good that you'll even accidentally come up with something with more computational power.
As for the rest, you might be interested in Lokhorst's somewhat famous paper Why I Am Not a Super Turing Machine [gjclokhorst.nl]. It's a short and easy read that seems to be exactly the sort of thing you'd be interested in.
Re: (Score:2)
NNs are absolutely nothing like the human brain.
They're approximations of biological neural networks, to varying degrees of faithfulness. To say, "absolutely nothing like" is, to quote someone, "really close to outright fraud."
Here's an interesting fact about ordinary feed-forward neural networks that you probably don't know: they're not Turing complete.
Unsure how that's relevant.
There's no evidence that the human brain is, either.
Can a sequence of neurons be made Turing complete? Maybe? Almost certainly?
Can an artificial neural network be made Turing complete? Absolutely. Trivially, in fact.
In the strictest sense, a feed-forward network isn't, but that's simply because "feed
Re: (Score:3)
Errrrr.... what? It's actually pretty trivial to prove that a human brain is Turing Complete. All you have to do is explain to someone how Turing Machines work and ask them how they'd simulate the running of an arbitrary TM with a given input. If they come up with an answer then they have proven that their brain is Turing Complete.
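The exercise described above really is mechanical. A minimal sketch of what "simulate the running of an arbitrary TM with a given input" means (the encoding here is my own choice for illustration):

```python
def run_tm(rules, tape, state="A", accept="HALT", max_steps=1000):
    """Simulate a one-tape Turing machine.

    rules maps (state, symbol) -> (write, move, next_state),
    with move in {-1, +1}. Blank cells read as '_'.
    """
    tape = dict(enumerate(tape))
    pos = 0
    for _ in range(max_steps):
        if state == accept:
            return "".join(tape.get(i, "_")
                           for i in range(min(tape), max(tape) + 1))
        write, move, state = rules[(state, tape.get(pos, "_"))]
        tape[pos] = write
        pos += move
    raise RuntimeError("step budget exhausted")

# Example machine: flip every bit, halting at the first blank cell.
FLIP = {
    ("A", "0"): ("1", +1, "A"),
    ("A", "1"): ("0", +1, "A"),
    ("A", "_"): ("_", +1, "HALT"),
}
print(run_tm(FLIP, "1011"))  # -> 0100_
```

A person following the rules table with pen and paper is doing exactly what this loop does, which is the crux of the disagreement below: whether the pen and paper count as part of the brain's computation.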
Re: (Score:2)
Errrrr.... what? It's actually pretty trivial to prove that a human brain is Turing Complete. All you have to do is explain to someone how Turing Machines work and ask them how they'd simulate the running of an arbitrary TM with a given input.
This is untrue. Describing the function of something is not simulating it.
They must be able to actually simulate it. Further, they must be able to simulate all Turing-computable functions, in any arbitrary arrangement.
At first glance, it's easy to say, "of course a human could do that," and a human with pen and paper certainly could.
But there's no evidence I can think of that suggest that the human brain can faithfully simulate such a thing. It simply doesn't store arbitrarily symbolic info
Re: (Score:2)
That's all you need to establish that a brain is Turing Complete really. For something to be Turing Complete it just has to be able to emulate the actions of some description of a TM with some input. That's it. The brain figures out how to do the emulation, and in this example the brain calls upon the body to control pen and paper to keep track of the state. Whether or not the act
Re: (Score:2)
That's all you need to establish that a brain is Turing Complete really. For something to be Turing Complete it just has to be able to emulate the actions of some description of a TM with some input. That's it.
To be Turing complete, it must be able to simulate the function of every possible Turing computation to infinite scale, in principle.
The fact that the human brain performs, at best, as a bounded, non-deterministic machine strongly indicates it cannot be Turing complete.
To prove either way is probably impossible, though.
It's also very easy to demonstrate that the brain is not Turing complete in all conditions.
For example, was a brain Turing complete before writing was discovered? Was it turin
Re: (Score:3)
And other than a person getting bored, or forgetting something, or losing track of where they are, or running out of space or life, they can. Again, this is all that's required.
Sorry, but this is a math problem that you're thinking of like an engineering problem. It's just. Not. The point.
All models are wrong ... (Score:3)
Some are just more usefully wrong than others.
Re: (Score:2)
Re: (Score:2)
But nobody thinks of AlphaZero or GPT3 etc. as brain models in the first place.
I think you'll find a thread just above where a poster is positing more or less exactly that.
Re: (Score:2)
hydraulics, telegraphs and computers (Score:2, Interesting)
Every technological innovation has led to theories of how the brain works. First, it was hydraulics because of aquifer technology, then there were the telegraph models of how the brain works, and now computer models. But the CNS and nervous system are like all organ systems, they are a part of a living organism and not just some box in the corner of the lab running calculations. When you study in detail how the brain works it becomes obvious that being a "living organism" is something very different than a
Re: (Score:2)
Re: (Score:2)
Computers can model things, but they aren't those things and never will be like them. They work on different principles entirely.
Taken literally, that's obviously false, since a brain is a "computer" of some sort. And software is infinitely malleable, even when running on digital computers. We can "simulate" or "model" anything, including the same exact processes of a brain. In which case there is no functional difference between them.
But I assume you mean: "The software we know how to create today will never be like a (e.g. human) brain".
That's true, since we don't know nearly enough about the brain to simulate or even model it. And
Re: (Score:2)
I am claiming that biology does not operate on the same principles as silicon. Do you actually disagree with that? I have been a neuroscientist since the 1980s, how long have you studied brain structure and function? Putting neurons in a dish, which I have done, has very little in common with a human brain, or a fish brain for that matter. Ridiculously unreasonable? Sure. Modeling a brain function is not the same as performing a brain function, or do you not comprehend that?
Re: (Score:2)
I am claiming that biology does not operate on the same principles as silicon.
And I am claiming that's not relevant, for the reasons given.
Do you actually disagree with that?
Do I disagree with the assertion you just made? No. Do I disagree with the conclusions derived from it? Absolutely.
I have been a neuroscientist since the 1980s, how long have you studied brain structure and function?
A neuroscientist relying on an argument from authority?
Forgive me if I don't believe you.
Putting neurons in a dish, which I have done, has very little in common with a human brain, or a fish brain for that matter.
And yet a brain consists of little more than that scaled up by a factor of trillions.
Ridiculously unreasonable? Sure. Modeling a brain function is not the same as performing a brain function, or do you not comprehend that?
A bar that's not relevant to anything anywhere.
ANNs perform functions. That they also serve as a model is secondary.
What you're doing here, is engaging in a l
Re: (Score:2)
The funny part is, I am agreeing with the article in question, and you are not. And yes, if you are not a neuroscientist, then I can claim that I know more about the subject than you. A lot more. So you can puff about argument from authority, but it just makes you look like a whiner.
Re: (Score:2)
The funny part is, I am agreeing with the article in question, and you are not.
No, you're not in the slightest.
That article makes no such claims as yours. That article is about biases in research.
And yes, if you are not a neuroscientist, then I can claim that I know more about the subject than you.
If you want to engage in fallacious reasoning, sure, you can.
A lot more.
But cannot demonstrate it. One wonders what the utility of this claimed knowledge is, then?
So you can puff about argument from authority, but it just makes you look like a whiner.
I'll translate.
Since I can't make a logically sound argument to back up my assertion, I shall argue from authority, and if you don't like that, neener neener.
Your actual argument is that things that have different fundamental operating pr
Re: (Score:2)
As an outsider, it is clear someone is not coming off well in this discussion.
And it's not the Oregonian.
Re: (Score:2)
That's true, you have been maintaining all along that the MIT researchers are mistaken, and that I am mistaken. But you haven't presented any argument as to why we are mistaken. So you have not come across as knowledgeable about neuroscience. Can you tell us anything about glutamatergic neurotransmission, how it works, how glutamate is accumulated in synaptic vesicles, is released, taken back up, recycled through glutamine in astrocytes? Can you tell us anything about brain structure? I can fill you in if
Re: (Score:2)
That's true, you have been maintaining all along that the MIT researchers are mistaken, and that I am mistaken.
You replied to the wrong person, Mr. Neuroscientist ;)
Further, this claim is demonstrably false.
I have maintained, all along, that the MIT researchers made a very different claim than you have made, and that I fully agree with their conclusion.
But you haven't presented any argument as to why we are mistaken.
You. Not them.
And yes, I have.
I demonstrated your lack of evidence and use of fallacious arguing.
So you have not come across as knowledgeable about neuroscience.
Nor did I need to. It was easy enough to demonstrate that you have a poor grasp on logic.
Can you tell us anything about glutamatergic neurotransmission, how it works, how glutamate is accumulated in synaptic vesicles, is released, taken back up, recycled through glutamine in astrocytes?
Wait, are you trying to impress me with your understanding of neurochemistry?
Re: (Score:2)
You have no idea about anything in neuroscience do you? What is your degree in? You get more and more nasty with each snide reply because you don't know what you are talking about. You would never be this way if you were in the same room with me. Your arrogance is truly obnoxious. Fucking ancient? Wow, you really are an asshole aren't you? Good luck in the world dude. You have no published papers, but you know everything, don't you? Trying to have a discussion with an asshole is just a waste of time.
Re: (Score:2)
You have no idea about anything in neuroscience do you? What is your degree in?
More of this shit?
You get more and more nasty with each snide reply because you don't know what you are talking about.
I think you're going senile.
I'd say hostilities started when you started pulling your bullshit argument from authority.
You would never be this way if you were in the same room with me.
You underestimate me, or overestimate yourself. That seems to be a trend for you.
Wow, you really are an asshole aren't you?
Yes.
Good luck in the world dude.
While luck has been a part of my success, most of it is just being very good at what I do.
So I'm not terribly worried about my luck.
You have no published papers, but you know everything, don't you?
And you have none on this topic, but you know everything don't you?
Trying to have a discussion with an asshole is just a waste of time.
You didn't try to have a discussion, what you tried to do was misattribute your knowledg
Re: (Score:2)
Just in case you thought I made up my original claim, see here:
https://pubmed.ncbi.nlm.nih.go... [nih.gov]
Re: (Score:2)
Your link states: "Although not every technological breakthrough has been used as an analogy or model of brain function, the list of those that have is very long."
YOUR OWN LINK STATES YOU ARE INCORRECT.
https://sci-hub.se/10.1353/pbm.2002.0033
Re: (Score:2)
I mean: "how long have you studied brain structure and function?"
Never!
"And yes, if you are not a neuroscientist, then I can claim that I know more about the subject than you. A lot more. So you can puff about argument from authority, but it just makes you look like a whiner."
This whole discussion is making you look rather foolish.
We have always described the brain poorly (Score:2)
In older, 18th, and 19th books the brain is illustrated with images of steam engines and gears. Again, the dominant technology of the day.
Y
Re: (Score:2)
"Even humans do not multitask well at all"
Not true, I am successfully typing this post while breathing, regulating my body temperature, hearing sounds in the background, and performing any number of other tasks.
Re: (Score:2)
Another example closer to where I am, a student just walked up and asked me a question about a variable in a program. I clearly needed to stop typing and answer his question. Not to try to perform
In unrelated news (Score:2)
The myelin sheath... (Score:2)
...counts for computation in the brain. Add hormones. Add various organ interconnections. Add gut flora.
Human cognition ultimately will have to be modeled at the atomic level, for most of an entire body (you can probably simulate the body of a quadriplegic, and skip arms and legs).
Biology is doing something way more complex than we imagine.
Re: (Score:2)
You are overlooking probably the most important organ that does most of a guy's thinking.
Re: (Score:2)
To wit: Dendrites perform vast amounts of analog calculations.
http://www.mit.edu/~9.54/fall1... [mit.edu]
AI is missing something, and has been for decades (Score:2)
Making neural networks and other simulations run ever faster or trying to throw a database of facts at an AI was never going to substitute for that missing part.
Do AI researchers recognise this, or do they just think they need to train their models better ?
Re: (Score:2)
Just not at the crucial level where learning is done.
They wrote "Neural Network" in the headline where they meant "Artificial Neural Network", which was confusing rather than total nonsense.
In other words: (Score:2)
Try again, humans. Maybe you'll get it right one of these centuries.
Re: In other words: (Score:2)
That's not totally fair. We may not get to general intelligence, but we have already got all kinds of useful things out of AI research. DALL-E does some really amazing stuff that people believed was impossible a few years ago. (Almost) self-driving Teslas are pretty amazing. Reading handwritten letters is pretty amazing.
No caution needed (Score:2)
"Neural networks" have nothing to do with the brain. Just throw caution to the wind, and reject any claims to the contrary by PR flacks and so-called know-nothing "journalists."