Just How Computationally Complex Is a Single Brain Neuron? (quantamagazine.org)
Long-time Slashdot reader Artem S. Tashkinov quotes Quanta magazine:
Today, the most powerful artificial intelligence systems employ a type of machine learning called deep learning. Their algorithms learn by processing massive amounts of data through hidden layers of interconnected nodes, referred to as deep neural networks. As their name suggests, deep neural networks were inspired by the real neural networks in the brain, with the nodes modeled after real neurons — or, at least, after what neuroscientists knew about neurons back in the 1950s, when an influential neuron model called the perceptron was born. Since then, our understanding of the computational complexity of single neurons has dramatically expanded, and biological neurons are now known to be more complex than artificial ones. But by how much?
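For context, the perceptron mentioned above reduces a neuron to a weighted sum of its inputs followed by a hard threshold. A minimal sketch in Python (the weights are hand-picked to compute logical AND; all names are illustrative):

```python
import numpy as np

def perceptron(x, w, b):
    """1950s-style perceptron: weighted sum of inputs, then a hard threshold."""
    return 1 if np.dot(w, x) + b > 0 else 0

# Hand-picked weights that make the unit compute logical AND.
w = np.array([1.0, 1.0])
b = -1.5
outputs = [perceptron(np.array(x), w, b) for x in [(0, 0), (0, 1), (1, 0), (1, 1)]]
# outputs -> [0, 0, 0, 1]
```

The paper's point is that one biological neuron behaves more like a five-to-eight-layer stack of such units than like a single one.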
To find out, David Beniaguev, Idan Segev and Michael London, all at the Hebrew University of Jerusalem, trained an artificial deep neural network to mimic the computations of a simulated biological neuron. They showed that a deep neural network requires between five and eight layers of interconnected "neurons" to represent the complexity of one single biological neuron. Even the authors did not anticipate such complexity. "I thought it would be simpler and smaller," said Beniaguev. He expected that three or four layers would be enough to capture the computations performed within the cell.
Timothy Lillicrap, who designs decision-making algorithms at the Google-owned AI company DeepMind, said the new result suggests that it might be necessary to rethink the old tradition of loosely comparing a neuron in the brain to a neuron in the context of machine learning.
The paper's authors are now calling for changes in state-of-the-art deep network architecture in AI "to make it closer to how the brain works."
Dendrite growth (Score:5, Insightful)
That is just activation, it doesn't take into consideration how the neurons grow connections to each other.
Re:Dendrite growth (Score:5, Informative)
I am a neuroscientist and I can assure you that we have barely scratched the surface. Consider this, we still don't know how memory works. We have lots of intriguing theories, and plenty of data, but the key underlying molecular and cellular mechanisms remain unknown. Then take consciousness as another example. No clue there. There are many fascinating theories, one of which may turn out to be on the right track eventually, but right now it is a complete unknown. We really don't know how to explain either of these basic brain functions. So yes, we have barely scratched the surface, and my guess is they will need to add more layers than 8 after they have a better understanding of how things actually work. Then you need to think about how many billions of neurons there are, and how many trillions of synapses, and you start to get the idea of the complexity that we are trying to deal with.
Re: (Score:2)
Consider this, we still don't know how memory works. We have lots of intriguing theories, and plenty of data, but the key underlying molecular and cellular mechanisms remain unknown.
Interestingly, that is one thing computer neural networks don't do very well at all. It's not an easy problem, but it's important. Figuring out how memory works would be a major step forward in AI, if not the major step forward.
Re: Dendrite growth (Score:2)
But, but the brain's just complex binary circuitry (Score:2)
I mocked you then and I'm mocking you now.
Indeed. (Score:4, Interesting)
Science might model me as an empty mechanism responding predictably to stimulus, but from "in here" I can say with confidence that I experience qualia [wikipedia.org]. Whether our scientific models have room for authentic subjective experience or not, that is my "level zero" reality and all the scientific understanding that I do have was facilitated to me by means of it.
Do computers have qualia? If we replace the perceptron with something more sophisticated, will it have qualia then? If my brain is mapped, neuron-by-neuron, into a computational model which passes any Turing test we can invent, including adamantly insisting that it has qualia, will it actually have qualia?
I wish these questions were answerable. I think the prospect of digital consciousness is wildly tantalizing. But there isn't any objective way of detecting qualia in anything. For all I know, I am the only person in the universe who actually experiences it, and all others are just bio-mechanical zombies that talk about it because they were programmed to.
Re: (Score:2)
First I might recommend figuring out what neurons **do** as well as how.
Re: (Score:2)
For all I know, I am the only person in the universe who actually experiences it, and all others are just bio-mechanical zombies that talk about it because they were programmed to.
I asked myself that once, and after mulling it over conversationally in my head and after determining all 8 billion humans were just meat zombies, decided to go just one further.
Re: (Score:2)
Science might model me as an empty mechanism responding predictably to stimulus, but from "in here" I can say with confidence that I experience qualia.
So how do you tell the difference between experiencing it, and being programmed to think you're experiencing it?
Re: (Score:2)
To answer that, I must draw an important distinction.
If I am looking another person in the eye, and that person tells me "I am experiencing this or that," I have no way of knowing whether that person is actually having an experience, or is just making noises because they are a zombie that has been biologically "programmed" to say "I am experiencing this."
That is to say, there is no way to test for "consciousness" in another person.
When I am reflecting on my own thoughts, the story is different. My experie
Re: (Score:2)
Why are you so sure that there's a difference?
Mice: We will need to replace Arthur's brain (Score:2)
with a computer. Nobody will know the difference.
Arthur: I will!
Mice: No you won't, you'll be programmed not to.
Adam's nailed it pretty well.
The big thing is that it does not matter whether computers experience consciousness in our sense of the word. It only matters that they behave intelligently. And in any case, our own sense of conscious control is rather vague.
Re: (Score:2)
The best I can do is make some practical inferences like...when I experience pain I say ouch, and when other people receive injuries (that would hurt me) they also say ouch, so probably they are also having experiences.
Maybe this is the best you can think of but it’s not the best we can do at all. You assume that your brain isn’t subject to physics and that physics isn’t the same everywhere. With maximal understanding you could actually see the standard model of physics completely enact those sensations, responses, and emotions, in real time too if your technology and sensors were good enough. Thus it is empirically possible to export that to other systems using your own validation for those systems.
Re: (Score:2)
I think you are looking for a disagreement where there isn't one.
For example, I don't assume "my brain isn't subject to physics" nor that "that physics isn't the same everywhere." Nor am I asserting that "meat computers are magic". The entire thrust of your post goes beyond the much simpler point that I was making.
When our sense organs are stimulated, neurons fire. That's a readily observed fact, as are several other related facts about the brain, and I am not challenging any of them. You could maybe ho
Re: (Score:2)
You can say that, but you can't prove it. Which makes your claim as valid as an AI entity stating the same.
Re: (Score:2)
Well, current Physics has no place for qualia. It is just not part of the theory at all, hence purely physical objects cannot have it according to current theory. There are some quasi-mystical bullshit attempts to try to explain this away, but they are just nonsense. The fact of the matter is qualia is real and we have absolutely no explanation for it and no clue what it is or how it works.
As to computers, computers cannot have qualia (again, there is no mechanism for it), unless qualia is completely passiv
Re: (Score:2)
Science might model me as an empty mechanism responding predictably to stimulus, but from "in here" I can say with confidence that I experience qualia [wikipedia.org].
That's exactly what a philosophical zombie would be programmed to say!
Re: (Score:2)
Yeah, I'm being mean to all the morons who just a couple years ago were talking about uploading your consciousness to a machine because "the brain's...".
I mocked you then and I'm mocking you now.
I am with you in that. No idea how these people ever got that demented idea, but looking at some other things currently happening it really is true that the human capacity for stupidity is unlimited.
Re: (Score:2)
First, figure out the target equivalency.
Re: (Score:2)
Encoding equivalent complexity is one thing - operating a conscious mind may be another altogether.
Every mind we know of operates on a continuous, analog, asynchronous substrate, right at the boundary where quantum uncertainty influences every aspect without dominating more orderly behaviors, and that may be necessary for consciousness.
Simulating a conscious mind as software running within a clocked digital environment may be inherently impossible. You might need dedicated hardware for such a thing - nothi
Re: But, but the brain's just complex binary circu (Score:2)
I don't think that the tech is the actual problem. We can simulate quantum effects well enough for every purpose, it "just" tends to cost an exponential amount of time and memory. But if consciousness is, at its base, some kind of quantum effect, then it's just a matter of building a computer that's big enough.
But what if there's an ingredient that's not the sum of the brain's physics? A "soul"?
Imagine a cell phone: you could build one from scratch, but you wouldn't be able to surf the web or make any calls
Re: (Score:2)
Indeed. The cell-phone analogy is a nice one.
The fact of the matter is that we only know there is a rather large hole in currently known Physics when you ask about consciousness and intelligence. Both seem not to be covered at all in what we know physical objects can do. There is no telling how (or if) that hole will eventually be closed, but it will at the very least require a fundamental extension. Hence claiming a human mind is just something going on in a brain is pure conjecture, not actual Science.
Re: (Score:2)
>The fact of the matter is that we only know there is a rather large hole in currently known Physics when you ask about consciousness and intelligence.
Not really, that's actually a huge assumption. It may well be that our understanding of physics is completely fine (at least so far as brain functioning is concerned), and the only hole is that our understanding of the biological mechanisms of consciousness is still almost completely nonexistent.
I mean, we're barely beginning to understand the intricacies
Re: (Score:2)
>We can simulate quantum effects well enough for every purpose,
Actually, no, we really, REALLY, can't.
What we can do is extremely accurately predict the probability distribution of a range of possible outcomes from a quantum system, but that's a sum of an infinite number of possibilities, most of which mostly cancel out due to destructive interference. But unless you're talking about an experiment specifically designed to severely restrict those probabilities there's still an immense number of possible out
Re: (Score:2)
according to broadly accepted theory it's theoretically impossible to predict which specific outcome will result.
You're holding the simulation to a higher standard than reality itself - the reason we can't predict is because the outcome is random. We can simulate random, no problem. We do this every day.
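To make "we can simulate random" concrete: given the probability distribution that quantum theory predicts, a simulation just samples from it. A toy sketch (the 0.36/0.64 split is an arbitrary illustrative distribution, not from any real experiment):

```python
import random

outcomes = ["spin_up", "spin_down"]
probabilities = [0.36, 0.64]  # hypothetical QM-predicted distribution

rng = random.Random(42)  # seeded only so the run is reproducible

def measure():
    """Sample one outcome -- which is all a simulation needs to do, if the
    specific result is irreducibly random as the standard interpretation holds."""
    return rng.choices(outcomes, weights=probabilities, k=1)[0]

samples = [measure() for _ in range(10_000)]
frac_up = samples.count("spin_up") / len(samples)
# frac_up comes out close to 0.36
```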
But for quantum influence on a single synapse or internal cell action [...]
The quantum influence on a single synapse is irrelevant. A single synapse or neuron is irrelevant. Large-number quantum mechanical effects of many synapses or neurons are what make a difference, and then we're back in business.
(Side note: did you know that your cell phone has all kinds of quantum effects? Most of the
Re: (Score:2)
> the reason we can't predict is because the outcome is random.
To be clear, we don't actually know that. The most popular interpretation of Quantum Mechanics does make that assertion, but we have no experimental data to prove it, in fact I'm not sure there's any theoretical way to prove such a thing - all we know for certain is the probability distribution of the outcome.
By way of contrast, the Pilot Wave family of interpretations, which after long being set aside is regaining proponents and growing in
Re: (Score:2)
The problem with this approach is that it borders on attributing the limits of scientific understanding to the metaphysical, divine, etc.
Everything was outside of scientific understanding at one time. Science is just the process of us documenting our reality.
If your hypothesis that our brains are hot-wired into some other-dimensional entity can be tested and "proven" then it can become part of our scientific understanding.
Indeed. At this time it is important to remember we do not know, and to not come up with any mystical bullshit and mistake that for "fact" or "Science". Science has no problem with the unknown, just some humans do. Although it is possible we will not find out or never get anything reliable. The complexities involved seem pretty extreme, and the only tools to help us (computers) seem to be currently peaking in their power at a point that is nowhere near these complexities.
Re: But, but the brain's just complex binary circ (Score:2)
If your hypothesis that our brains are hot-wired into some other-dimensional entity can be tested and "proven" then it can become part of our scientific understanding.
In principle yes, but practically I'm afraid it's not that simple.
To test something like this, you'd have to identify a mechanism first. How could the brain communicate?
Our understanding of physics is that there are four - and only four - fundamental interactions: strong, weak, electromagnetic and gravitational. Strong and weak are intranuclear only. Electromagnetic we can pretty much rule out (our brain doesn't broadcast radio waves; we can measure that). And for gravity, it's too small to make any signifi
Re: (Score:2)
>To test something like this, you'd have to identify a mechanism first. How could the brain communicate?
Eventually - but you're jumping *way* down the road of scientific discovery. Barring brilliant inspiration or a lucky find, what you'd probably really do is:
First: monitor things *very* closely, looking for *any* behavior that's not what you'd expect from a strictly physical brain - anything that would turn out differently depending on whether there was a "soul" involved or not. You don't need to kno
Re: (Score:2)
You don't need to know how it works, just to prove whether or not that something is at work that causes things to behave in a manner differently than predicted strictly by the known forces.
The problem is that we're not able to replicate what a "strictly physical" brain is supposed to do. We don't (even) know how that works.
If you can prove that some such thing exists [...] then you can start trying to interfere with it - e.g. cause the brain to resort to operating in a strictly physically predictable manner.
I think the problem here is too complex. And "physical manner" tends to be a real bitch. For example electrons and superconductivity: from the properties of a single electron it's essentially impossible to derive something like superconductivity, but when a whole bunch of electrons come together in a crystal, under specific conditions, "quasi-particle" behavior emerges. Ele
Re: (Score:2)
> our brain doesn't purposefully send out EM waves
You sure of that? I've never heard of any experiment even claiming to prove such a thing. Heck, I haven't even heard of any equipment sensitive enough to measure the intricate patterns of EM radiation radiated by our brains, much less the subtle interactions they unavoidably have on the environment, including other such massively complex transmitters nearby.
Now, I wouldn't bet on there being any intentional interactions... but I would bet quite a bit that we'
Re: (Score:2)
And then we can travel to the stars. Not in our present form.
I guess that depends on whose brain (Score:5, Funny)
Re:I guess that depends on whose brain (Score:5, Funny)
Re: (Score:2)
I've known people who had many neurons (presumably) but very little coherent thought
Here is a little secret: That is the majority of people.
Complexity (Score:2)
Re: (Score:2)
"Every neuron is a little compute device"
Yes, each one made up of nucleic acids, carbohydrates, lipids, about 40 million proteins and 100 trillion atoms.
Re: (Score:2)
The complexity of each neuron is more akin to a large petrochemical plant.
Re: (Score:2)
The complexity of each neuron is more akin to a large petrochemical plant.
That is underestimating it rather grossly....
Re: (Score:2)
Overestimating it, I think. If you'd like to go back to the original papers, I'll recommend the often cited paper by McCulloch and Pitts, in 1943, about the computational complexity of neurons.
https://www.cs.cmu.edu/~./epxi... [cmu.edu]
It's stiff reading, but it's the foundation of a lot of later work in neurophysiology and, unsurprisingly, in AI. The bits about local feedback loops, and understanding them, are very insightful. The grandiose philosophical claims at the end are very insightful as well.
This still does not account for everything (Score:5, Informative)
Re: (Score:2, Interesting)
Re:This still does not account for everything (Score:5, Interesting)
The largest artificial neural net models are actually getting quite close to human "brain size" in terms of complexity.
As you say, the human brain has around 80-90B neurons, and the cortex alone (main part responsible for intelligence) has around 125T synapses (inter-neuron connections).
The ANN equivalent to synapses are its connection weights, aka parameters, and these are largely what limit their size - the amount of data and compute needed to train them. One of the largest models from a US company is OpenAI's GPT-3 which has around 175B parameters (synapses) and China's similar Wu Dao 2.0 has about 10x that, so about 1.75T "synapses" !! These things are growing very fast! The "AlexNet" model that started the "deep learning" revolution in 2012 only had 62M parameters, so that's a growth factor of 10,000 in less than a decade. We (well, Google, or OpenAI) could be running human-like 100T parameter models in a year or two assuming this keeps up.
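The scaling figures in the post are easy to sanity-check (numbers as quoted above; the extrapolation assumes the trend continues):

```python
alexnet = 62e6        # AlexNet parameters, 2012
gpt3 = 175e9          # GPT-3 parameters, 2020
wudao2 = 1.75e12      # Wu Dao 2.0 parameters, per the post
cortex = 125e12       # estimated human cortical synapses, per the post

growth = wudao2 / alexnet      # model-size growth since 2012 (~28,000x)
shortfall = cortex / wudao2    # remaining gap to cortex scale (~71x)

print(f"growth since AlexNet: {growth:,.0f}x, gap to cortex: {shortfall:.0f}x")
```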
The real thing stopping us from simulating a human brain, or building a human-inspired AI of similar complexity is that we simply don't know enough about how the brain is wired (what the detailed architecture is) and how it learns. If compute was the only issue, we should at least be building rat and birds brains by now, but of course we have no better idea how those work either.
I tend to think that when we do figure out how to design brain-inspired intelligent architectures, we'll be able to do it at a much higher and more efficient level than synapse-by-synapse simulation, maybe at the functional (vs synapse) level of simulating entire cortical mini columns, or perhaps more likely the first intelligent architectures will be extensions of things like GPT-3 that really aren't based on the human brain at all (other than being based on ANNs).
Re: This still does not account for everything (Score:2)
Re: (Score:2)
Serious question: Do you think your sex drive is any different than one that could be built into a robot (should anyone be inclined to do so)?
Our sex drive is purely mechanical and genetically determined. You see curves and jiggle, you get radar lock and a boner!
How you feel/think about what's going on is another matter - but that's just your brain after-the-fact coming up with explanations for what your body is doing. A robot with AGI and a hardwired (cf genetically determined) sex drive might similarly re
Re:This still does not account for everything (Score:5, Interesting)
The ANN equivalent to synapses are its connection weights, aka parameters, and these are largely what limit their size
Yes, but it’s not clear that we are properly capturing all of the important function from them and aren’t missing a thing or two.
The real thing stopping us from simulating a human brain, or building a human-inspired AI of similar complexity is that we simply don't know enough about how the brain is wired (what the detailed architecture is) and how it learns
Absolutely, it’s been fairly clear for perhaps 20 years at least that we would be limited by our understanding and approach, not by hardware. The big focus lately has been to throw the most computing power or the most detailed and accurate sensors (for physical applications, anyhow) at the problem, when it’s pretty clear that actual living creatures make do with rather poor sensors, incomplete information, and often (for the simple ones) even a lack of raw computing power. It’s my opinion that it’s hubris to think we can reverse engineer over three billion years of evolution in only about a hundred years of computing technology, but it’s also hubris to think that somehow meat computers are magically special and can’t ever be artificially copied in any way and retain the core essence of what it means to be sentient. 5, 50, or 500 years - it will happen if we keep progressing.
Artificial intelligence will not be like our brain (Score:2)
Different hardware. Different constraints. Engineered, not evolved.
Sure, we learn from nature, but we will engineer our own thing in our own way.
Artificial neurons are quite different from natural ones. Much simpler. Much faster. And there is much more to AI than just artificial neurons.
Re: (Score:2)
Agreed.
It'll be a long time before we're not only able to build artificial brains, but also to have them even remotely as efficient as a bio-brain. Compare to how long we've been building CPUs and yet asynchronous data-flow designs are still a research project.
There are similar attempts underway to build spiking neural net brain models, maybe partly based on the same asynchronous/dataflow (vs clocked) efficiency goals, although it rather smacks of cargo-cult design (let's add spikes - that'll make it intell
Re: (Score:2)
Another hugely interesting property is the timing aspect. Synapses are known to respond not just to the presence of action potential but the inter-spike timings.
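The simplest model that exhibits timing-dependent behavior is the leaky integrate-and-fire neuron, where output spike times depend on when and how strongly input arrives. A rough sketch (all parameters are arbitrary illustrative values, not fit to biology):

```python
def lif_simulate(input_current, dt=1.0, tau=10.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire: the membrane potential v leaks toward zero
    with time constant tau while integrating input; crossing the threshold
    emits a spike and resets v."""
    v = 0.0
    spike_times = []
    for step, i_in in enumerate(input_current):
        v += dt * (-v / tau + i_in)   # forward-Euler leaky integration
        if v >= v_thresh:
            spike_times.append(step * dt)
            v = v_reset
    return spike_times

weak = lif_simulate([0.05] * 100)    # subthreshold drive: never spikes
strong = lif_simulate([0.2] * 100)   # stronger drive: regular spike train
```

Even this toy already shows how input strength is re-encoded as spike timing; real synapses go much further, responding to the detailed inter-spike intervals themselves.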
Re: (Score:2)
Re: (Score:2)
This is not intended to account for everything. It is intended to show things are a lot more complex than expected. Well, expected by morons, but you find a lot of these in the sciences, people very capable in a narrow sub-specialization and completely clueless outside of it.
Re: This still does not account for everything (Score:2)
All of this goes to show that we are still decades away from being able to mimic moderately complex brains like the ones in birds and larger mammals.
Yah, I'll say... at least, but moderately complex? Our sense of scale is waaaay off on this. I'll tell you right now, this is going to be another flying car thing where people just don't grasp what needs to happen between now and some point in the future we could definitely get to, but with enough time. We're not at the point we can call the mind of a predatory insect or a jumping spider "simple" yet. We've just barely scratched the surface of how living things do any kind of pattern recognition, and th
Biomimicry (Score:2)
What is the obsession with biomimicry? I mean, biological systems suck. They have many constraints. Evolution is a haphazard deal. If Karl Benz had focused on biomimicry to build an automobile, our fastest vehicle would be no faster than a cheetah and tire out in 20 seconds. Aeroplanes would never carry humans and rockets wouldn't even exist. While it may suck at first, synthetic will exceed bio. Buildings would be made out of calcium instead of steel and never exceed a couple of storeys. We sh
Re: (Score:2)
What is the obsession with biomimicry? I mean, biological systems suck.
Because the brain still does things computers can't do.
Re: (Score:2)
What is the obsession with biomimicry? I mean, biological systems suck.
Because the brain still does things computers can't do.
Even worse: A smart human person (and no, we do _not_ know whether it is just the brain, that is conjecture) can do things for which we do not even have a credible theory of how they could be done with computations, given the limits of this physical universe.
Re: (Score:2)
A smart human person (and no, we do _not_ know whether it is just the brain, that is conjecture) can do things for which we do not even have a credible theory of how they could be done with computations, given the limits of this physical universe.
Which sorts of things are you thinking of here? I like seeing these kinds of examples.
Re: Biomimicry (Score:2)
Re: (Score:2)
What about automobiles? We abandoned walking propulsion because it was too hard, and then did a hell of a lot better. Only recently are we able to make walking machines. Airplanes -- we ditched trying to mimic flapping and feathers and did better. And rockets, ..even though jellyfish do it .. I am sure humans discovered the reaction principle on their own outside of biology .. and that has taken us to the moon. Staying boxed into biomimicry isn't always optimal. Neural nets may not be the most optimal way t
Re: (Score:2)
Neural nets may not be the most optimal way to do intelligence
Optimal? It's very obviously not a way to "do intelligence" at all.
Re: (Score:2)
If Karl Benz had focused on biomimicry to build an automobile, our fastest vehicle would be no faster than a cheetah and tire out in 20 seconds.
Nonsense!
Aeroplanes would never carry humans
Bollocks!
and rockets wouldn't even exist
Poppycock!
A running animal's gait can be described in terms of circles. The shape of an airplane's wing was developed from the wings of birds. There are sea creatures which propel themselves by expelling liquid in the direction opposite their intended direction of travel, and insects that literally boil liquid and squirt it out.
While it may suck at first, synthetic will exceed bio
What synthetic does is you can make things any shape you want, not just some shape you've discovered accidentally. This turns out to be a total game-changer.
Re: (Score:2)
What is the obsession with biomimicry?
Simple: The other approaches have soundly failed to produce anything that even remotely resembles intelligence, despite a lot of effort invested. But this one will likely fail too. It is just more complex and less understood, so it will take longer for the failure to become obvious.
Re: (Score:2)
Tell that to the people who died from a microscopic virus.
Re: (Score:2)
Machines die from ... air and water. That's called rust. Not even a virus is required. Again, biological machines are far more robust than mechanical ones with the only exception that higher forms of life can operate in a narrow range of temperatures, pressure, gravity and require a certain amount of oxygen in the air. But then airplanes require a ton of external maintenance to be able to fly in subzero temperatures and they will likely fail if tempera
Mathematically this is not "true". (Score:3)
They showed that a deep neural network requires between five and eight layers of interconnected "neurons" to represent the complexity of one single biological neuron.
The universal approximation theorem states that an arbitrarily wide 3-layer network can approximate any function to arbitrary precision. The rub is in how wide "arbitrary" is. And wide, shallow ANNs seem to be harder to train than deeper, narrower ones. I'd like to know exactly what they mean, but "Neuron" is paywalled. :(
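The width-vs-depth point above can be demonstrated with a random-features version of a single-hidden-layer network, where only the output weights are fit by least squares (an illustration of the universal approximation idea, not the paper's method; the target function and widths are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 200)[:, None]
y = np.sin(2 * x).ravel()              # target function to approximate

def shallow_fit_error(width):
    """One hidden tanh layer with fixed random input weights; only the
    linear output layer is fit. Width plays the role of 'arbitrarily wide'."""
    w = rng.normal(size=(1, width)) * 2.0
    b = rng.normal(size=width)
    h = np.tanh(x @ w + b)             # hidden activations, shape (200, width)
    coef, *_ = np.linalg.lstsq(h, y, rcond=None)
    return float(np.abs(h @ coef - y).max())

errs = {width: shallow_fit_error(width) for width in (5, 50, 500)}
# the max error shrinks as the single hidden layer gets wider
```

This shows the "arbitrarily wide" escape hatch in action: a shallow net gets there eventually, but only by throwing units at the problem.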
Re: (Score:3)
Re: (Score:3)
Thanks! I hope you don't mind an unedited stream of consciousness.
This is a very bio paper. Fortunately, having done some work in biology, I'm somewhat (though not very) practised at reading them. Much harder for a DL person to read than for a bio person, for sure. No mention of an ablation study by name, so it's hard to find where they pare back bits to see what the minimal necessary is.
Also being a bio paper about deep learning, the paper is all bio, and like NONE of the details of the deep learning are in the paper. They're
You Brain Is Not a Computer (Score:2)
Your brain is not a computer. It contains no data, it performs no calculations. That's simply not how brains work.
It is human nature to compare the working of the brain to the most advanced technology of the time. Back in the Bronze Age, that was weaving, ceramics, and distillation. Thus, humans were made from clay and animated by the spirit/breath of God. The fact that the visible vapors coming off of distilled spirits are flammable is where the imagery of the soul as an eternal flame comes from. Think Pe
Re: (Score:3)
Your brain is not a computer. True.
It contains no data False.
it performs no calculations False.
That's simply not how brains work. False.
I am not sure what you are on about, but what you said is easily demonstrated to be false. For example, I have my phone number memorized. That's data, and it is stored in my brain. What would you challenge about this? Do you think it is stored in my toenail? Or that a phone number somehow doesn't qualify as "data?" What in the world else would it be???
I can do ment
Re: (Score:2)
Computers are made of silicon and such, and have no biological components at all.
The point is not what a computer or a brain is made out of (silicon or biological bits). What matters is what the brain does, compared to what a computer does. I think the idea is that most of what the brain does is not what one could class as computing, or at least, that is yet to be demonstrated. The fact that human brains can do recognisable computation does not mean that computation is what the brain does most of the time.
I design a fair bit of analog electronic circuitry. There is no actual computing d
Re: (Score:2)
Re: (Score:2)
Actually, at least living human brains with help from an attached consciousness can usually store data and can perform computations. They are not very good at it, admittedly, and it is clear a lot of translation to other mechanisms is needed to do it at all. And absolutely nobody knows how that "consciousness" thing works, what it is or how it interfaces with a human brain. All we know is that it is essential and that a human brain without it can only do very basic house-keeping stuff.
Physicalists: Stay out
Re: (Score:2)
Re: (Score:2)
Define consciousness. The entire concept of consciousness is a philosophical leftover from Christian dualism. It is a replacement for the concept of a soul to distinguish humans from animals. Which is nonsense from an objective perspective.
You have it ass-backwards. We find consciousness to exist. Hence we cannot define it. We can only define the word for it, and for that I direct you to the many nice definitions you can find online. Incidentally, you are wrong. The recognition of consciousness is far older than the Christian religion and far more universal than their limited ideas.
Re: (Score:2)
Re: (Score:2)
It's not a Turing machine, but I'd urge you to study any of the good papers on how neurons work that compare them to individual computational elements. I've already cited this one:
https://www.cs.cmu.edu/~./epxi... [cmu.edu]
Re: (Score:2)
Re: (Score:2)
Can you cite a single fundamental error? It is, indeed, nearly 70 years old. Microscopes and electronics have improved since then, so there was certainly room for refinement. But the idea of small feedback loops interacting to create quite sophisticated sensory or processing systems seems well established.
Re: (Score:2)
Correcting myself: the paper is nearly 80 years old.
Re: (Score:2)
First of all, there is no evidence at all that our brains utilize anything similar to a loss function with a single output, or backpropagation. These are essential elements of all neural networks; no neural network can train itself without them. Most neural networks have layers too: there is no evidence at all that our brain arrives at conclusions using a similarly layered structure. Oh well, you could say, they do have “neurons”, right? OK, but they are completely different, and if you actually use this comparison for anything, you get into trouble.
https://medium.com/@zabop/why-... [medium.com]
Here's a paper from this century. [hilarispublisher.com]
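To make the parent's terms concrete: "loss function with a single output" and "backpropagation" are the training machinery the comment says has no known biological counterpart. A minimal, purely illustrative sketch (a one-weight linear "network" trained by gradient descent on squared error; for one weight, backpropagation collapses to a single chain-rule step):

```python
# Illustrative sketch only: one weight, squared-error loss, gradient descent.
def train(xs, ys, lr=0.1, steps=100):
    w = 0.0  # the single trainable weight
    for _ in range(steps):
        # gradient of the mean squared error (the scalar loss)
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad  # "backprop" here is just this one chain-rule step
    return w

# Learn y = 3x from three data points
w = train([1.0, 2.0, 3.0], [3.0, 6.0, 9.0])
print(w)  # converges to roughly 3.0
```

Real deep networks apply this same recipe across millions of weights and many layers; the parent's point is that nothing resembling this global error signal has been demonstrated in brains.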
Re: (Score:2)
Great: Where did the original paper I cited say there is no backpropagation, no feedback? The models didn't provide extensive feedback because feedback often occurs at higher layers of mentation, layers too complex to model for such a restricted element. The response to a cold sensation relies extensively on local processing, but the model need not include the higher-level behavioral feedback of "why did someone expose their flesh to cold".
> Most neural networks have layers too: no evidence at all that
"Deep" learning is not deep (Score:2)
It is not "deep" in the things it can do. It is just deep in the net used, which means it can be made generically and trained more. It will still give worse results than a shallow, custom-designed neural net. We are already seeing severe limitations in what training can do, and the "deeper" the net, the worse these get. Even carefully trained networks can be tricked and exhibit surprising behavior. It is time to acknowledge that neural nets are a nifty special-purpose tool with specific limitations, but not
Re: (Score:2)
You might want to check out what OpenAI's GPT-3 can do, or better yet what "OpenAI Codex" can do. Yes, they're still one-trick ponies, but it's rather counter-intuitive how far one-trick can take you, if you pick the right trick.
https://www.youtube.com/watch?... [youtube.com]
GPT-3 is just a powerful type of "language model". It can only do one thing - predict the next word in some text that you feed it. Of course you can then feed the word it predicted/generated back into itself and have it predict/generate the *next* w
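The autoregressive loop the comment describes (predict a word, append it to the context, predict again) can be sketched with a stand-in predictor. The bigram table here is hypothetical; GPT-3 replaces it with a vastly better learned model, but the loop is conceptually the same:

```python
# Toy autoregressive generation: a stand-in "model" fed back into itself.
# The bigram table is hypothetical; a real language model predicts from
# the whole context, not just the last word.
BIGRAMS = {"the": "cat", "cat": "sat", "sat": "on", "on": "the"}

def generate(prompt, n_words):
    words = prompt.split()
    for _ in range(n_words):
        next_word = BIGRAMS.get(words[-1], "the")  # predict the next word
        words.append(next_word)                    # feed it back as context
    return " ".join(words)

print(generate("the", 4))  # "the cat sat on the"
```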
Re: (Score:2)
You might want to check out what OpenAI's GPT-3 can do, or better yet what "OpenAI Codex" can do. Yes, they're still one-trick ponies, but it's rather counter-intuitive how far one-trick can take you, if you pick the right trick.
Not really. With some more relevant background (which I have) these are nice tricks, but not that impressive. For example, the average entropy of an English word in a text is just around 2.5 bits, so guessing the next word is not actually that hard. Of course the guessing errors pile up and eventually the whole thing descends into chaos and nonsense, because GPT-3 has zero understanding of what it does and zero understanding of the real world. Scaling up "dumb" does not remove the dumb. It just makes it a b
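The ~2.5 bits figure is the parent's claim about entropy per word given context; a simpler, related quantity (the Shannon entropy of a bare word-frequency distribution, which ignores context and so comes out higher) can be sketched like this:

```python
# Sketch: Shannon entropy of a unigram word distribution, in bits per word.
# Context-conditioned entropy (the parent's ~2.5-bit figure) is lower,
# because preceding words narrow down the choices.
import math
from collections import Counter

def unigram_entropy_bits(text):
    counts = Counter(text.split())
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Uniform over 4 distinct words -> exactly 2 bits per word
print(unigram_entropy_bits("a b c d"))  # 2.0
```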
Re: (Score:2)
Our own intelligence is built out of dumb components .. ultimately neurons, biochemicals and the laws of physics.
Your argument is like saying the laws of physics can never produce intelligence if you scale them up. But they do.
You could also argue that humans doing "intelligent" stuff like designing rockets, or whatever, makes it hard to see that we're really just a mass of dumb old chemicals. And you'd be right.
You either didn't watch the OpenAI Codex demo video I linked, or you didn't understand what you w
Re: (Score:2)
Our own intelligence is built out of dumb components .. ultimately neurons, biochemicals and the laws of physics.
Pure conjecture. And not even a very convincing one. Physicalism is religion, not Science. Since it is conjecture, any conclusions drawn from it are also conjecture.
Here is a hint: You cannot use an argument by elimination unless you have a complete and accurate model of the system giving you _all_ possibilities. Currently known Physics is _known_ to be incomplete and that makes this approach invalid as a proof technique. This is really very basic.
As to your invalid AdHominem-type argument, I have been foll
Re: (Score:2)
> Pure conjecture. And not even a very convincing one.
The last 50 years of neuroscience would beg to differ. The biochemical operation of neurons at this point is understood in fairly exquisite detail, and it's just good old-fashioned chemistry.
I've no idea what you mean by the laws of Physics being incomplete ... no doubt there are new discoveries to be had, but nonetheless the behavior of the universe, and everything in it, is *perfectly* predicted by a combination of quantum theory and the theory o
Gordon Moore wants to know (Score:2)
ML= layer complexity DeepMind NN 5X-8X == HumanNeuron what if AI = 10X then if 100X==MATRIX complexity does1000X approach universe understanding exigencies?
Assuming 15 dendrites and one terminal axon button (Score:2)
Could the information space be defined as a probable number of discrete states, between 2 to the 16th power and 16 factorial? (We don't actually know how order-dependent these states are.)
I don't know how else we'd measure this. In aggregate, I guess this could be considered the "Neuron" layer of information processing.
Am I wrong on this?
You know they _modelled_ the neurons, right? (Score:2)
They developed what they believe is a reasonable model of a spiking neuron (and we don't know the coding of spike trains, by the way), and compared it to a DNN (continuous, and actually not as well understood as you'd like to think). From that they arrive at a grand conclusion about the relative complexity of real neurons and DNNs.
Not only is that comparing apples and oranges but the apples aren't even real. It's like comparing a painting of what you think apples are like to real oranges ... and then concluding that
Re: (Score:3)
Re: (Score:2)
Definitely on a massive level of complexity. So far beyond current computers or current software that these seem to be stick figures scribbled on a cave wall in animal blood.
Re: (Score:2)
Re: (Score:2)
It’s not really complexity, in my opinion, so much as chaos. However bad that unpaid intern's undocumented spaghetti code is, nothing compares to the sheer chaos of evolution. Just as with ANNs, trying to see “how” or “why” they work by extracting useful rules or algorithms is practically impossible in many cases; here it is the same problem, but far, far worse.
Depends on the context. Since this "chaos" works and creates a result, I would call it "complexity". How much randomization is in there is unclear at this time.
Re: (Score:2)
Re: (Score:2)
Yeah, I think a few housefly brains networked together would have more 'intelligence' than the so-called 'AI' they keep trotting out.
Friend of mine in robotics research thinks so too. Apparently a single housefly or ant or bee would already do much better than "AI" does these days, and at a massively lower power and volume cost.
Re: (Score:2)
Because we're still at the 'poke it with a stick' phase of inquiry.
We haven't gone much farther than 'brains are gooey'.
Re: (Score:2)
Because we're still at the 'poke it with a stick' phase of inquiry.
We haven't gone much farther than 'brains are gooey'.
And that is exactly it. We know almost nothing at this point.
Re: (Score:2)
I believe this is called the "Steve Irwin" stage of scientific understanding.
Crikey! It's an angry one! I'm gonna poke it with a stick....
Re: (Score:3)
That ship has sailed. People use the term "Artificial Intelligence" in a very, VERY broad way that covers many kinds of algorithm and heuristic. Whenever a computer does something that would, otherwise, require intelligence to do, we call it "artificial intelligence."
Chess is a perfect example. Because, apart from computers, only humans play chess. Dogs can't do it. Rocks sure can't do it. Only humans can do it, and in fact the smart ones are better at it than the not-so-smart ones. So, our computer
Re: (Score:2)
People use the term "Artificial Intelligence" in a very, VERY broad way that covers many kinds of algorithm and heuristic.
Exactly... I once took an AI class where the professor quoted someone, something like this: "What we call AI doing something today is merely 'the algorithm' for doing that thing tomorrow".
Think of how A* search appears in many AI textbooks... but really, it's just an efficient algorithm for searching graphs (e.g. finding the best route on your map). The "intelligence" isn't in the algorithm but in the people who created it.
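The point is easy to see in code: the textbook A* the parent mentions is a handful of lines of bookkeeping around a priority queue ordered by f = g (cost so far) + h (heuristic estimate to the goal). A minimal sketch, here exercised on a trivial 1-D walk:

```python
# Minimal textbook A*: priority queue ordered by f = g + h.
import heapq

def a_star(start, goal, neighbors, h):
    open_heap = [(h(start), 0, start, [start])]  # (f, g, node, path)
    best_g = {}
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return path
        if node in best_g and best_g[node] <= g:
            continue  # already reached this node at least as cheaply
        best_g[node] = g
        for nxt, cost in neighbors(node):
            heapq.heappush(open_heap,
                           (g + cost + h(nxt), g + cost, nxt, path + [nxt]))
    return None

# Walk from 0 to 3 on the integer line, unit-cost steps, |goal - n| heuristic
path = a_star(0, 3, lambda n: [(n - 1, 1), (n + 1, 1)], lambda n: abs(3 - n))
print(path)  # [0, 1, 2, 3]
```

Nothing in there "understands" routes; the cleverness is entirely in the humans who proved the heuristic keeps the search both fast and optimal.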