Just How Computationally Complex Is a Single Brain Neuron? (quantamagazine.org)

Long-time Slashdot reader Artem S. Tashkinov quotes Quanta magazine: Today, the most powerful artificial intelligence systems employ a type of machine learning called deep learning. Their algorithms learn by processing massive amounts of data through hidden layers of interconnected nodes, referred to as deep neural networks. As their name suggests, deep neural networks were inspired by the real neural networks in the brain, with the nodes modeled after real neurons — or, at least, after what neuroscientists knew about neurons back in the 1950s, when an influential neuron model called the perceptron was born. Since then, our understanding of the computational complexity of single neurons has dramatically expanded, so biological neurons are known to be more complex than artificial ones. But by how much?

To find out, David Beniaguev, Idan Segev and Michael London, all at the Hebrew University of Jerusalem, trained an artificial deep neural network to mimic the computations of a simulated biological neuron. They showed that a deep neural network requires between five and eight layers of interconnected "neurons" to represent the complexity of one single biological neuron. Even the authors did not anticipate such complexity. "I thought it would be simpler and smaller," said Beniaguev. He expected that three or four layers would be enough to capture the computations performed within the cell.
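A toy sketch of the setup the study describes, heavily simplified for illustration: the real work trained a deep (temporal convolutional) network on a detailed pyramidal-cell simulation, whereas here a leaky integrator stands in for the biological neuron and a random-feature network with a least-squares readout stands in for the trained ANN. Every constant below is an arbitrary choice, not a value from the paper.

```python
# Hypothetical, drastically simplified stand-in for the experiment:
# simulate a "neuron" (leaky integrator driven by random input current),
# then fit a small one-hidden-layer artificial network to reproduce its
# input-output mapping.
import numpy as np

rng = np.random.default_rng(0)

# 1. Simulate the "biological" neuron: leaky integration of input current.
T, tau, dt = 2000, 20.0, 1.0
current = rng.normal(size=T)
v = np.zeros(T)
for t in range(1, T):
    v[t] = v[t - 1] + dt * (-v[t - 1] / tau + current[t])

# 2. Dataset: a window of recent input -> current membrane potential.
window = 50
X = np.array([current[t - window:t] for t in range(window, T)])
y = v[window:]

# 3. "Artificial neuron" model: random ReLU hidden layer plus a
#    linear least-squares readout (a stand-in for gradient training).
H = 200
W = rng.normal(size=(window, H)) / np.sqrt(window)
hidden = np.maximum(X @ W, 0.0)
hidden = np.column_stack([hidden, np.ones(len(X))])  # bias column
coef, *_ = np.linalg.lstsq(hidden, y, rcond=None)
pred = hidden @ coef

mse = float(np.mean((pred - y) ** 2))
print(f"fit MSE: {mse:.4f}  (signal variance: {float(y.var()):.4f})")
```

Because this toy neuron is nearly linear in its input history, a shallow network fits it easily; the point of the paper is that a realistic dendritic tree is not like this at all, and needs five to eight layers.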

Timothy Lillicrap, who designs decision-making algorithms at the Google-owned AI company DeepMind, said the new result suggests that it might be necessary to rethink the old tradition of loosely comparing a neuron in the brain to a neuron in the context of machine learning.

The paper's authors are now calling for changes in state-of-the-art deep network architecture in AI "to make it closer to how the brain works."


  • Dendrite growth (Score:5, Insightful)

    by phantomfive ( 622387 ) on Saturday September 04, 2021 @11:44AM (#61763059) Journal

    That is just activation, it doesn't take into consideration how the neurons grow connections to each other.

    • Re:Dendrite growth (Score:5, Informative)

      by The Real Dr John ( 716876 ) on Saturday September 04, 2021 @04:17PM (#61763741) Homepage

      I am a neuroscientist and I can assure you that we have barely scratched the surface. Consider this: we still don't know how memory works. We have lots of intriguing theories, and plenty of data, but the key underlying molecular and cellular mechanisms remain unknown. Then take consciousness as another example. No clue there. There are many fascinating theories, one of which may eventually turn out to be on the right track, but right now it is a complete unknown. We really don't know how to explain either of these basic brain functions. So yes, we have barely scratched the surface, and my guess is they will need more than 8 layers once they have a better understanding of how things actually work. Then consider how many billions of neurons there are, and how many trillions of synapses, and you start to get an idea of the complexity we are trying to deal with.

      • Consider this, we still don't know how memory works. We have lots of intriguing theories, and plenty of data, but the key underlying molecular and cellular mechanisms remain unknown.

        Interestingly, that is one thing computer neural networks don't do very well at all. It's not an easy problem, but it's important. Figuring out how memory works would be a major step forward in AI, if not the major step forward.

      • Memory, consciousness,... What about preferences, pain, pleasure, fear, worry, hope? I always said (naively, for sure) that if they created a machine that could feel pain, I'd be able to make it think, or at least react. A machine doesn't care about its continued existence, nor about the continued existence of others of its kind...
  • Yeah, I'm being mean to all the morons who just a couple years ago were talking about uploading your consciousness to a machine because "the brain's...".

    I mocked you then and I'm mocking you now.
    • Indeed. (Score:4, Interesting)

      by Brain-Fu ( 1274756 ) on Saturday September 04, 2021 @12:06PM (#61763103) Homepage Journal

      Science might model me as an empty mechanism responding predictably to stimulus, but from "in here" I can say with confidence that I experience qualia [wikipedia.org]. Whether or not our scientific models have room for authentic subjective experience, that is my "level zero" reality, and all the scientific understanding I do have came to me by means of it.

      Do computers have qualia? If we replace the perceptron with something more sophisticated, will it have qualia then? If my brain is mapped, neuron-by-neuron, into a computational model which passes any Turing test we can invent, including adamantly insisting that it has qualia, will it actually have qualia?

      I wish these questions were answerable. I think the prospect of digital consciousness is wildly tantalizing. But there isn't any objective way of detecting qualia in anything. For all I know, I am the only person in the universe who actually experiences it, and all others are just bio-mechanical zombies that talk about it because they were programmed to.

      • If my brain is mapped, neuron-by-neuron, into a computational model...

        First I might recommend figuring out what neurons **do** as well as how.

      • For all I know, I am the only person in the universe who actually experiences it, and all others are just bio-mechanical zombies that talk about it because they were programmed to.

        I asked myself that once, and after mulling it over conversationally in my head and after determining all 8 billion humans were just meat zombies, decided to go just one further.

      • Science might model me as an empty mechanism responding predictably to stimulus, but from "in here" I can say with confidence that I experience qualia.

        So how do you tell the difference between experiencing it, and being programmed to think you're experiencing it?

        • To answer that, I must draw an important distinction.

          If I am looking another person in the eye, and that person tells me "I am experiencing this or that," I have no way of knowing whether that person is actually having an experience, or is just making noises because they are a zombie that has been biologically "programmed" to say "I am experiencing this.".

          That is to say, there is no way to test for "consciousness" in another person.

          When I am reflecting on my own thoughts, the story is different. My experie

          • I have no way of knowing whether that person is actually having an experience, or is just making noises because they are a zombie that has been biologically "programmed" to say "I am experiencing this."

            Why are you so sure that there's a difference?

            • with a computer. Nobody will know the difference. Arthur: I will! Mice: No you won't, you'll be programmed not to.

              Adam's nailed it pretty well.

              The big thing is that it does not matter if computers experience consciousness in our sense of the word. It only matters that they behave intelligently. And in any case our own sense of conscious control is rather vague.

          • The best I can do is make some practical inferences like...when I experience pain I say ouch, and when other people receive injuries (that would hurt me) they also say ouch, so probably they are also having experiences.

            Maybe this is the best you can think of but it’s not the best we can do at all. You assume that your brain isn’t subject to physics and that physics isn’t the same everywhere. With maximal understanding you could actually see the standard model of physics completely enact those sensations, responses, and emotions, in real time too if your technology and sensors were good enough. Thus it is empirically possible to export that to other systems using your own validation for those systems.

            • I think you are looking for a disagreement where there isn't one.

              For example, I don't assume "my brain isn't subject to physics" nor that "that physics isn't the same everywhere." Nor am I asserting that "meat computers are magic". The entire thrust of your post goes beyond the much simpler point that I was making.

              When our sense organs are stimulated, neurons fire. That's a readily observed fact, as are several other related facts about the brain, and I am not challenging any of them. You could maybe ho

      • I can say with confidence that I experience qualia

        You can say that, but you can't prove it. Which makes your claim as valid as an AI entity stating the same.

      • by gweihir ( 88907 )

        Well, current Physics has no place for qualia. It is just not part of the theory at all, hence purely physical objects cannot have it according to current theory. There are some quasi-mystical bullshit attempts to try to explain this away, but they are just nonsense. The fact of the matter is qualia is real and we have absolutely no explanation for it and no clue what it is or how it works.

        As to computers, computers cannot have qualia (again, there is no mechanism for it), unless qualia is completely passiv

      • Science might model me as an empty mechanism responding predictably to stimulus, but from "in here" I can say with confidence that I experience qualia [wikipedia.org].

        That's exactly what a philosophical zombie would be programmed to say!

    • by gweihir ( 88907 )

      Yeah, I'm being mean to all the morons who just a couple years ago were talking about uploading your consciousness to a machine because "the brain's...".
      I mocked you then and I'm mocking you now.

      I am with you in that. No idea how these people ever got that demented idea, but looking at some other things currently happening it really is true that the human capacity for stupidity is unlimited.

  • by SlithyMagister ( 822218 ) on Saturday September 04, 2021 @11:54AM (#61763077)
    I've known people who had many neurons (presumably) but very little coherent thought
  • Every neuron is a little compute device
    • "Every neuron is a little compute device"

      Yes, each one made up of nucleic acids, carbohydrates, lipids, about 40 million proteins and 100 trillion atoms.

    • The complexity of each neuron is more akin to a large petrochemical plant.

      • by gweihir ( 88907 )

        The complexity of each neuron is more akin to a large petrochemical plant.

        That is underestimating it rather grossly....

        • Overestimating it, I think. If you'd like to go back to the original papers, I'll recommend the often cited paper by McCulloch and Pitts, in 1943, about the computational complexity of neurons.

          https://www.cs.cmu.edu/~./epxi... [cmu.edu]

          It's stiff reading, but it's the foundation of a lot of later work in neurophysiology and, unsurprisingly, in AI. The bits about local feedback loops, and understanding them, are very insightful. The grandiose philosophical claims at the end are very insightful as well.

  • by kot-begemot-uk ( 6104030 ) on Saturday September 04, 2021 @12:04PM (#61763101) Homepage
    One of the most interesting properties of neurons is long term changes. They "rewire" themselves - connections to other neurons change. There are other changes at biochemical level as well. So there is even more complexity involved. All of this goes to show that we are still decades away from being able to mimic moderately complex brains like the ones in birds and larger mammals.
    • Re: (Score:2, Interesting)

      by burtosis ( 1124179 )
      Neurons have extensive chemical signal pathways [gatech.edu] as well which aren’t electrical potential pathways at all. Looks like we’re going to need that new architecture then another 5-8 layers of simulation before we get close to a single neuron. Then it’s a simple measure of just running about 80-90 billion neurons in parallel. Should be a snap!
      • by SpinyNorman ( 33776 ) on Saturday September 04, 2021 @01:53PM (#61763423)

        The largest artificial neural net models are actually getting quite close to human "brain size" in terms of complexity.

        As you say, the human brain has around 80-90B neurons, and the cortex alone (main part responsible for intelligence) has around 125T synapses (inter-neuron connections).

        The ANN equivalent to synapses are its connection weights, aka parameters, and these are largely what limit their size - the amount of data and compute needed to train them. One of the largest models from a US company is OpenAI's GPT-3, which has around 175B parameters (synapses), and China's similar Wudao 2.0 has about 10x that, so about 1.75T "synapses"!! These things are growing very fast! The "AlexNet" model that started the "deep learning" revolution in 2012 had only 62M parameters, so that's a growth factor of over 10,000 in less than a decade. We (well, Google, or OpenAI) could be running human-like 100T parameter models in a year or two assuming this keeps up.
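A quick check of the comment's back-of-the-envelope arithmetic. The parameter and synapse counts below are the figures quoted in the comment itself (commonly cited, but rough), not independently verified numbers.

```python
# Growth factors implied by the parameter counts quoted above.
alexnet = 62_000_000                    # AlexNet, 2012
gpt3 = 175_000_000_000                  # GPT-3, 2020
wudao2 = 1_750_000_000_000              # Wudao 2.0, 2021 (as reported)
cortex_synapses = 125_000_000_000_000   # human cortex, ~125T synapses

print(f"AlexNet -> GPT-3:    {gpt3 / alexnet:,.0f}x")
print(f"AlexNet -> Wudao 2:  {wudao2 / alexnet:,.0f}x")
print(f"Wudao 2 -> cortex:   {cortex_synapses / wudao2:,.0f}x")
```

So AlexNet to Wudao 2.0 is roughly a 28,000x jump in nine years, and the largest models are within about two orders of magnitude of the cortex's synapse count.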

        The real thing stopping us from simulating a human brain, or building a human-inspired AI of similar complexity, is that we simply don't know enough about how the brain is wired (what the detailed architecture is) and how it learns. If compute were the only issue, we should at least be building rat and bird brains by now, but of course we have no better idea how those work either.

        I tend to think that when we do figure out how to design brain-inspired intelligent architectures, we'll be able to do it at a much higher and more efficient level than synapse-by-synapse simulation, maybe at the functional (vs synapse) level of simulating entire cortical mini-columns, or perhaps more likely the first intelligent architectures will be extensions of things like GPT-3 that really aren't based on the human brain at all (other than being based on ANNs).

        • The thing is we could probably do a rat brain. Run the body's systems, look for food, etc. From the outside you might not be able to tell, but on the inside it would still be a computer mimicking what it saw a rat do. It wouldn't have a sex "drive", it would just mimic one.
          • Serious question: Do you think your sex drive is any different than one that could be built into a robot (should anyone be inclined to do so)?

            Our sex drive is purely mechanical and genetically determined. You see curves and jiggle, you get radar lock and a boner!

            How you feel/think about what's going on is another matter - but that's just your brain after-the-fact coming up with explanations for what your body is doing. A robot with AGI and a hardwired (cf genetically determined) sex drive might similarly re

        • by burtosis ( 1124179 ) on Saturday September 04, 2021 @05:36PM (#61763947)

          The ANN equivalent to synapses are its connection weights, aka parameters, and these are largely what limit their size

          Yes, but it’s not clear that we are properly capturing all of the important function from them and aren’t missing a thing or two.

          The real thing stopping us from simulating a human brain, or building a human-inspired AI of similar complexity is that we simply don't know enough about how the brain is wired (what the detailed architecture is) and how it learns

          Absolutely, it’s been fairly clear for perhaps 20 years at least that we would be limited by our understanding and approach, not by hardware. The big focus lately has been to throw the most computing power or the most detailed and accurate sensors (for physical applications anyhow) at the problem, when it’s pretty clear that actual living creatures make do with rather poor sensors, incomplete information, and often (for the simple ones) even a lack of raw computing power. It’s my opinion that it’s hubris to think we can reverse engineer over three billion years of evolution in only about a hundred years of computing technology, but it’s also hubris to think that somehow meat computers are magically special and can’t ever be artificially copied in any way and retain the core essence of what it means to be sentient. 5, 50, or 500 years - it will happen if we keep progressing.

          • Different hardware. Different constraints. Engineered, not evolved.

            Sure, we learn from nature, but we will engineer our own thing in our own way.

            Artificial neurons are quite different from natural ones. Much simpler. Much faster. And there is much more to AI than just artificial neurons.

          • Agreed.

            It'll be a long time before we're able not only to build artificial brains, but also to make them even remotely as efficient as a bio-brain. Compare that to how long we've been building CPUs, and yet asynchronous data-flow designs are still a research project.

            There are similar attempts underway to build spiking neural net brain models, maybe partly based on the same asynchronous/dataflow (vs clocked) efficiency goals, although it rather smacks of cargo-cult design (let's add spikes - that'll make it intell

    • Another hugely interesting property is the timing aspect. Synapses are known to respond not just to the presence of action potential but the inter-spike timings.
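The timing sensitivity this comment points at can be sketched with the classic pair-based spike-timing-dependent plasticity (STDP) rule, a standard textbook model (not anything from the article); the amplitudes and time constant below are illustrative, not measured values.

```python
# Pair-based STDP: the synaptic weight change depends on the relative
# timing of pre- and postsynaptic spikes, not just on their presence.
import math

A_PLUS, A_MINUS = 0.01, 0.012   # potentiation / depression amplitudes
TAU = 20.0                      # decay time constant, ms

def stdp(dt_ms: float) -> float:
    """Weight change for a post-minus-pre spike-time difference dt_ms."""
    if dt_ms > 0:   # pre fires before post -> potentiation
        return A_PLUS * math.exp(-dt_ms / TAU)
    else:           # post fires before (or with) pre -> depression
        return -A_MINUS * math.exp(dt_ms / TAU)

for dt in (5.0, 20.0, -5.0, -20.0):
    print(f"dt = {dt:+.0f} ms -> dw = {stdp(dt):+.5f}")
```

The sign and magnitude of the update flip with the spike order and shrink as the spikes move apart in time, which is exactly the kind of information a timing-blind rate model throws away.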

    • Do you have an electronics background and understand how an FPGA (Field Programmable Gate Array) works? You download the 'configuration' you need into the device so the inputs, outputs, and internal processing works the way you want it to work. Now imagine a completely linear electronic version of that: instead of digital logic gates and devices inside the device, it's all linear electronic circuits like op-amps and discrete components, and it's all configurable in whatever ways you need it to be -- but al
    • by gweihir ( 88907 )

      This is not intended to account for everything. It is intended to show things are a lot more complex than expected. Well, expected by morons, but you find a lot of these in the sciences, people very capable in a narrow sub-specialization and completely clueless outside of it.

    • All of this goes to show that we are still decades away from being able to mimic moderately complex brains like the ones in birds and larger mammals.

      Yah, I'll say... at least, but moderately complex? Our sense of scale is waaaay off on this. I'll tell you right now, this is going to be another flying car thing where people just don't grasp what needs to happen between now and some point in the future we could definitely get to, but with enough time. We're not at the point we can call the mind of a predatory insect or a jumping spider "simple" yet. We've just barely scratched the surface of how living things do any kind of pattern recognition, and th

  • What is the obsession with biomimicry? I mean, biological systems suck. They have many constraints. Evolution is a haphazard deal. If Karl Benz had focused on biomimicry to build an automobile, our fastest vehicle would be no faster than a cheetah and tire out in 20 seconds. Aeroplanes would never carry humans and rockets wouldn't even exist. While it may suck at first, synthetic will exceed bio. Buildings would be made out of calcium instead of steel and never exceed a couple of storeys. We sh

    • What is the obsession with biomimicry? I mean, biological systems suck.

      Because the brain still does things computers can't do.

      • by gweihir ( 88907 )

        What is the obsession with biomimicry? I mean, biological systems suck.

        Because the brain still does things computers can't do.

        Even worse: A smart human person (and no, we do _not_ know whether it is just the brain; that is conjecture) can do things for which we do not even have a credible theory explaining how they could be done with computations, given the limits of this physical universe.

          A smart human person (and no, we do _not_ know whether it is just the brain; that is conjecture) can do things for which we do not even have a credible theory explaining how they could be done with computations, given the limits of this physical universe.

          Which sorts of things are you thinking of here? I like seeing these kinds of examples.

    • Are you an idiot or what? Airplanes were based on birds. Full biomimicry. We can only optimize AFTER we understand the mechanism by which intelligence emerges. What you say is valid only if we fully understand the laws of nature concerning said field. Which means it's currently invalid for neural nets, because we don't know how to make an intelligent machine. Our only hope now is to mimic the human brain. Once we have our first prototype based on biomimicry, we can try to optimize.
      • What about automobiles? We abandoned walking propulsion because it was too hard, and then did a hell of a lot better. Only recently are we able to make walking machines. Airplanes -- we ditched trying to mimic flapping and feathers and did better. And rockets, ..even though jellyfish do it .. I am sure humans discovered the reaction principle on their own outside of biology .. and that has taken us to the moon. Staying boxed into biomimicry isn't always optimal. Neural nets may not be the most optimal way t

        • by narcc ( 412956 )

          Neural nets may not be the most optimal way to do intelligence

          Optimal? It's very obviously not a way to "do intelligence" at all.

    • If Karl Benz had focused on biomimicry to build an automobile, our fastest vehicle would be no faster than a cheetah and tire out in 20 seconds.

      Nonsense!

      Aeroplanes would never carry humans

      Bollocks!

      and rockets wouldn't even exist

      Poppycock!

      A running animal's gait can be described in terms of circles. The shape of an airplane's wing was developed from the wings of birds. There are sea creatures which propel themselves by expelling liquid in the direction opposite their intended direction of travel, and insects that literally boil liquid and squirt it out.

      While it may suck at first, synthetic will exceed bio

      What synthetic does is you can make things any shape you want, not just some shape you've discovered accidentally. This turns out to be a total game-changer.

    • by gweihir ( 88907 )

      What is the obsession with biomimicry?

      Simple: The other approaches have soundly failed to produce anything that even remotely resembles intelligence, despite a lot of effort invested. But this one will likely fail too. It is just more complex and less understood, so it will take longer for the failure to become obvious.

  • by serviscope_minor ( 664417 ) on Saturday September 04, 2021 @12:54PM (#61763249) Journal

    They showed that a deep neural network requires between five and eight layers of interconnected "neurons" to represent the complexity of one single biological neuron.

    The universal approximation theorem states that a network with a single hidden layer (three layers, counting input and output) of arbitrary width can approximate any continuous function to arbitrary precision. The rub is in how wide "arbitrary" is. And wide, shallow ANNs seem harder to train than deeper, narrower ones. I'd like to know exactly what they mean, but "Neuron" is paywalled. :(
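The width side of that trade-off is easy to demonstrate. In the sketch below, a single-hidden-layer "network" (random ReLU hinge features with a least-squares readout, standing in for a trained shallow net) approximates sin(x); the theorem guarantees the fit can be made arbitrarily good, but only by widening the layer.

```python
# Universal-approximation-in-practice: a shallow net's error vs width.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 2.0 * np.pi, 400)[:, None]
y = np.sin(x[:, 0])

def shallow_fit_error(width: int) -> float:
    # Random slopes (+/-1) and offsets give a piecewise-linear ReLU basis.
    w = rng.choice([-1.0, 1.0], size=(1, width))
    b = rng.uniform(-2.0 * np.pi, 2.0 * np.pi, size=width)
    hidden = np.maximum(x @ w + b, 0.0)
    hidden = np.column_stack([hidden, np.ones(len(x))])  # bias term
    coef, *_ = np.linalg.lstsq(hidden, y, rcond=None)
    return float(np.sqrt(np.mean((hidden @ coef - y) ** 2)))

for width in (5, 50, 500):
    print(f"width {width:4d}: RMS error {shallow_fit_error(width):.4f}")
```

The error shrinks steadily with width, which is the rub the comment mentions: one layer suffices in principle, but the layer may have to be enormous, and in practice deep narrow nets train better than wide shallow ones.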

    • Here’s [biorxiv.org] the pre-print for free.
      • Thanks! I hope you don't mind an unedited stream of consciousness.

        This is a very bio paper. Fortunately, having done some work in biology, I'm a bit practised (though not very) at reading them. Much harder for a DL person to read than a bio person, for sure. No mention of an ablation study by name, so it's hard to find where they pare back bits to see what the minimal necessary architecture is.

        Also being a bio paper about deep learning, the paper is all bio, and like NONE of the details of the deep learning are in the paper. They're

  • Your brain is not a computer. It contains no data, it performs no calculations. That's simply not how brains work.

    It is human nature to compare the working of the brain to the most advanced technology of the time. Back in the Bronze Age, that was weaving, ceramics, and distillation. Thus, humans were made from clay and animated by the spirit/breath of God. The fact that the visible vapors coming off of distilled spirits are flammable is where the imagery of the soul as an eternal flame comes from. Think Pe

    • Your brain is not a computer. True.
      It contains no data False.
      it performs no calculations False.
      That's simply not how brains work. False.

      I am not sure what you are on about, but what you said is easily demonstrated to be false. For example, I have my phone number memorized. That's data, and it is stored in my brain. What would you challenge about this? Do you think it is stored in my toenail? Or that a phone number somehow doesn't qualify as "data?" What in the world else would it be???

      I can do ment

      • Computers are made of silicon and such, and have no biological components at all.

        The point is not what a computer or a brain is made out of (silicon or biological bits). What matters is what the brain does, compared to what a computer does. I think the idea is that most of what the brain does is not what one could class as computing, or at least, that is yet to be demonstrated. The fact that human brains can do recognisable computation does not mean that computation is what the brain does most of the time.

        I design a fair bit of analog electronic circuitry. There is no actual computing d

    • by gweihir ( 88907 )

      Actually, at least living human brains with help from an attached consciousness can usually store data and can perform computations. They are not very good at it, admittedly, and it is clear a lot of translation to other mechanisms is needed to do it at all. And absolutely nobody knows how that "consciousness" thing works, what it is or how it interfaces with a human brain. All we know is that it is essential and that a human brain without it can only do very basic house-keeping stuff.

      Physicalists: Stay out

      • Define consciousness. The entire concept of consciousness is a philosophical leftover from Christian dualism. It is a replacement for the concept of a soul to distinguish humans from animals. Which is nonsense from an objective perspective.
        • by gweihir ( 88907 )

          Define consciousness. The entire concept of consciousness is a philosophical leftover from Christian dualism. It is a replacement for the concept of a soul to distinguish humans from animals. Which is nonsense from an objective perspective.

          You have it ass-backwards. We find consciousness to exist. Hence we cannot define it. We can only define the word for it and for that I direct you at many nice definitions you can find online. Incidentally, you are wrong. The recognition of consciousness is far older than the Christian religion and far more universal than their limited ideas.

          • The idea of consciousness predates Christianity. Your conceptualization of consciousness is directly derived from Christian thought. "Cogito ergo sum" is a statement from a faithful, fervent Christian who was an affirmed dualist. It is a sentiment you literally just paraphrased. All scientific investigation has found the perception of consciousness to be an illusion. One that is quite likely culturally determined.
    • It's not a Turing machine, but I'd urge you to study any of the good papers on how neurons work that compare them to individual computational elements. I've already cited this one:

              https://www.cs.cmu.edu/~./epxi... [cmu.edu]

      • Well, for starters, both the authors listed on that paper died in 1969. And the paper itself was published in 1943. So, not the most up-to-date research. That paper is famously wrong about the fundamental functioning of neurons.
        • Can you cite a single fundamental error? It is, indeed, nearly 70 years old. Microscopes and electronics have improved since then, so there was certainly room for refinement. But the ideas of small feedback loops interacting to create quite sophisticated sensory or processing systems seems well established.

          • Correcting myself, the paper was nearly 80 years ago.

          • First of all, there is no evidence at all that our brains utilize anything similar to a loss function with a single output, nor backpropagation. These are the essential elements of all neural networks; no neural network can train itself without them. Most neural networks have layers too: no evidence at all that our brain arrives at conclusions using a similarly layered structure. Oh well, you could say, they do have “neurons”, right? Ok, but they are completely different, and if you actually use this comparison for anything, you get into trouble.

            https://medium.com/@zabop/why-... [medium.com]

            Here's a paper from this century. [hilarispublisher.com]

            • Great: Where did the original paper I cited say there is no backpropagation, no feedback? The models didn't provide extensive feedback because feedback often occurs at higher layers of mentation, layers too complex to model for such a restricted element. The response to a cold sensation relies extensively on local processing, but the model need not include the higher level behavioral feedback of "why did someone expose their flesh to cold".

              > Most neural networks have layers too: no evidence at all that

  • It is not "deep" in the things it can do. It is just deep in the net used which means it can be made generically and trained more. It will still give worse results than a shallow, custom-designed neural net. We are already seeing severe limitations in what training can do, and the "deeper" the net, the worse these get. Even carefully trained networks can be tricked and exhibit surprising behavior. It is time to acknowledge that neural nets are a nifty special-purpose tool with specific limitations, but not

    • You might want to check out what OpenAI's GPT-3 can do, or better yet what "OpenAI Codex" can do. Yes, they're still one-trick ponies, but it's rather counter-intuitive how far one-trick can take you, if you pick the right trick.

      https://www.youtube.com/watch?... [youtube.com]

      GPT-3 is just a powerful type of "language model". It can only do one thing - predict the next word in some text that you feed it. Of course you can then feed the word it predicted/generated back into itself and have it predict/generate the *next* w
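The autoregressive loop described above can be sketched in a few lines. A character-level bigram table stands in for the (vastly larger) language model here; the corpus and all names are made up for illustration, but the feed-the-prediction-back-in structure is the same one GPT-style models use.

```python
# A next-token predictor generating whole sequences by feeding its own
# output back in. The "model" is just a table of observed successors.
from collections import defaultdict
import random

corpus = "the cat sat on the mat and the cat ran to the man "

# "Train": record which character follows which in the corpus.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def generate(seed: str, n: int, rng: random.Random) -> str:
    out = seed
    for _ in range(n):
        nxt = rng.choice(follows[out[-1]])  # predict the next token...
        out += nxt                          # ...and feed it back in
    return out

print(generate("th", 40, random.Random(0)))
```

Each step only ever predicts one character, yet the loop produces arbitrarily long (if here mostly nonsensical) text, which is the "one trick" the comment is pointing at.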

      • by gweihir ( 88907 )

        You might want to check out what OpenAI's GPT-3 can do, or better yet what "OpenAI Codex" can do. Yes, they're still one-trick ponies, but it's rather counter-intuitive how far one-trick can take you, if you pick the right trick.

        Not really. With some more relevant background (which I have) these are nice tricks, but not that impressive. For example, the average entropy of an English word in a text is just around 2.5 bits, so guessing the next word is not actually that hard. Of course the guessing errors pile up and eventually the whole thing descends into chaos and nonsense, because GPT-3 has zero understanding of what it does and zero understanding of the real world. Scaling up "dumb" does not remove the dumb. It just makes it a b

        • Our own intelligence is built out of dumb components .. ultimately neurons, biochemicals and the laws of physics.

          Your argument is like saying the laws of physics can never produce intelligence if you scale them up. But they do.

          You could also argue that humans doing "intelligent" stuff like designing rockets, or whatever, makes it hard to see that we're really just a mass of dumb old chemicals. And you'd be right.

          You either didn't watch the OpenAI Codex demo video I linked, or you didn't understand what you w

          • by gweihir ( 88907 )

            Our own intelligence is built out of dumb components .. ultimately neurons, boichemicals and the laws of physics.

            Pure conjecture. And not even a very convincing one. Physicalism is religion, not Science. Since it is conjecture, any conclusions drawn from it are also conjecture.

            Here is a hint: You cannot use an argument by elimination unless you have a complete and accurate model of the system giving you _all_ possibilities. Currently known Physics is _known_ to be incomplete and that makes this approach invalid as a proof technique. This is really very basic.

            As to your invalid AdHominem-type argument, I have been foll

            • > Pure conjecture. And not even a very convincing one.

              The last 50 years of neuroscience would beg to differ. The biochemical operation of neurons at this point is understood in fairly exquisite detail, and it's just good old fashioned chemistry.

              I've no idea what you mean by the laws of Physics being incomplete ... no doubt there are new discoveries to be had, but nonetheless the behavior of the universe, and everything in it, is *perfectly* predicted by a combination of quantum theory and the theory o

  • ML= layer complexity DeepMind NN 5X-8X == HumanNeuron what if AI = 10X then if 100X==MATRIX complexity does1000X approach universe understanding exigencies?

  • Could the information space be defined as a probable number of discrete states between 2 to the 16th power and 16 factorial? (We don't actually know how order-dependent these states are.)

    I don't know how else we'd measure this. In aggregate, I guess this could be considered the "Neuron" layer of information processing.

    Am I wrong on this?
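For what it's worth, the two bounds the comment proposes are easy to compute directly (this says nothing about whether 16 is the right number of elements, which is the questioner's own assumption):

```python
# The order-independent and fully order-dependent state counts for
# 16 binary elements, as proposed in the comment above.
import math

lower = 2 ** 16             # unordered on/off combinations
upper = math.factorial(16)  # distinct orderings of 16 elements

print(f"2^16 = {lower:,}")
print(f"16!  = {upper:,}")
```

That is a span from about 6.5e4 to about 2.1e13 states, so the order-dependence question changes the answer by nine orders of magnitude.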

  • They developed what they believe is a reasonable model of a spiking neuron (and we don't know the coding of spike trains, by the way), and compared it to a DNN (continuous, and actually not as well understood as you'd like to think). From that they arrive at a grand conclusion about the complexity gap between real neurons and DNNs.

    Not only is that comparing apples and oranges but the apples aren't even real. It's like comparing a painting of what you think apples are like to real oranges ... and then concluding that
