Can We Make Computer Chips Act More Like Brain Cells? (scientificamerican.com) 58

Long-time Slashdot reader swell shared Scientific American's report on the quest for neuromorphic chips: The human brain is an amazing computing machine. Weighing only three pounds or so, it can process information a thousand times faster than the fastest supercomputer, store a thousand times more information than a powerful laptop, and do it all using no more energy than a 20-watt lightbulb. Researchers are trying to replicate this success using soft, flexible organic materials that can operate like biological neurons and someday might even be able to interconnect with them. Eventually, soft "neuromorphic" computer chips could be implanted directly into the brain, allowing people to control an artificial arm or a computer monitor simply by thinking about it.

Like real neurons — but unlike conventional computer chips — these new devices can send and receive both chemical and electrical signals. "Your brain works with chemicals, with neurotransmitters like dopamine and serotonin. Our materials are able to interact electrochemically with them," says Alberto Salleo, a materials scientist at Stanford University who wrote about the potential for organic neuromorphic devices in the 2021 Annual Review of Materials Research. Salleo and other researchers have created electronic devices using these soft organic materials that can act like transistors (which amplify and switch electrical signals) and memory cells (which store information) and other basic electronic components.

The work grows out of an increasing interest in neuromorphic computer circuits that mimic how human neural connections, or synapses, work. These circuits, whether made of silicon, metal or organic materials, work less like those in digital computers and more like the networks of neurons in the human brain.... An individual neuron receives signals from many other neurons, and all these signals together add up to affect the electrical state of the receiving neuron. In effect, each neuron serves as both a calculating device — integrating the value of all the signals it has received — and a memory device: storing the value of all of those combined signals as an infinitely variable analog value, rather than the zero-or-one of digital computers.

  • it can process information a thousand times faster than the fastest supercomputer

    Let's add up a column of a million numbers or multiply two 128x128 matrices and see who wins.

    • Re: (Score:3, Insightful)

      Depends on the information.

      A 5-year-old can process visual information better than the combined compute power of all the silicon on the planet. She can't multiply 3x2.

      That same 5 year old also understands her native language better than any computer. She actually understands. The computer is just pushing bits according to some rules it was programmed to follow. The child is actually learning for real.

      But if you want your matrix multiplied, sure, get out your laptop. I think what the kid knows at 5 is m

      • A 5-year-old has no relationship whatsoever to these chips. That's what this scam is all about. The most that can be said for it is that it's an exploration of how neurons might work.

      • by Anonymous Coward

        Remember back when Tesla announced their custom AI chip? And then Elon Musk initiated the Neuralink implant project?

        The downside to making a computer chip more like brains is that there is a risk of the chip going berserk, downloading conspiracy theories, and outputting "MAGA, MAGA, MAGA..."

        "Hate to say it, but we need to increase oil & gas output immediately. Extraordinary times demand extraordinary measures." -- Elon Musk

      • by cstacy ( 534252 )

        Depends on the information.

        A 5-year-old can process visual information better than the combined compute power of all the silicon on the planet. She can't multiply 3x2.

        You don't think there's any multiplication or array processing going on in the vision pathway?

        • I couldn't say how a 5 year old actually processes visual information. My lead professor in school was a vision and childhood development specialist. He couldn't say either. My guess is there is no matrix multiplication going on in her brain to do it, no.

          And given essentially unlimited matrix compute power, your laptop still has no idea what it is "seeing" through its camera. No understanding at all.

          If I need a simple task completed, "give your cup to mommy", I'll put my money on the 5 year old every

          • by Rhipf ( 525263 )

            If you let a computer spend 5 years learning who mom is, would it also be able to react to your instructions as easily as the child?

            • I would suggest not. Here's an interesting video I finished watching just before I saw your reply.

              https://www.youtube.com/watch?... [youtube.com]

              If you don't want to see it, summary version: a guy uses AI to play a car racing game. Even after starting over to reset bad learning, and then implementing all sorts of fixes to the learning methodology, the best it can do is about 6.5 minutes vs. his own human run of about 4.75 minutes, and he's just some guy, not a video game master.

              Why wouldn't 5 more years ma

            • Why are you giving the computer more time and a lesser learning task than the child? Children typically understand cup, get, mommy and water by two.
              • by Rhipf ( 525263 )

                Fine, then train the computer for 2 years instead of 5. The point is that the child isn't instantly able to recognize the command. The child has had years to learn what the commands are. Do I really think that the computer would be able to do it after 2 years of learning? No, I don't, but I think that is more a problem with our inability to program the AI properly than an innate difference in the computational abilities of humans and computers.
                Do humans really even compute anything when they are taske

          • Our best understanding is that the brain does in fact work on something very like matrix multiplication.

            Each neuron has thousands of inputs - each one weighted differently, and when the cumulative weighted sum (aka time-averaged dot product) exceeds a threshold, it outputs a spike to the thousands of neurons listening to it.

            The reality is a lot more complicated than that, but the *simplified* model is billions of asynchronous matrix multiplications occurring within the brain at any given moment.

            That said -
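
            In case it helps to see it concretely, here is a minimal Python sketch of that simplified model, a leaky integrate-and-fire neuron. All the constants (weights, threshold, leak, firing rate) are made-up illustrative values, not measurements from any real neuron:

            ```python
            import numpy as np

            rng = np.random.default_rng(0)

            # Simplified model: thousands of weighted inputs accumulate into a
            # membrane potential that leaks over time; crossing a threshold
            # emits a spike and resets the potential.
            n_inputs = 1000
            weights = rng.normal(0.0, 0.1, n_inputs)  # per-input synaptic weights
            threshold = 1.0                           # firing threshold
            leak = 0.9                                # decay factor per time step

            potential = 0.0
            for t in range(100):
                fired = rng.random(n_inputs) < 0.05             # which inputs spiked now
                potential = leak * potential + weights @ fired  # weighted sum (dot product)
                if potential >= threshold:
                    print(f"t={t}: output spike")
                    potential = 0.0                             # reset after firing
            ```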

          • We're getting there slowly. We have self-driving cars that are getting better and better and a lot of robots that can do some basic environment navigation. The problem is that no one can really tell you how that works. Sure, we can tell you how we built a neural network and what kind of training it's been using, and we can pull it apart and look at the state of it, but we couldn't tell you why that particular state is effective or how it should be modified to improve it in some precise manner.

            I think
            • From your link:

              "The OpenWorm project has mapped the connections between the worm’s 302 neurons and simulated them in software. (The project’s ultimate goal is to completely simulate C. elegans as a virtual organism.) Recently, they put that software program in a simple Lego robot."

              That's good, the mapping at that level too.

              > I think we'll gain a better understanding by starting with simpler brains and emulating those. We've already done it with an earthworm...

              The understanding of that brain i

      • Depends on the information.

        Then the summary shouldn't have made an absolute statement.

  • No (Score:5, Informative)

    by iAmWaySmarterThanYou ( 10095012 ) on Saturday September 03, 2022 @11:47AM (#62849231)

    We have no real idea how the brain works. We do understand a little bit here and there around the edges, but the core stuff? Not even close. I took my neuroanatomy and cognitive science classes with some of the world's foremost researchers in their fields. They openly admitted no one knows wtf about the brain at a deep level. I still occasionally keep up and read some of this 'n' that online. Not much progress has been made since then. Still nibbling.

    • The headline is a bit sensationalistic. The paper referenced has a more tame headline: "Materials Strategies for Organic Neuromorphic Devices."

      There are two things going on here: one is research into brain-silicon interfaces, and how to do it. This is a reasonable line of research.

      The second thing is processors optimized for AI sorts of operations. Right now we mostly use GPUs, but as we saw with Google's TPU, there is plenty of room for optimization in silicon. There are a number of startups trying to buil

      • Well, just a quibble.

        I'd differentiate optimized matrix multipliers, like Google's TPU, etc, from efforts to build asynchronous spiking ANNs.

        I guess you could regard spiking ANNs as an attempt at power optimization rather than performance, but this is highly experimental and hardly "bog standard optimization".

        There are also efforts like Blue Brain where the goal seems to be more research than optimization - seems a bit like cargo cult science to me!

        • I don't see how building an ANN with asynchronous neurons gets you anything.

          • It's the same power reduction benefit as building a general purpose CPU that operates asynchronously (via data-flow principle) vs synchronously (clocked).

            In a synchronous (clocked) design, every element on the chip has to have the clock provided as an input, and then only update its output on the clock edge. With today's massive, high-frequency chips, it turns out that routing the clock source to literally every part of the chip, and providing enough power to drive all these sinks simultaneously, is a
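
            A toy way to see the claimed saving, under the simplifying (made-up) assumption that power scales with the number of element updates: a clocked chip drives every element on every cycle, while an event-driven one only touches elements whose inputs actually changed:

            ```python
            import random

            random.seed(0)

            N_ELEMENTS = 10_000  # registers/gates on the chip (illustrative)
            N_CYCLES = 100
            ACTIVITY = 0.05      # fraction of elements whose inputs change per cycle

            sync_updates = 0     # clocked: the clock edge reaches every element, every cycle
            async_updates = 0    # event-driven: only elements with changed inputs do work

            for _ in range(N_CYCLES):
                changed = sum(random.random() < ACTIVITY for _ in range(N_ELEMENTS))
                sync_updates += N_ELEMENTS
                async_updates += changed

            print(f"clocked updates:      {sync_updates:,}")
            print(f"event-driven updates: {async_updates:,}")
            print(f"~{sync_updates / async_updates:.0f}x fewer updates when event-driven")
            ```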

            • Ok, I get your analysis, but I'm still not sure what you are trying to say. Are you saying that clockless architectures are not bog standard optimizations? I grant that they are a rather more exotic optimization.

              • Yes - this is still very much a research area, not an engineering one!

                Despite the theoretical gains, even for a CPU there are not any great success stories to point to. The ARM-compatible AMULET is the only asynchronous CPU that I'm aware of, and its power savings seem pretty marginal.

    • More depth (Score:4, Informative)

      by Okian Warrior ( 537106 ) on Saturday September 03, 2022 @12:21PM (#62849309) Homepage Journal

      We have no real idea how the brain works. We do understand a little bit here and there around the edges, but the core stuff? Not even close. I took my neuroanatomy and cognitive science classes with some of the world's foremost researchers in their fields. They openly admitted no one knows wtf about the brain at a deep level. I still occasionally keep up and read some of this 'n' that online. Not much progress has been made since then. Still nibbling.

      We know a lot about how synapses work and how neurons fire, and that's because we can look at neurons from simpler animals and take neurons out of their environment and study them in a petri dish.

      What we *don't* know, and what would allow us to make an AI, is how the neurons are connected together and how that connection happens.

      To draw an analogy, consider studying a modern CPU chip: we know how PN and NP junctions work and how transistors can amplify signals, and with no more information we're trying to build a modern computer by wiring these items together.

      Lots of AI work is based on a "guess" of how neurons connect, which has led to the ANN (artificial neural network) solutions, which have an input layer, some hidden layers, and an output layer. You've seen diagrams of these, and lots of AI systems are based on this - including OpenAI and TensorFlow. This is trying to make a brain by connecting neurons together.
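
      For the record, that input/hidden/output structure is just a few matrix multiplies with a nonlinearity in between. A minimal NumPy sketch, where layer sizes and the random, untrained weights are arbitrary placeholders:

      ```python
      import numpy as np

      rng = np.random.default_rng(42)

      def relu(x):
          return np.maximum(0.0, x)

      # Tiny feed-forward network: input layer -> one hidden layer -> output layer.
      # Real weights would be learned; these are random placeholders.
      W1 = rng.normal(size=(784, 128))  # input -> hidden
      W2 = rng.normal(size=(128, 10))   # hidden -> output

      def forward(x):
          hidden = relu(x @ W1)  # hidden-layer activations
          return hidden @ W2     # output scores; signals only flow forward

      x = rng.normal(size=(784,))  # one fake input vector
      print(forward(x).shape)      # -> (10,)
      ```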

      The problem is, neurons in the brain have roughly 10x more feedback than feed-forward, the brain doesn't have an input->process->output structure (meaning: the inputs and outputs come in together on "the left"), and the brain has a clear hierarchical structure that's not in the standard ANN model. Also, the brain isn't a single organ; it's a large set of mini organelles, each with their own function. We kinda have an idea of what these structures do (in an "is related to" sense), but not explicit knowledge of how they do it, or how they are connected.

      The fact that the ANN feed-forward model is better at solving some problems than naive solutions has been a detriment to the field. They've been pushing that model so far that they've reached the limits of what it can do, without actually being a solution to strong AI.

      • Worth a citation of How to Create a Mind by ye olde Ray Kurzweil? He writes about brain modules that can be regarded as corresponding to flip-flops in computer engineering, though I can't recall him using that specific word.

        Leading into a sort of (typically weak) joke on the topic:

        "I yam whats I yam" versus "I yam whats I does"? Recent discussion with a real computer scientist who is very much on the does side...

        But I think "All whats I yam" is neuronal networks, especially the persistent or frequently tr

      • Exactly right. The systems we have built to "mimic" the brain are nothing like the brain, what little we know of it.

        Sure, we understand the underlying structures like synapses, but that's just infrastructure and could have been replaced by some other mechanism; in our case, silicon or wires. How all those synapses and neurons are put together, talking to each other, and the variable and unknown effects they have on each other is the hard unknown part, as you said.

        So until we get a grip on the big stuff we simply c

      • We know a lot about how synapses work and how neurons fire, and that's because we can look at neurons from simpler animals and take neurons out of their environment and study them in a petri dish.

        Kind of, but there is a relentless flow of new information that keeps increasing the overall complexity. Some examples:
        Traditionally, dendrites were too small to be electrically monitored for their activity; more recently it was found that dendrites have complex inter- and intra-dendrite spiking prior to signals being forwarded. These signaling processes are non-linear and functional.

        One of the things we now know about the synapse is that it's modulated by astrocytes (the tripartite synapse). The astrocyte de

      • We're actually making good progress on how real neurons are connected - we've got researchers working on "connectome" mapping - the one I saw (years ago) I think used stacks of high-resolution scans of extremely thin brain slices to map every single neuron and synapse in a brain. Or at least part of a brain - I think they were working on accurately mapping and simulating some of the more "modular" and well-understood parts of a mouse brain.

        I think they were also mapping synapse strength as approximated by

      • I disagree with the last part, about being a detriment to the field.

        Like in some specific cases yes and of course there's tons of irritating fashion chasing, though this happened pre AlexNet too.

        ML used to be all about the statistical models and convexity, and a shit ton of maths around that. Convexity is neat: you can guarantee finding the true global minimum. Trouble is, no amount of maths makes the world convex. All those things were hacks to get a convex approximation to the world.

        DL has shown pretty de

      • Lots of AI work is based on a "guess" of how neurons connect, which has led to the ANN (artificial neural network) solutions, which have an input layer, some hidden layers, and an output layer. You've seen diagrams of these, and lots of AI systems are based on this - including OpenAI and TensorFlow.

        You're talking complete gibberish ...

        OpenAI is a company, not a specific neural network architecture. One of the more famous things to come out of OpenAI is GPT-3, which has a complex transformer architecture, not the type of "input/hidden/output layer" vision architecture you're referring to.

        TensorFlow is also not an architecture. It's a neural network framework from which you can build any type of neural network architecture you want. FWIW, it's also increasingly irrelevant, having been mostly repl

    • Same here. But, a lot of progress has been made in the past 50 years. It used to be that nearly nothing was understood about how the brain works. Now, slightly more than nearly nothing is understood.

    • by gweihir ( 88907 )

      Indeed. We have no clue how a brain works at this time.

    • True; Computer chips cannot embed https://twitter.com/DividendGr... [twitter.com]

    • We don't need to know how the brain works to "Make Computer Chips Act More Like Brain Cells". Of course the interesting question is whether these chips will lead to better AI/ML algorithms. It's a promising research direction.

      Current deep learning techniques are different from brains in many ways, but were motivated by brain research and have given some amazing results. Now neuroscientists have closed the loop and shown that these networks have some properties related to real brains. It'll be intere

  • Wait (Score:4, Informative)

    by JustAnotherOldGuy ( 4145623 ) on Saturday September 03, 2022 @11:49AM (#62849241) Journal

    "Can We Make Computer Chips Act More Like Brain Cells?"

    Jesus H Christ I certainly hope not.

  • Like most futurist-thinking ideas, "What could possibly go wrong" is a question that we ignore at our own peril.

    Now, don't get me wrong, I'm NOT saying "we shouldn't." I'm saying that even if we can, we should proceed with all due caution.

    • There are more people asking "what could go wrong" than there are actually doing research in the field. We even have completely ignorant tech CEOs asking what could go wrong with AI.

      That is a question that's being asked.

  • anyone got a hammer?
  • FTFA: and do it all using no more energy than a 20-watt lightbulb

    That's like saying "my car was going as fast as another car that was going 80 mph."
    • Not even that. It says: my car was going as fast as another car that *can* go 80 miles per hour.
    • Yeah but was it going as far as a train going at 80 mph?

      Also, a 20 W bulb? WTF even is that? Yes, I know technically they existed, but the minimum common bulb was 40 W for very dim cases, and now maybe 8 W for really bright cases. 20 W is just in the sweet spot of being rare for both old and new bulbs, so it offers no further intuition either.

      I actually have a 20-something-watt LED bulb; it is huge, astoundingly bright, and rather expensive. Designed for replacing SON lamps, but it works great indoors.

  • by gweihir ( 88907 )

    And we (well, the ones of us with a clue) do not want to do so.

  • Can you imagine a bunch of MAGA CPUs deciding that fixing the next election isn't good enough, and that they need to figure out the nuclear launch codes?

    To be honest, Biden got 1 thing right. The MAGA assholes are the biggest threat to this country I've seen in my lifetime. And I'm a 64 y/o white guy who has voted R 75% of the time for 40+ years, but I have now started voting D a lot more than feels comfortable.

    Again, don't get me wrong. The only thing Biden has done I agree with is getting us out of Afghanistan.
    • Or a bunch of closet Marxist CPUs attempting to do the same thing? Works both ways.

      "The MAGA assholes are the biggest threat to this country I've seen in my lifetime"
      LOL! Really? A bunch of people who share a political point of view opposite yours? THEY'RE the enemy? You really need to open your eyes.

  • You don't make a 340,000 lb bumblebee.

    Copying nature isn't necessarily the best idea.
    • No, but being inspired by nature, e.g. aerofoil shape of an airplane's wing, is a good idea!

      So, trying to create AI by copying a brain at the level of individual neurons doesn't seem a great approach, but being inspired by connectionist architecture rather than trying to create symbolic AI appears very promising ...

      Let's also note that a bumblebee, as odd and clumsy as it may appear, and which we still don't fully understand, has been fine-tuned by nature for some niche requirements ... I'm sure it's a very eff

  • It is a common misunderstanding that analog systems have infinitely variable signals. This is false, because of noise. When they talk about some "noise floor," that represents the smallest unit that the system can actually represent. That it can still wobble around below that is just a lack of accuracy and/or precision.

    Digital signals actually can represent a larger number of accurate values within the same amount of bandwidth. That's why if you buy an "analog" op-amp it is actually a digital one with the output fed thr
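
    To put rough numbers on that noise-floor point, the standard relation for a full-scale sine wave, SNR_dB ≈ 6.02 × bits + 1.76, converts a signal-to-noise ratio into an effective number of distinguishable levels. The SNR values below are arbitrary examples:

    ```python
    # Effective number of bits (ENOB) from signal-to-noise ratio:
    # SNR_dB = 6.02 * bits + 1.76 for a full-scale sine wave.
    def effective_bits(snr_db: float) -> float:
        return (snr_db - 1.76) / 6.02

    for snr_db in (40, 60, 80, 100):
        bits = effective_bits(snr_db)
        print(f"SNR {snr_db:3d} dB -> ~{bits:.1f} effective bits "
              f"(~{2 ** bits:,.0f} distinguishable levels)")
    ```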

  • The computational efficiency in brains comes from spatio-temporal sparsity and recurrent feedback, features typically missing from "deep learning" AI. The difference between analog and digital is merely an implementation decision. Airplanes don't flap their wings, and it's unlikely that analog computing will make a comeback. In advanced semiconductor processes, analog circuitry is much larger and less well optimized than transistors used for digital circuitry, partly because
    • An advantage of analog memory, though, is that you can do 1,000 parallel additions at a time with just a single wire.
      It's not just storage density, but compute density, too.
      Perhaps analog-digital hybrids are going to be the way things go.

  • I'm currently more interested in in-memory analog computing, such as with phase-change memory or resistive RAM. It gives a speedup of over 1,000x versus a GPGPU and consumes 1/1,000 of the power.

    A big limitation of it, however, is that while it's great for implementing a trained neural network (forward propagation), I don't see a way to get that kind of 1,000x improvement in the training / back-propagation phase.

    It might be possible though. Many years ago I came up with a circuit for an analog adaptive
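
    For anyone curious why the forward pass comes almost for free in such hardware, here is an idealized simulation: store the weights as conductances in a crossbar, apply the inputs as voltages, and each output wire sums the currents (Ohm's plus Kirchhoff's laws), which is exactly a matrix-vector multiply. The sizes and the 2% device-noise figure are illustrative assumptions:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Idealized PCM/ReRAM crossbar: weights stored as conductances G,
    # inputs applied as voltages V. Each column wire carries the sum of
    # currents I = G * V flowing into it, so reading the column currents
    # computes a whole matrix-vector product in one analog step.
    n_in, n_out = 1000, 256
    G = rng.uniform(0.0, 1.0, size=(n_in, n_out))  # conductances (the "weights")
    V = rng.uniform(0.0, 1.0, size=n_in)           # input voltages

    I_ideal = V @ G  # what the physics computes, all columns in parallel

    # Real devices drift and vary; model that as small conductance noise.
    G_noisy = G * (1.0 + rng.normal(0.0, 0.02, size=G.shape))
    I_measured = V @ G_noisy

    err = np.abs(I_measured - I_ideal).max() / np.abs(I_ideal).max()
    print(f"worst-case relative error with 2% device noise: {err:.3%}")
    ```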

  • Could we make rope work more like concrete? Yes, but it is stupid. If you want concrete, use concrete. If you want rope, use rope.

  • It works just like the human brain. But the DRM chips in the vodka cartridges need a workaround or I'm going to go broke before it finishes this build.
