
Building a Silicon Brain 236

prostoalex tips us to an article in MIT's Technology Review on a Stanford scientist's plan to replicate the processes inside the human brain with silicon. Quoting: "Kwabena Boahen, a neuroengineer at Stanford University, is planning the most ambitious neuromorphic project to date: creating a silicon model of the cortex. The first-generation design will be composed of a circuit board with 16 chips, each containing a 256-by-256 array of silicon neurons. Groups of neurons can be set to have different electrical properties, mimicking different types of cells in the cortex. Engineers can also program specific connections between the cells to model the architecture in different parts of the cortex."
This discussion has been archived. No new comments can be posted.

  • obligatory (Score:5, Funny)

    by intthis ( 525681 ) on Tuesday February 13, 2007 @01:23AM (#17993524)
    that's great, but will it run linux?
  • [pinky finger]

Bet you could train that to do some cool stuff... assuming it runs in realtime, as advertised. And what kind of back-propagation algorithms are implemented?

    Neat though.
    • by tehdaemon ( 753808 ) on Tuesday February 13, 2007 @01:39AM (#17993672)

As far as I know, brains do not use back-propagation at all. Each neuron changes its own weights based on things like timing of inputs vs output, and various neurotransmitters present.

      If all you want are more neural nets like we have been doing then sure - back-propagation algorithms matter. That does not seem to be the goal here though.
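The local, timing-based rule the parent describes is often modeled as spike-timing-dependent plasticity (STDP): each synapse updates itself from the relative timing of its own pre- and postsynaptic spikes, with no global error signal propagated back through the network. A minimal sketch (function name and constants are illustrative, not taken from any particular model):

```python
import math

def stdp_delta_w(t_pre, t_post, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Local STDP weight change for one pre/post spike pair (times in ms).

    If the presynaptic spike precedes the postsynaptic spike, the synapse
    strengthens; if it follows, the synapse weakens. Note that only local
    timing information is used - no comparison to a desired network output.
    """
    dt = t_post - t_pre
    if dt > 0:    # pre fired before post: potentiation
        return a_plus * math.exp(-dt / tau)
    elif dt < 0:  # post fired before pre: depression
        return -a_minus * math.exp(dt / tau)
    return 0.0

# Pre fires 5 ms before post: the weight increases.
print(stdp_delta_w(t_pre=0.0, t_post=5.0) > 0)
# Pre fires 5 ms after post: the weight decreases.
print(stdp_delta_w(t_pre=5.0, t_post=0.0) < 0)
```

Contrast this with back-propagation, where every weight update depends on an error computed at the output layer and pushed backwards through the whole net.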


      • by QuantumG ( 50515 ) *
        meh, back-propagation is a mathematical simplification of neurotransmitters. You really think these silicon neurons are anything other than mathematical simplifications of organic neurons?
        • Re: (Score:3, Interesting)

          by tehdaemon ( 753808 )

          back-propagation is a mathematical simplification of neurotransmitters.

No. Correct me if I am wrong, but back-propagation works by comparing the output of the whole net to the desired output, and tweaking the weights one layer at a time back up the net. In real brains, neurotransmitters either do not travel up the chain more than one neuron, or they simply signal all neurons physically close, whether they are connected by synapses or not. (like a hormone) Further, since real brains are recurrent networks (

          • Re: (Score:2, Interesting)

            by Triynko ( 1062726 )
Yeah, back propagation has little to do with brain circuitry. After reading extensively on neurons and their chemical gates and wiring, it's pretty obvious that the basic neural networks that have been implemented look nothing at all like the brain.

            The brain learns by weakening existing connections, not by adding new ones. It's logically and physiologically impossible for the brain to know in advance which connections to make in order to store something... it's more of a selection process. This is also w
            • The brain learns by weakening existing connections, not by adding new ones.
              That is incorrect. An increased number of synaptic connections is a classic indicator of increased usage such as is seen in the hippocampus of individuals who have undergone knowledge based learning.
          • The first type of back-propagation is the term as it is used by computer scientists using neural networks [wikipedia.org]. (This is what you're thinking of.) The second type of back-propagation is the term as it is used by neuroscientists [physiology.org]. Unfortunately, they are two completely different things. As a computer scientist who does brain modeling, this greatly irritates me.
    • Re: (Score:2, Funny)

      by skeftomai ( 1057866 )
      Would this thing do parallel processing?
      • Yes, but it can only keep track of about 7 things at any given time. After that it needs to start to delegate work....
  • so... (Score:4, Funny)

    by President_Camacho ( 1063384 ) on Tuesday February 13, 2007 @01:26AM (#17993562) Homepage
    prostoalex tips us to an article in MIT's Technology Review on a Stanford scientist's plan to replicate the processes inside the human brain with silicon.

    So how long until we get AI that's addicted to World of Warcraft?
  • by Eternal Vigilance ( 573501 ) on Tuesday February 13, 2007 @01:32AM (#17993624)
    ...to accurately model most American thought processes.

    Gotta go - American Idol's back on.

    Dave, my mind is going. I can feel it...
  • by syousef ( 465911 ) on Tuesday February 13, 2007 @01:36AM (#17993654) Journal
Lots of silicon in Hollywood....oh you said BRAINS not BREASTS.
  • by Anonymous Coward on Tuesday February 13, 2007 @01:49AM (#17993726)
    This is hardly something new. Intel had a chip a number of years ago, called ETANN that was a pure-analog neural network implementation. Another cool aspect of this chip was that the weight values were stored in EEPROM-like cells (but analog) so the training of the chip would not be erased if it lost power.

    But the whole technology of neural networks almost pre-dates the Von Neumann architecture. Early analog neural networks were constructed in the late 40's.

    Not only are these simulations nothing new but they are in every-day products. One of the most common examples is the misfire detection mechanism in Ford vehicle engine controllers. Misfire detection in spark ignition engines is based on so many variables that neural networks often perform better than hard-coded logic (although not always, just like the wetware counterparts, they can be "temperamental").

    There are several other real-world neural network applications (autofocusing of cameras for example).

    Ahh the hidden magic of embedded systems...
    • by rm999 ( 775449 )
      I think you missed the point. While I agree this is not revolutionary, it is different in a few ways:
      -It's a neuroscience project more than a machine learning project (simulating the brain, not a function to be learned)
      -It's trying to mimic the *hardware* of the brain; it's not software written for a general purpose CPU
      -It's probably more powerful

      I frankly think this project is stupid, because it's the connections in the brain that make intelligence, not the neurons. We don't understand the connections and
      • I agree. There is a lot of fuss over neuroscience right now, as if it is the solution to AI. I think not, just as birds were, if anything, a misdirection in inventing the airplane. It would have been natural, 150 years ago, to assume that the very first artificial intelligence would be a model of the brain. That didn't happen, and still shows no sign of happening. Many seem to assume that computer science is not "fundamental" science, but neuroscience is. Why? To me it is obvious that neuroscience ow
        • Re: (Score:3, Interesting)

          by TheLink ( 130905 )
          You are assuming that the brain is just one implementation of a computer.

          And even if it is true, if it's only true in the way that "The universe is just one implementation of a computer" then I don't think that teaches us that much about the brain/mind (it will still teach us something of course).

          Don't get me wrong though, I do agree that computer science and information theory are fundamental sciences.

          And I also agree with you that the first AI wouldn't be a model of the brain.

          I'm no neuroscientist or comp
      • Not only what you've already said, but there is no analog to neurotransmitters in this experiment. They aren't modelling the brain, they are just making a fancy neural net with little resemblance to the nervous system of a living organism.
  • What the heck do you put in the boot ROM for this kind of thing?
This is the most ambitious??? What about Markram & IBM [forbes.com]? They must be just fooling around with that Blue Gene (actually I do think they are fooling around, but that's beside the point). What about Izhikevich [nsi.edu]? He simulated just a puny 100 billion neurons. That's *nothing* compared to this "most ambitious" million.

  • Not in this lifetime (Score:2, Interesting)

    by wframe9109 ( 899486 ) *
    The study of the brain is one of the youngest sciences in terms of what we know... But from my experience, the people in this field realize that even rough virtualization of the brain won't happen for a long, long time. Why these people are so optimistic is beyond me.

    But maybe I'll eat my words. Doubtful.

Since the experts know so little, maybe we shouldn't put so much weight on their words?
    • Have to agree with you. I laugh every time I read an article like this or the various ones linked to by posters in which someone claims "they've been doing".

None of the attempts I've read so far come even close to displaying an understanding of the brain's functions, much less actually mimicking them. They always leave out a key component, one whose influence on thought we do not understand: hormones.

      The brain is not a simple network, regardless of how many /.ers desire it. It is organic an
    • Biologists thought DNA sequencing of the human genome would take eons, too. Doing it by hand was horrible.

      Then some engineers got interested.

      Now we have gene sequencing machines.

      People are clever when motivated. There's not much of a commercial need for generic AI yet.
  • What'll be new? (Score:5, Informative)

    by wanax ( 46819 ) on Tuesday February 13, 2007 @02:18AM (#17993916)
I have to wonder what the purpose is.. You can model simplified 'point' neurons, and various aggregates that can be drawn from them (eg, McLoughlin's PDEs)... or you can run a simplified temporal dynamic (eg. Grossberg's 3D LAMINART), and easily include 200k+ neurons in the model to capture a broad range of function. For those who would like to run more detailed models of individual neuronal dynamics, you have Markram's project simulating a cortical column with compartmental models, or what Izhikevich is doing with delayed dynamic models.

Although this setup may be able to run ~1mil neurons in total, it would seem that with 16 chips of 256x256 each, the level of interaction would be limited, and the article gives no indication whether these are the more complicated (and realistic) compartmental models of neurons that can sustain realistic individual neuronal dynamics (which, for example, Izhikevich, Markram and McLoughlin have spent a lot of time trying to simplify), or whether this is just running point-style neurons a bit faster than is traditional.. and I have to wonder, if these chips can't do compartmental models, why not just run this on a GPU?

I checked out this guy's webpage, and he seems smart.. but this project is years away from contributing.. I wonder (especially with the Poggio paper yesterday, when the best work being done at MIT in Neuro/AI right now is probably in the Torralba lab) whether slashdot editors may want to find some people to vet the science submissions just a tad.
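For readers unfamiliar with the distinction drawn above: a 'point' neuron collapses the whole cell into a single voltage variable, versus compartmental models that simulate the dendritic tree piece by piece. A minimal leaky integrate-and-fire point-neuron sketch (all parameters are illustrative, not from the article):

```python
def simulate_lif(input_current, dt=1.0, tau=10.0, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire 'point' neuron: one voltage for the whole
    cell, no dendritic compartments. Returns the list of spike times (ms).
    """
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        # Euler step of the membrane equation dv/dt = (v_rest - v + i_in) / tau
        v += dt * (v_rest - v + i_in) / tau
        if v >= v_thresh:          # threshold crossing: emit a spike
            spikes.append(step * dt)
            v = v_reset            # and reset the membrane potential
    return spikes

# Constant suprathreshold drive makes the neuron fire repeatedly;
# zero drive produces no spikes at all.
print(len(simulate_lif([2.0] * 100)) > 0)
print(simulate_lif([0.0] * 100))
```

Even this toy version shows why the modelers named above spend effort on simplification: the interesting question is how much of the compartmental dynamics such a one-variable model can be made to reproduce.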
    • by radtea ( 464814 )
      I have to wonder what the purpose is.. You can model simplified 'point' neurons, and various aggregates that can be drawn from them...

      Beyond that, there is the general argument that the brain is best modelled as a bucket of chemicals with a little bit of neuro-electrical activity on top. Purely neural models miss a vast number of interesting and important phenomena that happen in real brains.

      Consciousness and memory, to say nothing of emotion, are chemical phenomena as much as they are electrical phenomena
      • I doubt you can find a single cognitive attribute that cannot be reasonably explained by the distributed patterns of action potentials in the neocortex and its supplemental structures.
      • Consciousness is just an effect of architecture - the ability of parts of the brain to monitor what some (but not all) other parts are doing... an inward looking sense if you want to think of it in that way. Evolutionally useful since it provides the ability to override and control earlier simpler behaviours / portions of the brain, giving us greater flexibility. Certain types of brain injury support the architectural nature of consciousness - it's possible for example to lose consciousness of vision (i.e.
  • I was under the impression neurons used neurotransmitters to communicate info between two cells but this article implies electrical signals do that. It would be nice to read some text on this subject that tried to explain the abstract difference between what transmits what information.
    • Neurotransmitters tend to be ions which do create electrical signals.
  • I, for one, welcome our new silicon-brain overlords.
  • About the only thing impressive about 1 million neurons is that it is slightly more than the square root of the number of neurons in the human brain.

Wake me up after the exponential growth has been going on a little while longer and they have made up the 6 orders of magnitude they need to make it worthy of the term "brain".

You might have a while longer to sleep, then. Just having the same number of neurons as the brain doesn't mean that you'll have a brain. It is like saying that as long as we have the four nucleotides from DNA (A,C,T,G) and all the amino acids, we can just throw them together and we'll have biological organisms.

The brain does not start as a blank slate; it is already pre-programmed to do many things, and it is that wiring of neurons and their initial states that needs to be decoded.

      In addition to t

  • by TheCouchPotatoFamine ( 628797 ) on Tuesday February 13, 2007 @02:43AM (#17994066)
For those interested in this field, may I suggest a book, Naturally Intelligent Systems? It's slightly older, but it explains a wide gamut of neural networks without a single equation, and manages to be funny and engaging at the same time. It is one of the three books that changed my life (by its content and ideas alone - I'm not otherwise into AI). Highly recommended: Naturally Intelligent Systems on amazon [amazon.com]
    • by xtal ( 49134 )
Huge second on that opinion. I picked it up in ~97 and I've read and reread it many times; it's a great reference for anyone interested in experimenting or just general knowledge. Many of the academic texts are not very good at -all-.

Why would you experiment with neural logic in hardware when software is infinitely scalable and programmable and arguably more valuable in the research of neural networks? Of course software is a degree slower in response time, but speed is not of the essence for researching the "how" of neural nets.

I would think that in the hardware world, generally you would want a working software model and then duplicate it with the more expensive hardware for performance. The same principle applies when ASIC engineers
    • Re: (Score:2, Insightful)

The simple reason is that software cannot compute every iteration in parallel. Imagine light beams for instance - if you were to "sum" the intensity of several beams at a single photodiode, it would occur simultaneously as a single operation. Software requires, regardless of the number of possible processors within reasonable (read: current technological) limits, an iterative approach such that during every stage of calculation, each neurode (neuron+node=neurode) has to be calculated in order - drastically,
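The sequential bottleneck the parent describes shows up in even the simplest software neuron: every synapse is visited one at a time, whereas an analog summing node adds all of its input currents in a single physical step. A toy sketch (names illustrative):

```python
def neuron_output_serial(weights, inputs):
    """Weighted sum of a neuron's inputs, computed the way software must:
    one synapse per loop iteration, O(n) sequential steps. An analog
    summing junction performs the same sum simultaneously in hardware.
    """
    total = 0.0
    for w, x in zip(weights, inputs):
        total += w * x
    return total

# Three synapses, visited one after another.
w = [0.5, -1.0, 2.0]
x = [1.0, 1.0, 0.5]
print(neuron_output_serial(w, x))  # 0.5
```

Vectorized libraries and GPUs narrow this gap considerably, but the point stands: they still time-multiplex a finite number of arithmetic units rather than summing every synapse at once.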
  • Why? (Score:3, Funny)

    by Quiet_Desperation ( 858215 ) on Tuesday February 13, 2007 @04:05AM (#17994466)
    Look at the rubbish the human brain generates. Ideology. Irrationality. Depression. Religion. Politics. Reality TV.

    You really want processors that need weekly visits from an Eliza program and iZoloft patches?

    "Sorry, Bob. I can't run those projections now. The supercomputing cluster is in a funk over the American Idol results."

    Y'all think AI is going to be so great and a bag of chips, too.
    • The human brain generates so much rubbish because it does not use mathematical logic, but pattern matching.

      In many cases, mathematical logic can not be used to prove the absolute truth of a proposition; therefore the brain uses pattern matching to 'prove' the 'truth' of a proposition to the degree that is useful for the survival of the entity that carries it.

      Take, for example, the proposition that 'prime numbers are infinite'. We all think they are infinite, but there is no mathematical proof for it yet. Wh
      • Take, for example, the proposition that 'prime numbers are infinite'. We all think they are infinite, but there is no mathematical proof for it yet.

There has been a proof for it for a long time. Gettin' wiki [wikipedia.org] wit it.

        Quoting from the link:

        There are infinitely many prime numbers

        The oldest known proof for the statement that there are infinitely many prime numbers is given by the Greek mathematician Euclid in his Elements (Book IX, Proposition 20). Euclid states the result as "there are more than any given
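Euclid's argument is constructive enough to demonstrate in a few lines: take any finite list of primes, multiply them together, add one, and factor the result; every prime factor of that number is necessarily outside the list, since dividing by any listed prime leaves remainder 1. A sketch (function name illustrative):

```python
def new_prime_not_in(primes):
    """Euclid's argument: for a finite list of primes, the number
    (product of the list) + 1 has a prime factor not on the list,
    because every listed prime divides the product, hence leaves
    remainder 1 when dividing the product plus one.
    """
    n = 1
    for p in primes:
        n *= p
    n += 1
    # Smallest prime factor of n by trial division.
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n itself is prime

print(new_prime_not_in([2, 3, 5]))  # 31 (2*3*5 + 1 = 31 is itself prime)
print(new_prime_not_in([2, 7]))     # 3  (2*7 + 1 = 15 = 3 * 5)
```

Since the construction works for any finite list, no finite list can contain all the primes, which is exactly the statement quoted above.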

  • by AndOne ( 815855 ) on Tuesday February 13, 2007 @04:34AM (#17994608)
Having been a fan of neuromorphic engineering for several years now (note: I'm not an active researcher, but I pretend some days :) ), one of the major advantages of neuromorphic hardware isn't necessarily its ability to model biological systems but the fact that the devices are extremely low power. When modeling neurons in silicon (at least back in the day of Carver Mead's work on cochlea and retina stuff, and I doubt it's changed too much, but I could be wrong), the transistors would run in sub-threshold mode (basically leakage currents, so OFF) since the power curves modeled the expected neuro response curves. One of Boahen's stated goals (at least on his website when he was at Penn) was to reduce power consumption and improve processing power for problem solving via these techniques. His lab has been in Scientific American a couple of times in the last few years for work in accurately modeling neuronal spiking in hardware too. I have the articles, but not at hand, so I can't cite them at the moment; they were fun reads.

    So in summary, it's more than just modeling the brain. It's about letting biology inspire us to make better and more efficient computing systems.
  • Just to nip that in the bud.
  • I apologise if someone else has raised this point..

    Why are some people intent on making Homo Sapiens obsolete?

    1 Build humanoid robot
    2 build silicon-based superbrain
    3 ??????
    4 Extinction!

  • Paging (Score:3, Interesting)

    by mdsolar ( 1045926 ) on Tuesday February 13, 2007 @08:55AM (#17995874) Homepage Journal
The article says that the chip will work at 300 teraflops. The human brain might be rated at 100,000 teraflops http://www.setiai.com/archives/000035.html [setiai.com] so there is still quite a lot of speed to make up. However, it seems to me that through state saving (paging) one could simulate the connections between many more than a million neurons using this device. If you virtualize as a cube 3000 deep and track connections between these layers in software, then processing over the virtual layers can proceed sequentially. So, it seems as though it won't take all that much more hardware development to get to simulations on the human scale owing to the higher frequency of individual operations.
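A back-of-envelope check of the figures in the comment above (all numbers are taken from the comment itself, not independently verified):

```python
# Speed gap between the chip's claimed rate and the cited brain estimate.
chip_tflops = 300.0        # claimed for the chip
brain_tflops = 100_000.0   # rough estimate cited for the brain
speed_gap = brain_tflops / chip_tflops
print(round(speed_gap))    # 333: the chip is roughly 333x short

# Time-multiplexing ("paging") virtual layers as the comment proposes:
chip_neurons = 16 * 256 * 256   # physical silicon neurons on the board
virtual_depth = 3000            # "a cube 3000 deep" from the comment
print(chip_neurons)                  # 1048576
print(chip_neurons * virtual_depth)  # 3145728000, ~3.1 billion virtual neurons
```

So the proposed virtualization trades a factor of 3000 in simulated speed for a factor of 3000 in neuron count, which is the essence of the paging argument.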
    Solar, a bright idea http://mdsolar.blogspot.com/2007/01/slashdot-users -selling-solar.html [blogspot.com]
  • Here Here! (Score:3, Funny)

    by LifesABeach ( 234436 ) on Tuesday February 13, 2007 @10:05AM (#17996368) Homepage
Having hardware that duplicates human thought is an excellent cornerstone to help me with my many woes. With hard drives approaching the petabyte range, we will be able to back up our memories and access vitally important past events. Obviously, there will be many more steps to take before I will be able to access things like my wife's birthday, our first date, and so on. Personally, I will be very grateful for fewer arguments about past events that I have, for some reason or another, considered too trivial to remember.

    "Come back Dear! I'm good with True-False!" - Larry, the Cable Guy
  • The humble computer doesn't need to reach sentience to overpower humanity.

    Every day, hundreds of millions of people have their energy sucked away by computers, in work places and living rooms equipped with game boxes. By the use of bank cards which give the government the ability to 'turn off' our money 'privileges' on an individual basis should they choose. Everybody seems now to have a cell phone. Aside from the mental health concerns associated with having your brain cells randomly stimulated by modul
  • "The first-generation design will be composed of a circuit board with 16 chips, each containing a 256-by-256 array of silicon neurons."

    This already exceeds the connections in the cortex of your average political talk show host.
Precise imitation is not the best route to success.
....besides human stupidity, but much better yet, post-traumatic stress disorder effects.

    Imagine having an artificial cortex kick in when the real cortex is shutdown by PTSD stimuli.
  • I don't get it. (Score:3, Informative)

    by God of Lemmings ( 455435 ) on Tuesday February 13, 2007 @10:32PM (#18007020)
    This article tells us absolutely nothing about the design other than that the
    total number of neurons emulated is very small. And no, this is not the "most
    ambitious project yet" by a landslide. It is dwarfed by IBM's own Blue brain project, as well
    as CCortex.

    http://en.wikipedia.org/wiki/Blue_Brain [wikipedia.org]

    The only novelty I see here is that they fabricated artificial neurons on a chip, which greatly
    speeds up the whole thing.
