
Building a Silicon Brain

prostoalex tips us to an article in MIT's Technology Review on a Stanford scientist's plan to replicate the processes inside the human brain with silicon. Quoting: "Kwabena Boahen, a neuroengineer at Stanford University, is planning the most ambitious neuromorphic project to date: creating a silicon model of the cortex. The first-generation design will be composed of a circuit board with 16 chips, each containing a 256-by-256 array of silicon neurons. Groups of neurons can be set to have different electrical properties, mimicking different types of cells in the cortex. Engineers can also program specific connections between the cells to model the architecture in different parts of the cortex."
  • by tehdaemon ( 753808 ) on Tuesday February 13, 2007 @01:39AM (#17993672)

    As far as I know, brains do not use back-propagation at all. Each neuron adjusts its own weights based on things like the relative timing of its inputs and outputs, and the neurotransmitters present (a rough sketch of such a local rule follows this comment).

    If all you want are more neural nets of the kind we have been building, then sure, back-propagation algorithms matter. That does not seem to be the goal here, though.

    T
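    A minimal sketch of the kind of local, timing-based rule the parent describes (a simplified spike-timing-dependent plasticity update; the time constants and learning rates below are illustrative assumptions, not from the thread):

        import math

        # Simplified spike-timing-dependent plasticity (STDP): a synapse is
        # strengthened when the presynaptic spike precedes the postsynaptic
        # spike, and weakened otherwise. The update depends only on locally
        # available spike times -- no global error signal, unlike back-prop.
        TAU = 20.0       # ms, width of the STDP window (illustrative)
        A_PLUS = 0.01    # potentiation step (illustrative)
        A_MINUS = 0.012  # depression step (illustrative)

        def stdp_dw(t_pre, t_post):
            """Weight change for one pre/post spike pair (times in ms)."""
            dt = t_post - t_pre
            if dt > 0:    # pre fired before post: strengthen the synapse
                return A_PLUS * math.exp(-dt / TAU)
            else:         # post fired first (or simultaneously): weaken it
                return -A_MINUS * math.exp(dt / TAU)

        # Pre spike at 10 ms, post at 15 ms: causal pairing, positive change.
        print(stdp_dw(10.0, 15.0))  # ~ +0.0078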

  • What'll be new? (Score:5, Informative)

    by wanax ( 46819 ) on Tuesday February 13, 2007 @02:18AM (#17993916)
    I have to wonder what the purpose is. You can model simplified 'point' neurons and various aggregates drawn from them (e.g., McLoughlin's PDEs), or you can run a simplified temporal dynamic (e.g., Grossberg's 3D LAMINART) and easily include 200k+ neurons in the model to capture a broad range of function. For those who would like to run more detailed models of individual neuronal dynamics, there is Markram's project simulating a cortical column with compartmental models, or what Izhikevich is doing with delayed dynamic models.

    Although this setup may be able to run ~1 million neurons in total, with 16 chips of 256x256 each the level of interaction would seem limited. The article gives no indication whether these are the more complicated (and realistic) compartmental models of neurons, which can sustain realistic individual neuronal dynamics (and which Izhikevich, Markram, and McLoughlin, for example, have spent a lot of time trying to simplify), or whether this is just point-style neurons run a bit faster than is traditional. And if these chips can't do compartmental models, I have to wonder: why not just run this on a GPU?

    I checked out this guy's webpage, and he seems smart, but this project is years away from contributing. Especially after yesterday's Poggio paper, when some of the best Neuro/AI work at MIT right now is probably coming out of the Torralba lab, I wonder whether the Slashdot editors might want to find some people to vet the science submissions just a tad.
  • by tehdaemon ( 753808 ) on Tuesday February 13, 2007 @03:49AM (#17994394)

    Two out of the three replies to my comment thought that I meant 40 cycles per neuron was enough. I guess I was not clear enough.

    40 cycles is nowhere near enough. 40 inputs is small for a real neuron, and 40 cycles would barely let you sum the inputs. To heck with adjusting weights; you can't even run the thing in real time. The AC I was replying to said that this could be simulated in software at breakneck speed. He is wrong (a back-of-the-envelope cycle budget follows this comment).

    T
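    A back-of-the-envelope check of the parent's real-time claim, under stated assumptions (the neuron count comes from the article; the synapse count, update rate, and clock speed are hypothetical):

        # Rough cycle budget for simulating the board's neurons in software.
        # Only the neuron count is from the article; all other figures are
        # illustrative assumptions.
        neurons = 16 * 256 * 256        # ~1.05 million neurons on the board
        inputs_per_neuron = 1000        # assumed synapse count per neuron
        updates_per_sec = 1000          # 1 kHz time steps for real time
        cycles_per_input = 1            # optimistic: fetch + accumulate in one

        cycles_needed = (neurons * inputs_per_neuron
                         * updates_per_sec * cycles_per_input)
        cpu_hz = 2e9                    # a single 2 GHz core

        print(f"cycles needed per second: {cycles_needed:.2e}")          # ~1.05e+12
        print(f"slowdown vs. real time:   {cycles_needed / cpu_hz:.0f}x")  # ~524x

    Even granting one cycle per input and ignoring weight updates entirely, a single core falls short of real time by roughly 500x under these assumptions.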

  • by naoursla ( 99850 ) on Tuesday February 13, 2007 @04:48AM (#17994686) Homepage Journal
    Now add a bunch of connections between all of those neurons. As you approach a fully connected network, the time complexity of computing one time step approaches O(N^2), where N is the number of neurons.

    2^20 * 2^20 == 2^40 updates. Ignore memory-cache constraints for a moment and say each update takes one clock cycle. Since we are dual-core, we get two updates per cycle. Each clock cycle takes 500 ps, so 2^40 * 500 ps / 2 means each complete brain update takes about 274 s on your computer (confirmed in the sketch below).
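    The parent's arithmetic checks out; a few lines to confirm it (the 500 ps cycle and dual-core figures are the parent's own assumptions):

        # Verify the parent's estimate: 2^40 pairwise updates per time step,
        # one update per cycle, two cores, 500 ps per cycle (2 GHz dual core).
        updates = 2**40        # fully connected: (2^20 neurons)^2 pairs
        cycle_s = 500e-12      # 500 ps per clock cycle
        cores = 2              # two updates per cycle on a dual core

        seconds = updates * cycle_s / cores
        print(f"{seconds:.1f} s per full network update")  # 274.9 s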
  • by kestasjk ( 933987 ) * on Tuesday February 13, 2007 @05:14AM (#17994816) Homepage
    Read "How Brains Think" by William H. Calvin; he's a neurologist and the book goes into lots of detail about how brains think (dur), how they evolved, and the possibility of AI.
    He's an expert in the field and you can feel his bitter dislike of "quantum consciousness" proponents through his writing. He writes that it's just saying "we don't know how X works, and we don't know how Y works, but if we say that Y depends upon X then we have one problem instead of two".

    Consciousness is built on the interactions of neurons. We understand how neurons work and interact at a low level (from studying the ~50-neuron brains of snails and the like), and we understand at a high level which regions of the brain do what, but we don't understand the "middle ground".

    It's as if we understand the transistor, and logic gates, and we can recognize which part of a chip is the ALU and which is the cache, but we can't recognize an adder circuit or microinstruction translator for what it is.

    Quantum physics is certainly involved in the action of transistors but it doesn't explain how they combine to process data.

    (On a similar note, in a documentary I once saw a crackpot explain away "spontaneous human combustion" with an unknown quantum particle.)
  • I don't get it. (Score:3, Informative)

    by God of Lemmings ( 455435 ) on Tuesday February 13, 2007 @10:32PM (#18007020)
    This article tells us absolutely nothing about the design other than that the total number of neurons emulated is very small. And no, this is not the "most ambitious project yet" by a landslide: it is dwarfed by IBM's own Blue Brain project, as well as by CCortex.

    http://en.wikipedia.org/wiki/Blue_Brain [wikipedia.org]

    The only novelty I see here is that they fabricated artificial neurons on a chip, which greatly speeds up the whole thing.
