
Building a Silicon Brain 236

prostoalex tips us to an article in MIT's Technology Review on a Stanford scientist's plan to replicate the processes inside the human brain with silicon. Quoting: "Kwabena Boahen, a neuroengineer at Stanford University, is planning the most ambitious neuromorphic project to date: creating a silicon model of the cortex. The first-generation design will be composed of a circuit board with 16 chips, each containing a 256-by-256 array of silicon neurons. Groups of neurons can be set to have different electrical properties, mimicking different types of cells in the cortex. Engineers can also program specific connections between the cells to model the architecture in different parts of the cortex."
This discussion has been archived. No new comments can be posted.

  • One thing you don't hear much about is what progress, if any, is being made in interfacing electronic systems with biological ones, and in growing biological circuits. Perhaps our understanding of biological computation and storage simply isn't complete enough to make such a system practical, even if we could somehow interface a clump of neurons to the outside world electronically. But it certainly seems that the data storage capacity of biological systems is far greater (per unit mass or volume) than anything devised artificially. Then again, it may be impossible to compare, since it's not clear how 'compressed' information is when the mammalian brain encodes it as memories.
  • by Anonymous Coward on Tuesday February 13, 2007 @01:49AM (#17993726)
    This is hardly something new. Intel had a chip a number of years ago, called ETANN, that was a pure-analog neural-network implementation. Another cool aspect of this chip was that the weight values were stored in EEPROM-like (but analog) cells, so the training would not be erased if the chip lost power.

    But neural-network technology almost pre-dates the von Neumann architecture. Early analog neural networks were constructed in the late 1940s.

    Not only are these simulations nothing new, but they are in everyday products. One of the most common examples is the misfire detection mechanism in Ford vehicle engine controllers. Misfire detection in spark-ignition engines depends on so many variables that neural networks often perform better than hard-coded logic (although not always; just like their wetware counterparts, they can be "temperamental").

    There are several other real-world neural network applications (autofocusing of cameras for example).

    Ahh the hidden magic of embedded systems...
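The non-volatile-weight point above can be illustrated with a toy analogue (hypothetical file format, nothing to do with ETANN's actual interface): train a single perceptron, then persist its weights so the training "survives power loss" the way the EEPROM-like cells did.

```python
# Toy analogue of non-volatile weights (hypothetical, not the ETANN API):
# train a perceptron on AND, then save/reload its weights.
import json, os, tempfile

def predict(w, b, x):
    # Threshold unit: fire iff the weighted input sum exceeds zero.
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Perceptron learning rule on the AND function.
w, b = [0.0, 0.0], 0.0
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
for _ in range(20):
    for x, t in data:
        err = t - predict(w, b, x)
        w = [wi + 0.1 * err * xi for wi, xi in zip(w, x)]
        b += 0.1 * err

# Persist the trained weights so they survive "power loss".
path = os.path.join(tempfile.gettempdir(), "weights.json")
with open(path, "w") as f:
    json.dump({"w": w, "b": b}, f)
with open(path) as f:
    saved = json.load(f)

preds = [predict(saved["w"], saved["b"], x) for x, _ in data]
print(preds)  # → [0, 0, 0, 1]
```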
  • by tehdaemon ( 753808 ) on Tuesday February 13, 2007 @02:01AM (#17993806)

    back-propagation is a mathematical simplification of neurotransmitters.

    No. Correct me if I'm wrong, but back-propagation works by comparing the output of the whole net to the desired output and tweaking the weights one layer at a time, back up the net. In real brains, neurotransmitters either do not travel up the chain more than one neuron, or they simply signal all neurons physically close by, whether or not they are connected by synapses (like a hormone). Further, since real brains are recurrent networks (they have lots of internal feedback loops), 'back' doesn't mean much.

    T
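The mechanism described above can be sketched in a few lines (a toy illustration, not a claim about real brains): a one-hidden-layer net learning OR, where the error is measured at the output and then pushed back to adjust each layer's weights in turn.

```python
# Minimal back-propagation sketch: a 2-3-1 sigmoid net learning OR.
# The error is computed at the output, then propagated back one layer.
import math, random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(3)]  # hidden weights
b1 = [0.0] * 3
W2 = [random.uniform(-1, 1) for _ in range(3)]                      # output weights
b2 = 0.0
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
lr = 0.5

def epoch():
    global b2
    err = 0.0
    for x, t in data:
        h = [sigmoid(sum(W1[j][i] * x[i] for i in range(2)) + b1[j]) for j in range(3)]
        y = sigmoid(sum(W2[j] * h[j] for j in range(3)) + b2)
        err += (y - t) ** 2
        dy = (y - t) * y * (1 - y)                               # output-layer error
        dh = [dy * W2[j] * h[j] * (1 - h[j]) for j in range(3)]  # pushed back one layer
        for j in range(3):
            W2[j] -= lr * dy * h[j]
            for i in range(2):
                W1[j][i] -= lr * dh[j] * x[i]
            b1[j] -= lr * dh[j]
        b2 -= lr * dy
    return err

before = epoch()
for _ in range(5000):
    after = epoch()
print(before, after)  # the squared error shrinks as weights are tweaked
```

Note how the update never reaches more than one layer back per step without an explicit global error signal, which is exactly the biologically implausible part the comment points at.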

  • Not in this lifetime (Score:2, Interesting)

    by wframe9109 ( 899486 ) * <bowker.x@gmail.com> on Tuesday February 13, 2007 @02:01AM (#17993814)
    The study of the brain is one of the youngest sciences in terms of what we know... But from my experience, the people in this field realize that even rough virtualization of the brain won't happen for a long, long time. Why these people are so optimistic is beyond me.

    But maybe I'll eat my words. Doubtful.

  • by QuantumG ( 50515 ) * <qg@biodome.org> on Tuesday February 13, 2007 @02:01AM (#17993820) Homepage Journal
    I thought Interface [wikipedia.org] was a remotely interesting read... at least the technological aspects... the commentary on media-dominated elections was just depressing. They extract some neural tissue from a subject, grow a bunch of neurons, interface them to a chip with a wireless transmitter, then reinsert them into the brain. Then, with some training, the chip can replace functions of the brain destroyed by stroke or cancer or whatever. The data dump of the communication between the neurons and the chip is the really interesting part... you could conceivably learn a lot about how subjective experience works if you had a human subject and a lot of data. Much more than, say, an EEG or an MRI, since you could record data during normal interaction with the world, 24 hours a day.
  • by Triynko ( 1062726 ) on Tuesday February 13, 2007 @02:24AM (#17993962)
    Yeah, back-propagation has little to do with brain circuitry. After reading extensively on neurons and their chemical gates and wiring, it's pretty obvious that the basic neural networks implemented so far look nothing at all like the brain.

    The brain learns by weakening existing connections, not by adding new ones. It's logically and physiologically impossible for the brain to know in advance which connections to make in order to store something... it's more of a selection process. This is also why "instruction" methods of teaching fail: knowledge has to be situated among what each individual has already learned. If it doesn't make sense to you, or you can't draw an analogy to something already etched into your brain over time, it won't mean much to you. Also, brain cells do regenerate, despite the dogma that once they're gone they never come back. They do, but you have to kinda re-learn stuff... you can become a totally different person if you kill off and replace enough of them -- not necessarily a good idea, you might get confused. Then again, what do I know, haha.
  • by TheCouchPotatoFamine ( 628797 ) on Tuesday February 13, 2007 @02:43AM (#17994066)
    For those interested in this field, may I suggest a book, Naturally Intelligent Systems? It's slightly older, but it explains a wide gamut of neural networks without a single equation, and manages to be funny and engaging at the same time. It is one of the three books that changed my life (by its content and ideas alone -- I'm not otherwise into AI). Highly recommended: Naturally Intelligent Systems on amazon [amazon.com]
  • by master_p ( 608214 ) on Tuesday February 13, 2007 @06:20AM (#17995106)
    The human brain generates so much rubbish because it does not use mathematical logic, but pattern matching.

    In many cases, mathematical logic can not be used to prove the absolute truth of a proposition; therefore the brain uses pattern matching to 'prove' the 'truth' of a proposition to the degree that is useful for the survival of the entity that carries it.

    Take, for example, the proposition that 'there are infinitely many twin primes' (primes that differ by two). We all suspect there are, but there is no mathematical proof of it yet. When we are asked whether the twin primes go on forever, our first answer is 'yes'... that's pattern matching at work: since we keep finding them, there must be infinitely many, just as in other cases (the decimal digits of pi, for example).
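A concrete illustration of this evidence-by-enumeration: a quick search keeps turning up twin primes (pairs p, p+2 both prime), which suggests, but of course does not prove, that they never run out.

```python
# Pattern matching vs. proof: enumerate twin prime pairs below 1000.
# Plenty of evidence accumulates, but counting proves nothing.
def is_prime(n):
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

twins = [(p, p + 2) for p in range(2, 1000)
         if is_prime(p) and is_prime(p + 2)]
print(len(twins), twins[:4])  # first pairs: (3, 5), (5, 7), (11, 13), (17, 19)
```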

  • by TheLink ( 130905 ) on Tuesday February 13, 2007 @08:27AM (#17995726) Journal
    You are assuming that the brain is just one implementation of a computer.

    And even if it is true, if it's only true in the way that "The universe is just one implementation of a computer" then I don't think that teaches us that much about the brain/mind (it will still teach us something of course).

    Don't get me wrong though, I do agree that computer science and information theory are fundamental sciences.

    And I also agree with you that the first AI wouldn't be a model of the brain.

    I'm no neuroscientist or computer scientist but I suspect that the first preliminary step towards an AI would be something that automatically makes models of the external world.
    Then the next step would be for it to predict. e.g. if it sees a ball moving towards a wall, it should model the situation _faster_than_real_time_ and predict that it will hit the wall. This sort of thing is very useful for simple creatures.

    You then work at improving the automatic modelling and prediction stuff. Throw in quantum computers if you want to run "infinite" models in parallel (sounds like that will come in handy, eh?).

    Then the next step would be for it to model and predict _itself_ in addition to everything else. Of course it could have already jumped to that stage by then - it's a natural step if it had attempted to model other creatures at any point. Or perhaps the self modelling comes first[1].

    Lastly, BEFORE we do all of that, we should ask WHY do we want to do that and whether it's such a good idea in the first place.

    We already have plenty of creatures imprisoned at the local pet shop and on farms. If you want to do things faster/better, you could just augment humans or animals.

    We've already got billions of imprisoned conscious chickens in the world. They're not smart enough? Conscious != smart. Before we go create something, I think we should ask what are we trying to solve here. If you think it'll be cruel and terrible to implant chicken brains into robots to do stuff, then maybe we shouldn't be creating conscious creatures, we should be aiming for something else. e.g. augmenting humans so that we can control multiple robots easily, recognize stuff faster etc.

    I suggest that there are already plenty of brains/minds in the world. We're already having difficulty taking care of the new/existing ones, managing them, putting them to good use etc.

    [1] Then again, perhaps the "God modelling" comes first (e.g. YHWH).
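The faster-than-real-time prediction step described in this comment can be sketched minimally (all numbers hypothetical): the agent copies the observed state of the ball and runs an internal model ahead of the clock to anticipate the collision before it happens.

```python
# Minimal sketch of faster-than-real-time prediction (hypothetical numbers):
# copy the observed state, then step an internal model ahead of real time.
WALL_POSITION = 10.0  # metres: where the wall sits in the internal model

def predict_hit_time(pos, vel, dt=0.01):
    """Step a constant-velocity model forward until the ball reaches the wall."""
    t = 0.0
    while pos < WALL_POSITION:
        pos += vel * dt  # internal model steps cost far less than real seconds
        t += dt
    return t

t_hit = predict_hit_time(pos=0.0, vel=2.0)  # ball 10 m away, moving at 2 m/s
print(t_hit)  # roughly 5 seconds of simulated time, computed almost instantly
```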
  • Paging (Score:3, Interesting)

    by mdsolar ( 1045926 ) on Tuesday February 13, 2007 @08:55AM (#17995874) Homepage Journal
    The article says that the chip will work at 300 teraflops. The human brain might be rated at 100,000 teraflops http://www.setiai.com/archives/000035.html [setiai.com] so there is still quite a lot of speed to make up. However, it seems to me that through state saving (paging) one could simulate the connections between many more than a million neurons using this device. If you virtualize it as a cube 3,000 layers deep and track the connections between these layers in software, then processing over the virtual layers can proceed sequentially. So it seems as though it won't take all that much more hardware development to get to simulations on the human scale, owing to the higher frequency of individual operations.
    --
    Solar, a bright idea http://mdsolar.blogspot.com/2007/01/slashdot-users -selling-solar.html [blogspot.com]
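The paging idea in this comment can be sketched with a toy stand-in (not the actual chip's interface, and with tiny hypothetical sizes): hardware that holds one layer of neurons at a time can simulate a much deeper virtual stack by saving each layer's output state and feeding it to the next layer sequentially.

```python
# Toy sketch of paging a deep virtual network through shallow hardware:
# one "hardware" layer is reused for every virtual layer, with the output
# state saved between passes.
import random

random.seed(1)

NEURONS_PER_LAYER = 4   # stand-in for the chip's 256x256 array
VIRTUAL_DEPTH = 3000    # the "cube 3,000 layers deep" from the comment

def layer_step(inputs, weights):
    # One hardware pass: each neuron thresholds its weighted input sum.
    return [1 if sum(w * x for w, x in zip(ws, inputs)) > 0 else 0
            for ws in weights]

state = [1, 0, 1, 0]  # initial input pattern
for _ in range(VIRTUAL_DEPTH):
    # "Page in" the next virtual layer's weights, reuse the same hardware.
    weights = [[random.uniform(-1, 1) for _ in range(NEURONS_PER_LAYER)]
               for _ in range(NEURONS_PER_LAYER)]
    state = layer_step(state, weights)  # saved state feeds the next pass
print(state)
```

The trade-off, as the comment notes, is time for hardware: the 3,000 virtual layers run sequentially rather than in parallel.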
