Nerve Cells Successfully Grown on Silicon
crabpeople writes "Researchers at the University of Calgary have found that nerve cells grown on a microchip can learn and memorize information which can be communicated to the brain. 'We discovered that when we used the chip to stimulate the neurons, their synaptic strength was enhanced,' said Naweed Syed, a neurobiologist at the University of Calgary's faculty of medicine."
Hasn't this been done before? (Score:5, Informative)
Re:Hasn't this been done before? (Score:5, Informative)
Re:human computers or cybronic humans ? (Score:2, Informative)
Re:I'm no Bill Joy (Score:1, Informative)
-- vranash
Re:Other uses? (Score:2, Informative)
[aip.org]
link to article published in Physical Review Letters
Re:Hasn't this been done before? (Score:5, Informative)
This particular chip has no electrodes. The grillwork design allows the neurons to grow, and contains them indefinitely. We are currently building full chips with this design, and with electrodes.
Keep an eye out for this page. Once we get fully functional chips, it shouldn't be long before I can show some real experiments and data.
I think the big news is that the electrodes were on the silicon chip itself, and that the neurons were actually able to "learn and memorize information which can be communicated to the brain" (as per the original article).
Also, the page looks like it hasn't been updated since 1995. I wonder what happened to this project. From the page, Maher and Thorne seemed so close to what has just been achieved in Canada.
Re:The Future of Computing (Score:5, Informative)
The precise properties of individual neurons are unpredictable and highly variable. Worse, they require constant life support just to stay alive. A five-minute power interruption to your neural CPU and it's time to go shopping for a new one. You would certainly not want to build a practical computing tool out of them.
Neural computing will remain the domain of highly specialized research into AI and neural computing forever. We may develop neural analogs using nanotech or some other gee-whiz tech, but they will not be true neurons.
PubMed link to academic ref (Score:3, Informative)
Re:...finally... it all makes sense (Score:2, Informative)
http://www.imdb.com/title/tt0087622/ [imdb.com]
Re:Kinda cool: Neurons vs. Transistors (Score:5, Informative)
A neuron is much more than a transistor-like switch. On the one side of the neuron's central body is a set of dendrites that connect to and gather input from other neurons. The average neuron might have a thousand of these dendrites.
The synapse at the end of each dendrite acts like part of a multiply-accumulate term -- taking the signal from another neuron, multiplying it by a numerical coefficient and summing it into the total excitation level of the neuron's body. I suspect that the precision of this multiply-accumulate process is fairly low -- perhaps 8 to 16 bits.
Next, the body of the neuron has a long axon extending from it that sends the output of the neuron to other neurons (connecting to the dendrites of other neurons). This axon can be quite long: millimeters, even inches, in length. Thus, the axon is like an off-chip line driver with the potential to have a very high fanout (of 1,000 or more). (On a modern microchip, these off-chip connections are driven by much larger transistors than the small 65 nm ones used in computation.)
Third, a neuron is not a static multiply-accumulate system. The coefficients on each synapse change in response to long-term adaptive processes. This process is computationally complex and includes cross-correlation of inputs between synapses and processing of other chemical signals in the brain. Cross-correlation alone could require the equivalent of several kilobytes to several megabytes of RAM. (We won't even get into the adaptive processes that include physical growth and removal of dendrites, as this has no easy analog in hardware.)
In summary, a neuron is more than a transistor-like switch. It's a free-running 1,000-register multiply-accumulator with an off-chip line driver and a statistical processing engine that updates the coefficients on each of the multiply-accumulate terms. Thus, emulating a single neuron would require hundreds of thousands to millions of transistors.
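The multiply-accumulate picture above can be sketched in a few lines of Python. This is a toy rate-coded model, not anything from the article: the fan-in of 1,000 comes from the comment's estimate, and the weights, threshold, and output nonlinearity are illustrative assumptions.

```python
import random

NUM_DENDRITES = 1000  # typical dendrite count cited in the comment above

# Synaptic "coefficients", one per dendrite. Random values here are
# purely illustrative, not biologically measured.
weights = [random.uniform(-1.0, 1.0) for _ in range(NUM_DENDRITES)]

def neuron_output(inputs, weights, threshold=0.0):
    """Rate-coded neuron as a multiply-accumulate unit: each input signal
    is scaled by its synaptic weight and summed into the cell body's
    total excitation; the neuron fires only above threshold."""
    excitation = sum(w * x for w, x in zip(weights, inputs))
    # Crude output nonlinearity standing in for the axon's firing rate.
    return max(0.0, excitation - threshold)
```

The adaptive part the comment describes (coefficients changing over time) would sit on top of this, mutating `weights` as a function of input history.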
Re:Kinda cool: Neurons vs. Transistors (Score:5, Informative)
This axon can be quite long, millimeters, even inches, in length.
Actually, it can be over a meter in length (spinal cord to calf is one axon). Try that with a transistor!
Re:I'm no Bill Joy (Score:3, Informative)
Bothe, Samii, Eckmiller, Neurobionics - An Interdisciplinary Approach to Substitute Impaired Functions of the Human Nervous System, Amsterdam : Elsevier, 1993.
Re:Software version (more than Boolean) (Score:5, Informative)
Excellent point. You are right about the computational flexibility of neurons. They can represent a wide range of logical functions, although I believe that the single neuron is incapable of doing an XOR.
But a neuron is more than a Boolean circuit. Although a neuron seems like a two-state device (it's either quiescent or it's firing), it is more of an N-state analog device in which the pulse-rate encodes a numerical quantity (probably the equivalent of an 8 to 16 bit floating point number). That is why the dendrite field is like a giant numerical multiply-accumulate.
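The XOR limitation mentioned above is easy to check directly: model the single neuron as a linear threshold unit and brute-force a grid of weights and biases. This sketch is an assumption-laden simplification (a real neuron is the analog device described above), but it shows why no single threshold unit separates XOR while OR falls out immediately.

```python
def threshold_unit(x1, x2, w1, w2, bias):
    """A single neuron idealized as a linear threshold unit."""
    return 1 if w1 * x1 + w2 * x2 + bias > 0 else 0

def computes(table, w1, w2, bias):
    """True if this weight setting reproduces the whole truth table."""
    return all(threshold_unit(x1, x2, w1, w2, bias) == out
               for (x1, x2), out in table.items())

OR_TABLE  = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 1}
XOR_TABLE = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

# Coarse grid search over weights and bias in [-2, 2].
grid = [i / 4 for i in range(-8, 9)]
or_ok = any(computes(OR_TABLE, w1, w2, b)
            for w1 in grid for w2 in grid for b in grid)
xor_ok = any(computes(XOR_TABLE, w1, w2, b)
             for w1 in grid for w2 in grid for b in grid)
```

`or_ok` comes out true (e.g. w1 = w2 = 1, bias = -0.5), while `xor_ok` is false for every setting on the grid: XOR is not linearly separable, so a single threshold unit can never compute it.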
Re:Hasn't this been done before? (Score:3, Informative)
I was extremely impressed at the time, but I never did see anything more about it. Ah, the days of being young and believing everything you read on alt.cypherpunk...
Temporal Synchrony (Score:3, Informative)
You're right on -- the change in firing rate relative to the baseline firing rate is very important. Also, there is some reason to think (logically [ucla.edu] and biologically [uiuc.edu]) that some ensembles of neurons fire synchronously with each other and asynchronously from other ensembles of neurons. By using synchrony of firing, they gain computational power and allow for variable binding, thus allowing more formally logical computations to happen than just autocorrelation or Boolean operations.
If I have four neurons, and one represents "red," one represents "green," one represents "square" and one represents "circle," then it is very difficult to tell (based on the sustained activity of the neurons alone) whether they are responding to a red circle and a green square or a red square and a green circle. This is called "the binding problem" and, at least in neural networks, can be solved by distributing the firing patterns of the neurons over time. So, "red" and "circle" fire in synch, then rest while "green" and "square" fire in synch and then rest while "red" and "circle"... etc. Notice that you could even have "red" bound with both "circle" and "square" by being active over two epochs, thus allowing for dynamic binding of variables, etc.
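The alternating-epoch scheme above can be sketched in a few lines. The feature names and epoch structure here are just the comment's running example, not a model of real neural dynamics:

```python
# Two different scenes drive the same four feature neurons, so
# sustained activity alone cannot distinguish them -- the binding problem.
active_a = {"red", "green", "square", "circle"}  # red circle + green square
active_b = {"red", "green", "square", "circle"}  # red square + green circle
ambiguous = (active_a == active_b)  # True: identical sustained activity

# Temporal binding: features of the same object fire in the same epoch,
# and the two objects alternate. Now the firing patterns differ.
epochs_a = [{"red", "circle"}, {"green", "square"}]  # red circle + green square
epochs_b = [{"red", "square"}, {"green", "circle"}]  # red square + green circle
disambiguated = (epochs_a != epochs_b)  # True: synchrony carries the binding
```

Dynamic binding (e.g. "red" bound to both shapes) would just mean a feature appearing in more than one epoch.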
Anyway, the point of all of this is that if we can figure out how some of this temporal synchrony dimension is exploited in the brain, then we should be able to harness that computational power through silicon transistors like the one described in this article and build modules that could replace damaged regions of the brain.