
Nerve Cells Successfully Grown on Silicon

crabpeople writes "Researchers at the University of Calgary have found that nerve cells grown on a microchip can learn and memorize information which can be communicated to the brain. 'We discovered that when we used the chip to stimulate the neurons, their synaptic strength was enhanced,' said Naweed Syed, a neurobiologist at the University of Calgary's faculty of medicine."
  • by nhaze ( 684461 ) on Friday February 20, 2004 @05:28AM (#8337751)
    I thought the Pine Lab at Caltech had done this several years ago. Neurochip Project [gatech.edu]
  • by kinnell ( 607819 ) on Friday February 20, 2004 @05:40AM (#8337801)
    No. You're right, growing neurons on silicon is nothing new, but the breakthrough here is that they have been able to stimulate the neurons into forming new connections, rather than just measuring the response of existing networks.
  • by Zoolander ( 590897 ) on Friday February 20, 2004 @05:49AM (#8337835)
    Well, nature has had a tiny bit more time to do her stuff than we have...
  • Re:I'm no Bill Joy (Score:1, Informative)

    by Anonymous Coward on Friday February 20, 2004 @05:54AM (#8337849)
    Yeah, that's what you think. Go read the Battle Angel Alita series sometime; it has a great piece on the whole 'enhancing humans with chips' thing.

    -- vranash
  • Re:Other uses? (Score:2, Informative)

    by Anonymous Coward on Friday February 20, 2004 @05:55AM (#8337851)
    Anybody have a link to a more verbose article?

    Link to the article published in Physical Review Letters [aip.org]
  • by NeuroKoan ( 12458 ) on Friday February 20, 2004 @05:55AM (#8337854) Homepage Journal
    Quote from the above link
    This particular chip has no electrodes. The grillwork design allows the neurons to grow, and contains them indefinitely. We are currently building full chips with this design, and with electrodes.

    Keep an eye out for this page. Once we get fully functional chips, it shouldn't be long before I can show some real experiments and data.


    I think the big news is that electrodes were on the silicon chip, and were actually able to "learn and memorize information which can be communicated to the brain" (as per the original article).

    Also, the page looks like it hasn't been updated since 1995. I wonder what happened to this project. From the page, Maher and Thorne seemed so close to what has just been achieved in Canada.
  • by Illserve ( 56215 ) on Friday February 20, 2004 @06:45AM (#8338000)
    This is certainly not the future of computing!

    The precise properties of individual neurons are unpredictable and highly variable. Worse, they require constant life support just to stay alive. A 5-minute power interruption to your neural CPU and it's time to go shopping for a new one. You would certainly not want to build a practical computing tool out of them.

    Neural computing will remain the domain of highly specialized AI research forever. We may develop neural analogs using nanotech or some other gee-whiz tech, but they will not be true neurons.
  • by covenant_uk ( 752047 ) on Friday February 20, 2004 @06:57AM (#8338028)
    PubMed link [nih.gov] to the abstract for their research. The publisher's site sometimes holds a free copy of the full paper (depends on the journal).
  • by Anonymous Coward on Friday February 20, 2004 @07:31AM (#8338104)
    It's based on a Dutch movie called "De Lift"

    http://www.imdb.com/title/tt0087622/ [imdb.com]

  • by G4from128k ( 686170 ) on Friday February 20, 2004 @07:32AM (#8338105)
    Neurons are much larger than transistors, but the two aren't really comparable. The main body of a neuron is usually around 25 microns (25,000 nm) in diameter, and it runs at a "clock speed" of at most a few kilohertz.

    A neuron is much more than a transistor-like switch. On one side of the neuron's central body is a set of dendrites that connect to and gather input from other neurons. The average neuron might have a thousand of these dendrites.

    The synapse at the end of each dendrite acts like part of a multiply-accumulate term -- taking the signal from another neuron, multiplying it by a numerical coefficient, and summing it into the total excitation level of the neuron's body. I suspect that the precision of this multiply-accumulate process is fairly low -- perhaps 8 to 16 bits.

    Next, the body of the neuron has a long axon extending from it that sends the output of the neuron to other neurons (connecting to the dendrites of other neurons). This axon can be quite long, millimeters, even inches, in length. Thus, the axon is like an off-chip line driver with the potential for very high fanout (1,000 or more). (On a modern microchip, these off-chip connections are driven by much larger transistors than the small 65 nm ones used in computation.)

    Third, a neuron is not a static multiply-accumulate system. The coefficients on each synapse change in response to long-term adaptive processes. This process is computationally complex and includes cross-correlation of inputs between synapses and processing of other chemical signals in the brain. Cross-correlation alone could require the equivalent of several kilobytes to several megabytes of RAM. (We won't even get into the adaptive processes that include physical growth and removal of dendrites, as these have no easy analog in hardware.)

    In summary, a neuron is more than a transistor-like switch. It's a free-running 1,000-register multiply-accumulator with an off-chip line driver and a statistical processing engine that updates the coefficients on each of the multiply-accumulate terms. Thus, emulating a single neuron would require hundreds of thousands to millions of transistors.
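The multiply-accumulate picture in the comment above maps naturally onto a few lines of code. Below is a minimal Python sketch of that rate-based model; the class name, the 1,000-input size, the sigmoid output, and the toy Hebbian-style update are illustrative assumptions of mine, not anything from the article or the comment's author.

```python
# Minimal sketch (not from the article) of the "multiply-accumulate" neuron
# model described above. Sizes, names, and the update rule are illustrative.
import numpy as np

class RateNeuron:
    def __init__(self, n_dendrites=1000, rng=None):
        rng = rng or np.random.default_rng(0)
        # One coefficient per dendritic synapse.
        self.weights = rng.normal(0.0, 0.1, n_dendrites)

    def fire(self, inputs):
        # Multiply each input rate by its synaptic coefficient and accumulate.
        excitation = float(np.dot(self.weights, inputs))
        # Saturating output: a crude stand-in for the kilohertz firing ceiling.
        return 1.0 / (1.0 + np.exp(-excitation))

    def adapt(self, inputs, output, lr=0.01):
        # Toy Hebbian-style update standing in for the slow, correlation-based
        # coefficient changes the comment mentions.
        self.weights += lr * output * np.asarray(inputs)

# Usage: one neuron reading a thousand presynaptic firing rates.
neuron = RateNeuron()
rates_in = np.random.default_rng(1).random(1000)
rate_out = neuron.fire(rates_in)
neuron.adapt(rates_in, rate_out)
```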
  • This axon can be quite long, millimeters, even inches, in length.

    Actually, it can be over a meter in length (spinal cord to calf is one axon). Try that with a transistor!

  • Re:I'm no Bill Joy (Score:3, Informative)

    by eric76 ( 679787 ) on Friday February 20, 2004 @07:52AM (#8338191)
    Check out the book:

    Bothe, Samii, Eckmiller, Neurobionics: An Interdisciplinary Approach to Substitute Impaired Functions of the Human Nervous System, Amsterdam: Elsevier, 1993.
  • by G4from128k ( 686170 ) on Friday February 20, 2004 @08:14AM (#8338266)
    Or, for a more software-oriented interpretation, it's a function that takes a bunch of boolean parameters and returns a boolean. Anyone who's ever done any programming or computer architecture should see why you can easily process anything with this.

    Excellent point. You are right about the computational flexibility of neurons. They can represent a wide range of logical functions, although I believe that a single neuron is incapable of computing an XOR.

    But a neuron is more than a Boolean circuit. Although a neuron seems like a two-state device (it's either quiescent or it's firing), it is more of an N-state analog device in which the pulse rate encodes a numerical quantity (probably the equivalent of an 8 to 16 bit floating-point number). That is why the dendrite field is like a giant numerical multiply-accumulator.
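As a quick aside on the XOR point above, a short sketch makes the limitation concrete: a single linear-threshold unit (the weights and thresholds below are my own toy values, not from the thread) can realize AND and OR, but no single set of weights realizes XOR, because XOR is not linearly separable.

```python
# Toy illustration of the parent's claim: one threshold unit does AND/OR,
# but XOR needs at least two layers of such units.
def threshold_unit(w1, w2, bias):
    return lambda a, b: int(w1 * a + w2 * b + bias > 0)

AND = threshold_unit(1, 1, -1.5)
OR  = threshold_unit(1, 1, -0.5)

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
print([AND(a, b) for a, b in inputs])  # [0, 0, 0, 1]
print([OR(a, b) for a, b in inputs])   # [0, 1, 1, 1]

# XOR would need output 1 for (0,1) and (1,0) but 0 for (0,0) and (1,1);
# no single (w1, w2, bias) separates those cases with one straight line.
```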
  • by Saint Aardvark ( 159009 ) on Friday February 20, 2004 @10:47AM (#8339169) Homepage Journal
    Weird -- I remember reading an announcement on this subject on Usenet back when I was in university. What's more, I was able to google for the original article [google.ca] from January, 1991:

    Hello. I just wanted to inform the netland that a direct nerve to transistor
    interface is finally operational. The invention was privately announced 1
    month ago, but is now out in the public. It is possible now to grow a nerve
    over a silicon substrate in a way that the nerve has a capacitive connection
    to a FE-Transistor built into the substrate. The signal to noise is good
    enough to resolve the bandwith of a usual neuron. For more information, watch
    out for an article of the university of Heidelberg in an upcoming issue of
    'Nature'.
    Welcome to Cyberspace.

    Henrik Klagges
    Scanning tunnel microscopy group at LMU Munich

    I was extremely impressed at the time, but I never did see anything more about it. Ah, the days of being young and believing everything you read on alt.cypherpunk...

  • Temporal Synchrony (Score:3, Informative)

    by percepto ( 652270 ) on Friday February 20, 2004 @12:16PM (#8340014)
    But a neuron is more than a Boolean circuit. Although a neuron seems like a two-state device (it's either quiescent or it's firing), it is more of an N-state analog device in which the pulse rate encodes a numerical quantity (probably the equivalent of an 8 to 16 bit floating-point number). That is why the dendrite field is like a giant numerical multiply-accumulator.

    You're right on -- the change in firing rate relative to the baseline firing rate is very important. Also, there is some reason to think (logically [ucla.edu] and biologically [uiuc.edu]) that some ensembles of neurons fire synchronously with each other and asynchronously from other ensembles of neurons. By using synchrony of firing, they gain computational power and allow for variable binding, thus allowing more formally logical computations than just autocorrelation or boolean operations.

    If I have four neurons, and one represents "red," one represents "green," one represents "square" and one represents "circle," then it is very difficult to tell (based on the sustained activity of the neurons alone) whether they are responding to a red circle and a green square or a red square and a green circle. This is called "the binding problem" and, at least in neural networks, can be solved by distributing the firing patterns of the neurons over time. So, "red" and "circle" fire in synch, then rest while "green" and "square" fire in synch and then rest while "red" and "circle"... etc. Notice that you could even have "red" bound with both "circle" and "square" by being active over two epochs, thus allowing for dynamic binding of variables, etc.

    Anyway, the point of all of this is that if we can figure out how some of this temporal synchrony dimension is exploited in the brain, then we should be able to harness that computational power through silicon transistors like the one described in this article and build modules that could replace damaged regions of the brain.
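To make the binding-by-synchrony idea above concrete, here is a toy sketch; the four features, the epoch-by-epoch firing patterns, and the group-by-identical-pattern readout are my own illustrative construction under the comment's assumptions, not the mechanism from the linked papers.

```python
# Toy sketch of binding-by-synchrony: features that fire in the same epochs
# are read as bound together ("red circle" and "green square").
spike_trains = {
    # 1 = active in that epoch, 0 = silent.
    "red":    [1, 0, 1, 0],
    "circle": [1, 0, 1, 0],
    "green":  [0, 1, 0, 1],
    "square": [0, 1, 0, 1],
}

def bindings(trains):
    # Group features that share an identical temporal firing pattern.
    groups = {}
    for feature, train in trains.items():
        groups.setdefault(tuple(train), []).append(feature)
    return [set(g) for g in groups.values()]

print(bindings(spike_trains))  # e.g. [{'red', 'circle'}, {'green', 'square'}]
```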
