
Building a Silicon Brain

prostoalex tips us to an article in MIT's Technology Review on a Stanford scientist's plan to replicate the processes inside the human brain with silicon. Quoting: "Kwabena Boahen, a neuroengineer at Stanford University, is planning the most ambitious neuromorphic project to date: creating a silicon model of the cortex. The first-generation design will be composed of a circuit board with 16 chips, each containing a 256-by-256 array of silicon neurons. Groups of neurons can be set to have different electrical properties, mimicking different types of cells in the cortex. Engineers can also program specific connections between the cells to model the architecture in different parts of the cortex."
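
For scale, the figures above imply about a million silicon neurons on the first board (a quick check; only the chip and array counts come from the article):

    chips = 16
    neurons_per_chip = 256 * 256      # 65,536 silicon neurons per chip
    print(chips * neurons_per_chip)   # 1,048,576, i.e. 2**20 neurons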
  • by tehdaemon ( 753808 ) on Tuesday February 13, 2007 @01:48AM (#17993716)
    Are you sure about that? FTA:

    "We can currently do small simulations of hundreds to thousands of neurons, but it would be great to be able to scale that up," he says.

    A 2.0GHz dual-core CPU running 2^20 neurons in the net at 100Hz gets about 40 clock cycles per neuron per update... Somebody check my math, please.

    T
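
    A quick sketch of that estimate (the 2.0 GHz dual-core figure and the 100 Hz update rate are the poster's assumptions, not from the article):

        clock_hz = 2.0e9       # 2.0 GHz per core (assumed)
        cores = 2              # dual-core
        neurons = 2 ** 20      # ~1M neurons, matching 16 chips x 256 x 256
        updates_hz = 100       # each neuron updated 100 times per second

        cycles_per_update = (clock_hz * cores) / (neurons * updates_hz)
        print(cycles_per_update)   # ~38 clock cycles per neuron per update

    So "about 40" checks out, and it leaves very little headroom for each neuron's actual arithmetic.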

  • BOINC (Score:4, Insightful)

    by Gary W. Longsine ( 124661 ) on Tuesday February 13, 2007 @02:01AM (#17993804) Homepage Journal
    I would think a BOINC project might produce enough muscle to get a really big brain going. Imagine a BOINC [berkeley.edu] cluster of...

    ;-)
  • by QuantumG ( 50515 ) * <qg@biodome.org> on Tuesday February 13, 2007 @02:37AM (#17994036) Homepage Journal
    I, like many other engineers, don't give a shit. We just want to solve problems to which there are no simple solutions, and "AI" offers some approaches that work.

    Leave the philosophy till after we have the science.
  • by NixieBunny ( 859050 ) on Tuesday February 13, 2007 @02:38AM (#17994042) Homepage
    An interesting aspect of the brain is that it may be possible to build circuitry that mimics its behavior without understanding that behavior. There are many complex systems (collections of simple parts) that exhibit coherent behavior you just wouldn't expect. Swarms of locusts are one example. Insect robots that learn how to walk each time you power them on are another.
  • by TheCouchPotatoFamine ( 628797 ) on Tuesday February 13, 2007 @03:13AM (#17994222)
    The simple reason is that software cannot compute every iteration in parallel. Imagine light beams, for instance: if you were to "sum" the intensity of several beams at a single photodiode, it would occur simultaneously, as a single operation. Software, regardless of the number of possible processors within reasonable (read: current technological) limits, requires an iterative approach, such that at every stage of the calculation each neurode (neuron+node=neurode) has to be calculated in order, drastically slowing the network down. Since electrical currents can be summed simultaneously (in an analog circuit, which this undoubtedly is at some stage), it allows for the same kind of "instantaneous" calculation that your brain currently enjoys. That's why it's so important to do it in hardware, and why optical techniques, not electrical ones, ultimately hold far more promise. It's all in the book I recommended a post or two above...
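
    A minimal sketch of the contrast (hypothetical weights and inputs): in software the weighted sum below is a loop that visits one synapse at a time, while an analog summing node adds all of its input currents in a single physical settling step.

        inputs = [0.2, -0.5, 0.9, 0.1]    # hypothetical synaptic inputs
        weights = [1.0, 0.3, -0.7, 2.0]   # hypothetical synaptic weights

        # Serial software summation: O(n) steps, one per synapse.
        activation = 0.0
        for x, w in zip(inputs, weights):
            activation += x * w
        print(activation)

        # An analog circuit (or photodiode) has no equivalent loop: all
        # weighted currents arrive at the summing node simultaneously.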
  • by wall0159 ( 881759 ) on Tuesday February 13, 2007 @03:26AM (#17994272)

    The calculations involve adjusting the weights of the connections between neurons, and the number of connections generally scales roughly quadratically with the number of neurons. This is because each neuron typically has connections to many other neurons.

    So, your math might be right, but your assumptions are wrong. :-)
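
    To make the scaling concrete (a sketch; the ~10,000 synapses per neuron figure is a commonly cited rough number for cortex, not from the article):

        for n in (1_000, 10_000, 100_000):
            dense = n * (n - 1)       # all-to-all wiring: grows quadratically
            cortical = n * 10_000     # fixed fan-in: grows linearly with n
            print(n, dense, cortical)

    Either way, the work per update is dominated by connections, not neurons.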
  • by zestyping ( 928433 ) on Tuesday February 13, 2007 @03:37AM (#17994328) Homepage
    Quantum physics can be mathematically modelled, just as Newtonian physics can. It may be counterintuitive, but it's not magic.
  • by Venik ( 915777 ) on Tuesday February 13, 2007 @04:07AM (#17994472)
    How can you build a software model of a process you don't understand? The best bet is to build a hardware approximation of a human brain and hope that, somehow, the same processes start occurring, quantum or otherwise. And if that doesn't work, then you'll have to do some real science.
  • by AndOne ( 815855 ) on Tuesday February 13, 2007 @04:34AM (#17994608)
    Having been a fan of neuromorphic engineering for several years now (note: I'm not an active researcher, but I pretend some days :) ), one of the major advantages of neuromorphic hardware isn't necessarily its ability to model biological systems but the fact that the devices are extremely low power. When modeling neurons in silicon (at least back in the day of Carver Mead's work on cochlea and retina chips, and I doubt it's changed too much, but I could be wrong), the transistors would run in subthreshold mode (basically leakage currents, i.e. nominally OFF), since those current curves model the expected neural response curves. One of Boahen's stated goals (at least on his website when he was at Penn) was to reduce power consumption and improve processing power for problem solving via these techniques. His lab has also been in Scientific American a couple of times in the last few years for work on accurately modeling neuronal spiking in hardware. I have the articles, but not at hand, so I can't cite them at the moment; they were fun reads.

    So in summary, it's more than just modeling the brain. It's about letting biology inspire us to make better and more efficient computing systems.
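
    The power argument comes from that subthreshold (weak-inversion) regime, where drain current is exponential in gate voltage and tiny in magnitude. A rough sketch using the standard first-order weak-inversion relation (the off-current and slope-factor values below are illustrative assumptions, not from the article):

        import math

        # First-order weak-inversion model: I_D ~ I0 * exp(Vgs / (n * Vt)).
        I0 = 1e-15      # off-current scale in amps (illustrative assumption)
        n = 1.5         # subthreshold slope factor (typical value)
        Vt = 0.0259     # thermal voltage kT/q at room temperature, volts

        for vgs in (0.1, 0.2, 0.3):
            i_d = I0 * math.exp(vgs / (n * Vt))
            print(f"Vgs = {vgs:.1f} V -> I_D = {i_d:.2e} A")

    Currents stay in the picoamp range while each 0.1 V on the gate multiplies them about 13x, which is exactly the exponential, ultra-low-power behavior these chips exploit.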
  • by joto ( 134244 ) on Tuesday February 13, 2007 @06:52AM (#17995264)
    I'm sorry, I don't have 4.54 billion years to spend on something that might produce similar results. Besides, there's no guarantee we would understand that either. Could you suggest something faster and/or better/more predictable?
  • by TheLink ( 130905 ) on Tuesday February 13, 2007 @06:58AM (#17995288) Journal
    Uh, if you're going to resort to getting an intelligent creature without understanding how it was done, you might as well get one from a pet shop.

    Sure, animals have disadvantages, but how sure are you that the AI you get from that "evolve it" approach won't have similar disadvantages too?
  • by fyngyrz ( 762201 ) * on Tuesday February 13, 2007 @07:09AM (#17995340) Homepage Journal
    Wasn't science invented to aid in philosophy?

    Not really. Science was invented because philosophy and its bastard child, alchemy, weren't working. Philosophy still doesn't work very well, despite the additional time it has had since Sir Francis Bacon whacked science into shape. It's a playground for abstractions without homes in nature. Sometimes we get a small tidbit or two out of it, but mostly it is simply a domain of divisive and socially landlocked bickering. In the meantime, look at the computer on your desk. And your wrist, if you have a modern watch. That's the result of science, not philosophy. Science actually works. That is what makes it - and math, if you like to define math as "other" than science, as many do - stand out above all other pursuits.

  • by vertinox ( 846076 ) on Tuesday February 13, 2007 @10:59AM (#17997008)
    How can you build a software model of a process you don't understand?

    Same way the Wright Brothers built their first aircraft.

    1. Make observations of things that do fly.
    2. Make an approximation of what it takes to fly based on those observations.
    3. Build a model based on that.
    4. See if it works in a trial run.
    5. If it doesn't, back to step one.

    Obviously, the Wright Brothers understood basic aerodynamics, but only at a certain level, from observations of test gliders and the semi-wind-tunnel setup they had built.

    But the majority of their work was trial and error. They were bicycle engineers, after all, and didn't really have professional schooling in the field of heavier-than-air flight.

    Trial and error is simply one of the better ways humans have of understanding things they don't yet understand. Ruling out what can and cannot be done is part of the scientific process.
  • by mooingyak ( 720677 ) on Tuesday February 13, 2007 @11:18AM (#17997252)
    Take, for example, the proposition that 'prime numbers are infinite'. We all think they are infinite, but there is no mathematical proof for it yet.

    There has been a proof of it for a long time. Gettin' wiki [wikipedia.org] wit it.

    Quoting from the link:

    There are infinitely many prime numbers

    The oldest known proof for the statement that there are infinitely many prime numbers is given by the Greek mathematician Euclid in his Elements (Book IX, Proposition 20). Euclid states the result as "there are more than any given [finite] number of primes", and his proof is essentially the following:

            Suppose you have a finite number of primes. Call this number m. Multiply all m primes together and add one (see Euclid number). The resulting number is not divisible by any of the finite set of primes, because dividing by any of these would give a remainder of one. And one is not divisible by any primes. Therefore it must either be prime itself, or be divisible by some other prime that was not included in the finite set. Either way, there must be at least m + 1 primes. But this argument applies no matter what m is; it applies to m + 1, too. So there are more primes than any given finite number.

    This previous argument explains why the product of m primes plus 1 must be divisible by some prime not among the m primes, or be prime itself. A common mistake is thinking Euclid's proof says the prime product plus 1 is always prime.
    2 × 3 × 5 × 7 × 11 × 13 + 1 = 30,031 = 59 × 509 (both primes) shows this is not the case.
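
    A quick sketch demonstrating both halves of that point: the Euclid number for the first six primes is composite, yet each of the original primes leaves remainder 1, so its factors must be new primes.

        from math import prod

        primes = [2, 3, 5, 7, 11, 13]
        euclid = prod(primes) + 1             # 30,031: not itself prime
        print([euclid % p for p in primes])   # [1, 1, 1, 1, 1, 1]
        print(euclid == 59 * 509)             # True; 59 and 509 are prime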
  • by Rei ( 128717 ) on Tuesday February 13, 2007 @01:49PM (#17999702) Homepage
    That's not really true. There are many, many different approaches out there; none has hit the "sweet spot" we're looking for. Some abstract heavily away from the brain, using mathematical representations to get more performance and thus effectively simulate more neurons. Others go with a more physical representation, but consequently accept poorer performance. Obviously there's a good bit of variation among the mathematical models, but there are also variations among the physical models -- what do you try to model, how in-depth do you try to model it, etc.

    A model simply cannot be perfect; sure, we have physics models that can deal with subatomic particles and quantum effects, but even if we knew (from anatomical studies) *exactly* what to put where in what cells, the concept of that much CPU time for such a simulation ever being available is just ludicrous. Any computer models of the brain *must* be simplified. The question is: what do we simplify, and how? In other words, what are the tradeoffs in terms of loss (if any) of cognitive ability vs. the gains from faster performance? What is the main reason why our models haven't performed as well as we would like? Are we short on CPU? Are we missing knowledge of some anatomical details? Are there anatomic details that we know about but aren't modelling well enough?

    And then you get into the more basic questions, like "how do we develop our network?" Random interlinks? Geometrically correlated interlinks? Evolved interlinks? Do we try to do both backpropagation *and* evolutionary adaptation, accepting that by splitting our CPU time between the two tasks we do neither as well as we could? What sort of constraints do we put on the evolutionary process? What sort of training (and how much) do we do?

    These are the sorts of things we need to solve. Just an example: one net that I read about a few years ago, used for audio recognition, performed exceedingly well -- many orders of magnitude better than its predecessors. What did they change? They simply added varying delay times for signal propagation to their model. That's it. Undoubtedly, it will be these kinds of realizations, discovered through trial and error, that lead us to computer models of neurons that are as intelligent as possible.
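
    A minimal sketch of the delay idea (the topology, weights, and delays below are illustrative assumptions, not the audio-recognition net described above): give every connection its own propagation delay and process spikes as timed events.

        import heapq

        # Each connection: (target neuron, weight, delay in ms).
        connections = {
            0: [(1, 0.8, 2.0), (2, 0.5, 5.0)],
            1: [(2, 1.0, 1.0)],
            2: [],
        }

        events = [(0.0, 0, 1.0)]   # (arrival time ms, neuron, signal value)
        while events:
            t, neuron, value = heapq.heappop(events)
            print(f"t={t:4.1f} ms: neuron {neuron} receives {value:.2f}")
            for target, weight, delay in connections[neuron]:
                heapq.heappush(events, (t + delay, target, value * weight))

    The same input now reaches different neurons at different times, which is exactly the kind of timing structure an audio task can exploit.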
