Building a Silicon Brain 236
prostoalex tips us to an article in MIT's Technology Review on a Stanford scientist's plan to replicate the processes inside the human brain with silicon. Quoting: "Kwabena Boahen, a neuroengineer at Stanford University, is planning the most ambitious neuromorphic project to date: creating a silicon model of the cortex. The first-generation design will be composed of a circuit board with 16 chips, each containing a 256-by-256 array of silicon neurons. Groups of neurons can be set to have different electrical properties, mimicking different types of cells in the cortex. Engineers can also program specific connections between the cells to model the architecture in different parts of the cortex."
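A quick sanity check on the scale described in the summary (all figures quoted from the article; the per-chip array size implies the totals computed below):

```python
# Back-of-envelope check of the first-generation design:
# 16 chips, each carrying a 256-by-256 array of silicon neurons.
chips = 16
neurons_per_chip = 256 * 256       # 65,536 neurons per chip
total = chips * neurons_per_chip
print(total)                       # 1048576, i.e. exactly 2**20 neurons
```

That total, 2^20 neurons, is the figure the "mibiNeuron" comment below is riffing on.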
Re:Only one mibiNeuron? (Score:4, Insightful)
A 2.0GHz dual-core CPU running 2^20 neurons in the net at 100Hz gets about 40 clock cycles per neuron per update... Somebody check my math, please.
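The parent's arithmetic can be checked directly (this assumes both cores are fully utilized and one useful operation per clock cycle, which is a generous simplification):

```python
# Checking the parent comment's cycle budget.
cycles_per_sec = 2 * 2.0e9           # dual-core at 2.0 GHz
updates_per_sec = 2**20 * 100        # 2^20 neurons, each updated at 100 Hz
cycles_per_update = cycles_per_sec / updates_per_sec
print(round(cycles_per_update, 1))   # 38.1 -- so "about 40" checks out
```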
BOINC (Score:4, Insightful)
Re:Depends on What Consciousness Is (Score:4, Insightful)
Leave the philosophy till after we have the science.
Re:Another Challenge (Score:3, Insightful)
Re:This doesn't make sense to me... (Score:2, Insightful)
Re:Only one mibiNeuron? (Score:3, Insightful)
The calculations involve adjusting the weights of the connections between neurons, and the number of connections scales roughly quadratically with the number of neurons, because each neuron typically connects to many other neurons.
So, your math might be right, but your assumptions are wrong.
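The point about connections can be made concrete. The fan-out figure below is an assumption for illustration (real cortical neurons are often quoted at 1,000-10,000 synapses each), but even a modest value blows the per-timestep cycle budget from the earlier comment:

```python
# Why a per-neuron cycle budget is misleading: the work per timestep
# is dominated by synapses, not neurons. Fan-out here is an assumed,
# illustrative figure.
neurons = 2**20
fanout = 1000                           # assumed synapses per neuron
connections = neurons * fanout          # ~1.05e9 weights to touch per update
cycles_available = 2 * 2.0e9 / 100      # dual 2.0 GHz cores, per 100 Hz step
print(connections, cycles_available)
# ~1.05e9 synapse updates vs ~4.0e7 cycles per timestep: roughly 26x
# short, even before memory bandwidth enters the picture.
```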
Re:Depends on What Consciousness Is (Score:2, Insightful)
Re:Depends on What Consciousness Is (Score:5, Insightful)
More than just modeling the brain (Score:4, Insightful)
So in summary, it's more than just modeling the brain. It's about letting biology inspire us to make better and more efficient computing systems.
Re:Depends on What Consciousness Is (Score:3, Insightful)
Re:Depends on What Consciousness Is (Score:3, Insightful)
Sure animals have disadvantages but how sure are you that the AI you get after doing that "evolve it" thing won't have similar disadvantages too?
Re:Depends on What Consciousness Is (Score:1, Insightful)
Not really. Science was invented because philosophy and its bastard child, alchemy, weren't working. Philosophy still doesn't work very well, despite the additional time it has had since Sir Francis Bacon whacked science into shape. It's a playground for abstractions without homes in nature. Sometimes we get a small tidbit or two out of it, but mostly it is simply a domain of divisive and socially-landlocked bickering. In the meantime, look at the computer on your desk. And at your wrist, if you have a modern watch. That's the result of science, not philosophy. Science actually works. That is what makes it - and math, if you like to define math as "other" than science, as many do - stand out above all other pursuits.
Re:Depends on What Consciousness Is (Score:3, Insightful)
Same way the Wright Brothers built their first aircraft.
1. Make observations of things that do fly.
2. Make an approximation of what it takes to fly based on those observations.
3. Build a model based on that approximation.
4. See if it works in a trial run.
5. If it doesn't, back to step one.
Obviously, the Wright Brothers understood basic aerodynamics, but only at a certain level from observations of test gliders and the semi-wind tunnel setup they had built.
But the majority of their work was trial and error. They were bicycle mechanics, after all, and didn't really have professional schooling in their field of heavier-than-air flight.
Trial and error is simply one of the better ways humans have of understanding things they don't yet understand. It is part of the scientific process to rule out what can and cannot be done.
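The five-step loop above can be sketched as a minimal trial-and-error search. Everything here is illustrative: random perturbation stands in for "make an approximation and build a model," and an error function stands in for the trial run:

```python
import random

random.seed(0)  # for reproducibility of this sketch

def trial_and_error(evaluate, candidate, steps=1000):
    """Keep perturbing the best-known model; keep what works (steps 2-5)."""
    best, best_score = candidate, evaluate(candidate)
    for _ in range(steps):
        trial = best + random.uniform(-1, 1)   # approximate and build a model
        score = evaluate(trial)                # trial run
        if score < best_score:                 # keep it only if it improves
            best, best_score = trial, score
    return best

# Toy usage: find a value whose square is close to 2 (i.e. approximate sqrt(2))
# by observation and refinement alone, with no closed-form theory.
approx = trial_and_error(lambda x: abs(x * x - 2), candidate=1.0)
print(round(approx, 3))
```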
Re:brain does not use math logic,but pattern match (Score:3, Insightful)
There has been a proof of it for a long time. Gettin' wiki [wikipedia.org] wit it.
Quoting from the link:
Re:It's worse than that. (Score:3, Insightful)
A model simply cannot be perfect; sure, we have physics models that can deal with subatomic particles and quantum effects, but even if we knew (from anatomical studies) *exactly* what to put where in which cells, the idea that enough CPU time for such a simulation would ever be available is just ludicrous. Any computer model of the brain *must* be simplified. The question is: what do we simplify, and how? In other words, what are the tradeoffs in terms of loss (if any) of cognitive ability vs. the gains from faster performance? What is the main reason our models haven't performed as well as we would like? Are we short on CPU? Are we missing knowledge of some anatomical details? Are there anatomical details that we know about but aren't modelling well enough?
And then you get into the more basic questions, like "how do we develop our network?" Random interlinks? Geometrically correlated interlinks? Evolved interlinks? Do we try to do both backpropagation *and* evolutionary adaptation, accepting that by splitting our CPU time between the two tasks, we're doing neither as well as we could? What sort of constraints do we place on the evolutionary process? What sort of training (and how much) do we do?
These are the sorts of things we need to solve. Just an example: one net that I read about a few years ago, used for audio recognition, performed exceedingly well -- many times better than its predecessors. What did they change? They simply added varying delay times for signal propagation to their model. That's it. Undoubtedly, it will be these kinds of realizations, discovered through trial and error, that will lead us to the most capable computer models of neurons possible.
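The delay idea mentioned above can be sketched very simply: each synapse buffers its input for some number of timesteps before delivering it. The class name, delay length, and weight below are all illustrative assumptions, not details from the net described:

```python
from collections import deque

class DelayedSynapse:
    """A synapse that delivers its weighted input `delay` timesteps late."""

    def __init__(self, weight, delay):
        self.weight = weight
        # Pre-fill the delay line with silence; maxlen keeps it fixed-size.
        self.buffer = deque([0.0] * delay, maxlen=delay)

    def transmit(self, signal):
        # The oldest buffered value comes out; the new one goes in.
        delayed = self.buffer[0]
        self.buffer.append(signal * self.weight)
        return delayed

# Usage: a constant input of 1.0 only reaches the output after 3 steps.
syn = DelayedSynapse(weight=0.5, delay=3)
outputs = [syn.transmit(1.0) for _ in range(5)]
print(outputs)   # [0.0, 0.0, 0.0, 0.5, 0.5]
```

Varying `delay` per synapse gives the network a cheap notion of time, which is plausibly why it helps so much for audio.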