Researchers Simulate Building Block of Rat's Brain
slick_shoes passes on an article in the Guardian about the Blue Brain project in Switzerland that has developed a computer simulation of the neocortical column — the basic building block of the neocortex, the higher functioning part of our brains — of a two-week-old rat. (Here is the project site.) The model, running on an IBM Blue Gene/L supercomputer, simulates 10,000 neurons and all their interconnections. According to the project, its behavior matches that of its biological counterpart. Thousands of such NCCs make up a rat's neocortex, and millions a human's. "Project director Henry Markram believes that with the state of technology today, it is possible to build an entire rat's neocortex. From there, it's cats, then monkeys and finally, a human brain."
Re:At what point... (Score:5, Interesting)
When computer intelligence can give a convincing argument for doing so.
Re:Neocortex too complex (Score:3, Interesting)
That is technically impossible, considering the behavior of the mammalian brain is not well understood at any level.
You're missing the point. The entire purpose of this project is to increase our understanding of how the brain works
This is where it starts (Score:2, Interesting)
Imagine simulating a human brain, then incorporating an interface with software that enhances its functionality, from super-fast arithmetic to image output. The results would be incredible.
Re:Neocortex too complex (Score:3, Interesting)
I think I know what the OP is asking:
How can we be sure we have the right answer when we don't have the reference model fixed yet? Using yet another oh-so-fun car analogy:
Kinda hard to duplicate a car without knowing how it works. Sure, you COULD try to build a Ferrari, and sure, it COULD run on a steam engine... It might look the same, but it wouldn't function similarly (speed-wise)...
Re:At what point... (Score:3, Interesting)
If we get a computer to behave or think like a rat, should a rat get the same rights of protection that a computer does...
I think it's important to keep in mind that humans (and any other organic life) are a mind and a body. It's a deep philosophical question whether a brain can be a mind without a body, and it is the human mind that we value, not just the brain: hardware is useless without software.
I think it would be more useful to talk about human behavior models rather than artificial agents (or artificial intelligence)
Re:Neocortex too complex (Score:3, Interesting)
In practice, neurons seem to be a lot more complicated than that. Certainly, the inputs carry variable state and the wiring is known to change over time (even in the adult brain, the wiring is dynamic), but if I'm understanding the current work correctly, there are also potentially multiple independent outputs, such that triggering one output will not necessarily trigger any other output.
Ok, you can simulate multi-state logic with binary logic - well, with enough binary logic - and you can simulate N independent outputs with N independent single-output neurons. This means that simulating, say, 10,000 biological neurons, each with 16 independent outputs on average and 16 independent inputs where each input has 16 potential states, would take 40,960,000 binary computer-simulated neurons. First, the current knowledge of neuron I/O probably post-dates the analysis in this study, which would invalidate the conclusions. Second, even if the study had incorporated that knowledge, there is simply no way they could have simulated enough neurons to produce the results they claim.
My conclusion is that the study would have to be evaluated, in light of what is known now rather than what was known at the time it was conducted, by experts in neurological science and computer science, to determine whether what the study is thought to show is what it actually shows. My belief is that it probably does not, but there's a reason peer review doesn't include the opinions of bloggers. Peer review is only valid when it is conducted by knowledgeable people in the field who have full access to potentially contrary knowledge and who are willing to use that knowledge to challenge and test a paper to its limits. Nonetheless, anyone can spot potential flaws.
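The back-of-the-envelope arithmetic in the parent comment can be checked directly. This sketch just multiplies out the illustrative figures (16 outputs, 16 inputs, 16 states per input are the comment's assumptions, not measured neuroscience):

```python
def binary_neuron_equivalent(biological_neurons, outputs_per_neuron,
                             inputs_per_neuron, states_per_input):
    """Binary single-output neurons needed if every output/input/state
    combination of a biological neuron gets its own binary unit."""
    per_neuron = outputs_per_neuron * inputs_per_neuron * states_per_input
    return biological_neurons * per_neuron

# 10,000 biological neurons, 16 outputs, 16 inputs, 16 states each
total = binary_neuron_equivalent(10_000, 16, 16, 16)
print(total)  # 40960000, matching the 40,960,000 figure above
```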
Re:The Intelligence Game (Score:2, Interesting)
I am and I have. I have been working on just such a project [rebelscience.org] for years on my own time and my own dime. Trying to use computers to simulate neurons in all their biological glory is a pipe dream. We know how several types of neurons work on a higher and simpler level: they send and receive spikes via synapses. That's the only level that needs to be simulated to achieve intelligence. The brain is a discrete temporal mechanism that uses multiple integrated networks to learn and adapt. I'm sure Markram et al are aware of this but being biologists, they can't seem to move beyond the low-level complexities.
What we need first is an overall theory to play with, not supercomputers. Once we have a theory in place, we'll have a model to experiment with (even on a small scale), something that can evolve over time. It does not have to do much (learning to walk and navigate from scratch would do fine), as long as it can learn and adapt and is provably scalable. If you can show that, governments and corporations will fall all over themselves to give you a parallel computer as big as the island of Manhattan, if necessary.
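The spike-level view the parent describes (neurons that only send and receive spikes via synapses, in discrete time) can be sketched with a tiny network of leaky integrate-and-fire neurons. Every constant here (threshold, leak, weight ranges) is illustrative; this is not the rebelscience.org model or the Blue Brain code, just a minimal example of the abstraction level being argued for:

```python
import random

class SpikingNeuron:
    """Leaky integrate-and-fire neuron: accumulates weighted input,
    leaks charge each step, fires when it crosses a threshold.
    Constants are illustrative placeholders."""
    def __init__(self, threshold=1.0, leak=0.9):
        self.potential = 0.0
        self.threshold = threshold
        self.leak = leak

    def step(self, weighted_input):
        self.potential = self.potential * self.leak + weighted_input
        if self.potential >= self.threshold:
            self.potential = 0.0   # reset after firing
            return 1               # emit a spike
        return 0

def run_network(steps=50, n=5, seed=0):
    rng = random.Random(seed)
    neurons = [SpikingNeuron() for _ in range(n)]
    # weights[i][j] = synaptic strength from neuron j to neuron i
    weights = [[rng.uniform(0.0, 0.5) for _ in range(n)] for _ in range(n)]
    spikes = [0] * n
    history = []
    for _ in range(steps):
        external = [rng.uniform(0.0, 0.3) for _ in range(n)]
        new_spikes = []
        for i, neuron in enumerate(neurons):
            synaptic = sum(w * s for w, s in zip(weights[i], spikes))
            new_spikes.append(neuron.step(external[i] + synaptic))
        spikes = new_spikes
        history.append(spikes)
    return history

history = run_network()
print(sum(map(sum, history)), "total spikes over 50 steps")
```

Adding learning would mean adjusting the weights based on spike timing, which is exactly the part that needs a theory rather than more hardware.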
Free Will (Score:2, Interesting)
This could turn out to be a way to figure out some of the great blockbuster philosophical problems that puzzle and infuriate anybody who has not read Oolon Colluphid.
If the scientists build an entire human brain, they will presumably fail to install such things as Free Will, a concept whose logical possibility philosophers still debate.
Will this prove that Free Will does not exist?
Or will it simply be impossible to detect?
For a sort of example, recall William Gibson's treatment of this in his book "Neuromancer".
In that passage, the mind of the hacker Case has been trapped inside a massive artificial intelligence, where he improbably finds his lost girlfriend and a young boy, the manifestation of the AI known as Neuromancer, on a beach.
"And here things could be counted, each one. He knew the
number of grains of sand in the construct of the beach (a number
coded in a mathematical system that existed nowhere outside
the mind that was Neuromancer). He knew the number of
yellow food packets in the canisters in the bunker (four hundred
and seven). He knew the number of brass teeth in the left half
of the open zipper of the salt-crusted leather jacket that Linda
Lee wore as she trudged along the sunset beach, swinging a
stick of driftwood in her hand (two hundred and two).
He banked Kuang above the beach and swung the program
in a wide circle, seeing the black shark thing through her eyes,
a silent ghost hungry against the banks of lowering cloud. She
cringed, dropping her stick, and ran. He knew the rate of her
pulse, the length of her stride in measurements that would have
satisfied the most exacting standards of geophysics.
"But you do not know her thoughts," the boy said, beside
him now in the shark thing's heart. "I do not know her thoughts.
You were wrong, Case. To live here is to live. There is no
difference."
Linda in her panic, plunging blind through the surf.
"Stop her," he said, "she'll hurt herself."
"I can't stop her," the boy said, his gray eyes mild and
beautiful."
Re:Blue/Gene L? (Score:2, Interesting)
Couple that cost reduction with power consumption orders of magnitude lower than other solutions, and you've got some serious potential.