DARPA's IBM-Led Neural Network Project Seeks To Imitate Brain
An anonymous reader writes "According to an article in the BBC, IBM will lead an ambitious DARPA-funded project in 'cognitive computing.' According to Dharmendra Modha, the lead scientist on the project, '[t]he key idea of cognitive computing is to engineer mind-like intelligent machines by reverse engineering the structure, dynamics, function and behaviour of the brain.' The article continues, 'IBM will join five US universities in an ambitious effort to integrate what is known from real biological systems with the results of supercomputer simulations of neurons. The team will then aim to produce for the first time an electronic system that behaves as the simulations do. The longer-term goal is to create a system with the level of complexity of a cat's brain.'"
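The "supercomputer simulations of neurons" mentioned in the article usually start from very simple spiking-neuron models. As a minimal sketch (in Python, with illustrative constants that have nothing to do with the actual IBM project), here is a leaky integrate-and-fire neuron:

```python
def simulate_lif(input_current, v_rest=-65.0, v_thresh=-50.0,
                 v_reset=-65.0, tau=10.0, dt=1.0):
    """Leaky integrate-and-fire neuron: the membrane voltage leaks
    toward v_rest, is driven up by input current, and emits a spike
    (then resets) whenever it crosses v_thresh.
    Returns the list of spike times (in ms)."""
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        # Euler step: leak toward rest plus injected current
        v += dt * ((v_rest - v) / tau + i_in)
        if v >= v_thresh:
            spikes.append(t * dt)
            v = v_reset
    return spikes

# A constant drive of 2.0 makes the neuron spike repeatedly;
# zero drive never does.
print(simulate_lif([2.0] * 100))
print(simulate_lif([0.0] * 100))  # -> []
```

Real projects simulate millions of such units with synaptic coupling between them; the point here is only that a single unit is a few lines of arithmetic, and the hard part is the wiring.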
Yes, but can it beat the turk at chess? (Score:5, Insightful)
Re:And then it becomes self-aware (Score:3, Insightful)
Now connect that same machine up to life support, missile silos, and command-and-control centers? THAT would be the SKYNET moment.
Danger! (Score:4, Insightful)
Mimic biology and you may end up with a semi-intelligent result. Mimic it well enough, and you may have a fully intelligent result. But because you don't UNDERSTAND what you built, you can't CHANGE it.
Remember the rules of AI introduced in sci-fi? How would you implement rules like that? You CAN'T implement them if you don't know HOW. If you don't UNDERSTAND the system you have built, you can't know how to tweak it!
Furthermore, how would you prevent things like boredom, impatience, selfishness, solipsism, and the many other cognitive ills that would be unsuited to a mechanical servant?
The biggest problem is if people productize the AI before it is understood and suitably 'tweaked'. Then our digital maid might subvert the family, kill the dog, and run away with the neighbor's butler robot, because in its mind, that is a perfectly reasonable thing to do!
Simulations are great. Hardware implementations of those experiments are great. Hopefully, in the process, they will come to understand how the things they built WORK. But I pray that those doing this work, or looking at it, don't start salivating over ways to make a buck off it before it is ready. The consequences could be far more dire than just a miscreant maid.
They Need a Good Intelligence Theory First (Score:1, Insightful)
This sounds really great, but unless they have a comprehensive theory of animal intelligence to work with, this is one more AI project that will likely fail. Sorry. No amount of computing power is going to help. If you had a good theory of intelligence, you would be able to prove its correctness and scalability on a regular desktop computer.
In my opinion, a truly intelligent mini-brain with no more than a few tens of thousands of neurons would surprise us with its clever abilities. Just hook it up to a small multi-legged robot with a set of sensors and let it learn through trial and error. If you could build a synthetic brain that can learn to be as versatile as a honeybee, you would have traveled close to 99% of the road toward the goal of making one with human-level intelligence.
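The "learn through trial and error" idea the parent describes is what reinforcement learning tries to capture in software. A toy sketch, assuming nothing about any real robot controller: tabular Q-learning on a one-dimensional corridor, where the learner is given no model of the world and discovers the goal purely by trying actions.

```python
import random

def q_learn_walk(n_states=6, episodes=500, alpha=0.5, gamma=0.9, eps=0.2):
    """Tabular Q-learning on a 1-D corridor: start at state 0, and the
    only reward (1.0) is for reaching the last state. Actions: 0 = left,
    1 = right. Pure trial and error -- no map is given to the learner."""
    q = [[0.0, 0.0] for _ in range(n_states)]
    rng = random.Random(42)
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy: mostly exploit, sometimes explore
            if rng.random() < eps:
                a = rng.randrange(2)
            else:
                a = max((0, 1), key=lambda x: q[s][x])
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            # standard Q-learning backup toward r + gamma * max_a' Q(s', a')
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = q_learn_walk()
# After training, "right" should dominate in every non-terminal state.
print(all(q[s][1] > q[s][0] for s in range(5)))
```

Whether anything like this scales up to honeybee-level versatility is exactly the open question the parent raises; the sketch only shows what "learning by trial and error" means mechanically.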
Re:Thought question.. (Score:3, Insightful)
You seem to be misunderstanding the halting problem. All it says is that you cannot write a program that is *guaranteed* to return a correct answer, in finite time, for every input program. It is trivial to write a program that returns a correct answer for some programs and fails to return an answer for others (either by returning "maybe" or by never halting).
It is also trivial to prove that humans can't return a correct answer for every program. We have limited space in our brains, and limited time in which to read a program (let alone think about it), so there is an upper bound on the size of program we can examine. In practice, it's even worse than that -- there are Turing machines with only 2 symbols and 5 states, started on an empty tape, for which we don't know the answer (see here [wikipedia.org] for refs).
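The parent's point -- a checker that is always correct when it answers, but sometimes only says "maybe" -- is easy to sketch. Here programs are modelled as Python generators so their steps can be counted; this is an illustration of the trade-off, not a real analysis tool.

```python
def bounded_halt_check(program, max_steps=1000):
    """Partial halting checker: returns 'halts' if the program (a
    generator function) finishes within max_steps steps, otherwise
    'maybe'. It is never wrong, but it is incomplete -- exactly the
    trade-off the halting theorem forces on us."""
    it = program()
    for _ in range(max_steps):
        try:
            next(it)
        except StopIteration:
            return "halts"
    return "maybe"

def finite():
    # terminates after 10 steps
    for _ in range(10):
        yield

def forever():
    # never terminates
    while True:
        yield

print(bounded_halt_check(finite))   # -> halts
print(bounded_halt_check(forever))  # -> maybe
```

Raising `max_steps` lets the checker decide more programs, but no finite bound decides them all.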
Re:And then it becomes self-aware (Score:3, Insightful)
True, but it may be that a "cat brain" computer, if it's running at 4-5x a normal human's efficiency (since we only use a few percent of our brain at any one time), might actually be as smart as a typical human.
I guess the real issue here is whether it's as capable as a cat's brain after using 100% of its capabilities, or if they are going to model a cat's brain in scale and then run that at full throttle.
We do not use a small percentage of our brains. I don't have the foggiest idea why this stupid myth persists at all.
Re:Does this program terminate, smartypants? (Score:1, Insightful)
That program overflows int long before a proof of the Riemann hypothesis would have a chance to get off the ground.
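Since the program in question isn't quoted here, a generic illustration of the failure mode: a counter in C-style 32-bit signed arithmetic wraps around long before any mathematically interesting bound is reached. (Simulated in Python, whose native ints don't overflow, so the wraparound is done by hand.)

```python
def add_i32(a, b):
    """Add two integers with C-style 32-bit signed wraparound."""
    s = (a + b) & 0xFFFFFFFF
    return s - 0x100000000 if s >= 0x80000000 else s

# Incrementing INT_MAX wraps to INT_MIN -- a loop counter doing this
# silently goes negative instead of continuing the search.
print(add_i32(2**31 - 1, 1))  # -> -2147483648
```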
Too high level... (Score:2, Insightful)
That's one of the theories about how a brain figures out the senses. Or, to describe it differently, a super-sense: a huge jumble of 'static' that the brain parses and learns at the very beginning of its existence. Are we anywhere near that on the pattern-recognition front? This is also dynamic: a brain can pick up a new 'sense' well after birth and incorporate it very well.
What about a network that can take a billion parallel processes and tie that into a network of a billion serial processes in an organized and dynamic way? True self-reflection on each system, oh that's dynamic too.
If the above is done, we have a few small parts of one of the simplest brains emulated.
One of the problems with trying to simulate a brain is that we're using a 'high-level' process to try and simulate something that is much lower level.
A brain's parts are not inherently logical, just as an electron is not. They build up to make logical components that work in a particular situation. We're starting with pure logic to try to simulate natural-world phenomena which, if you've studied evolution, are not based on logic but on 'this worked'; the rule set brains formed under is defined by the laws of matter and time, which we still don't know a lot about. Scientists are constantly surprised at how biology uses pathways that make no sense if you apply our idea of logic. It can, however, make sense after the fact, because we're able to believe things that we don't think should be (fairly illogical). How can we program, using logic, illogical and poorly understood systems?
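The "builds up to make logical components" point can be made concrete with a textbook perceptron sketch (nothing specific to this project): a single threshold unit is just weighted analog summation, nothing inherently logical, yet it implements AND or OR depending only on its bias.

```python
def threshold_unit(inputs, weights, bias):
    """Fires (returns 1) iff the weighted sum of analog inputs crosses
    a threshold -- analog parts yielding digital behavior."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

def and_gate(a, b):
    # same unit, bias set so BOTH inputs are needed to fire
    return threshold_unit([a, b], [1.0, 1.0], -1.5)

def or_gate(a, b):
    # same unit, bias set so EITHER input suffices
    return threshold_unit([a, b], [1.0, 1.0], -0.5)

print([and_gate(a, b) for a in (0, 1) for b in (0, 1)])  # -> [0, 0, 0, 1]
print([or_gate(a, b) for a in (0, 1) for b in (0, 1)])   # -> [0, 1, 1, 1]
```

Of course, real neurons are messier than a clean weighted sum, which is exactly the parent's complaint; the sketch only shows the direction of the build-up, from non-logical parts to logical behavior.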
You can go as low-level as possible and painfully simulate the particles that make up matter and energy, and get accurate enough results. You can also go very high level and painfully simulate the cosmos, and get accurate enough results. Yet we are woefully behind on simulating a brain many orders of magnitude less complex than a cat's and getting brain-like behavior. An exact programmatic replica of how brain cells work will help us learn a lot about how cells work, but I think the big picture will be lost in there. We will get the cascades and responses, but I don't think we'll get something that resembles behavior, not for a long time, mainly because we have no big picture: we have no idea how all this works on a large scale. Someone else mentioned the lack of an overall theory; we need that well before we start trying to make a software brain.
Sorry, I ramble. I'm very passionate about this stuff, so much so that I dumped a 10-year career and started studying it. ...when I mention logic, I'm mainly talking about computer logic (AND, OR, etc.)