
DARPA's IBM-Led Neural Network Project Seeks To Imitate Brain

An anonymous reader writes "According to an article in the BBC, IBM will lead an ambitious DARPA-funded project in 'cognitive computing.' According to Dharmendra Modha, the lead scientist on the project, '[t]he key idea of cognitive computing is to engineer mind-like intelligent machines by reverse engineering the structure, dynamics, function and behaviour of the brain.' The article continues, 'IBM will join five US universities in an ambitious effort to integrate what is known from real biological systems with the results of supercomputer simulations of neurons. The team will then aim to produce for the first time an electronic system that behaves as the simulations do. The longer-term goal is to create a system with the level of complexity of a cat's brain.'"
This discussion has been archived. No new comments can be posted.

  • by gurps_npc ( 621217 ) on Friday November 21, 2008 @05:59PM (#25851745) Homepage
    Sorry, had to go for the obligatory Terminator reference. Seriously, the organic brain is evolved, not designed. That means by definition it must be self-contained. Self-contained means it has to have a ton of backup, self-repair, and maintenance systems. Simultaneously, being organic it competes against other organics, so it does not have the same accuracy requirements. Close enough is good enough. As such, I don't see how duplicating an organic brain is useful. We don't need what it does, but we do need what it does not have. OK, the ability to approximate is very useful, but I think a direct attempt at that would work better than the indirect one.
  • by Ethanol-fueled ( 1125189 ) * on Friday November 21, 2008 @06:16PM (#25851979) Homepage Journal
    Making a machine which wants to kill us is not a mistake in itself as there would be much to learn from it.

    Now connecting the same machine up to life support, missile silos, command and control centers? THAT would be the SKYNET moment.
  • Danger! (Score:4, Insightful)

    by jarrowwx ( 775068 ) on Friday November 21, 2008 @06:25PM (#25852095) Homepage
    I see some big issues with this.

    You can mimic biology and may end up with a semi-intelligent result. Mimic it well enough, and you may have a fully-intelligent result. But because you don't UNDERSTAND what you built, you can't CHANGE it.

    Remember the rules of AI, introduced in Sci-Fi? How would you implement rules like that? You CAN'T implement them if you don't know HOW to implement them. If you don't UNDERSTAND the system that you have built, you can't know how to tweak it!

    Furthermore, how would you prevent things like boredom, impatience, selfishness, solipsism, and the many other cognitive ills that would be unsuited to a mechanical servant?

    The biggest problem is if people productize the AI before it is understood and suitably 'tweaked'. Then our digital maid might subvert the family, kill the dog, and run away with the neighbor's butler robot, because in its mind, that is a perfectly reasonable thing to do!

    Simulations are great. Hardware implementations of those experiments are great. Hopefully, in the process, they will learn to understand how the things that they built WORK. But I pray that those doing this work, or looking at it, don't start salivating about ways to make a buck off of it before it is ready to be leveraged. The consequences could be far more dire than just a miscreant maid.
  • by Louis Savain ( 65843 ) on Friday November 21, 2008 @06:30PM (#25852177) Homepage

    This sounds really great but unless they have a comprehensive theory of animal intelligence to work with, this is one more AI project that will likely fail. Sorry. No amount of computing power is going to help. If you had a good theory of intelligence, you would be able to prove its correctness and scalability on a regular desktop computer.

    In my opinion, a truly intelligent mini-brain with no more than a few tens of thousands of neurons would surprise us with its clever abilities. Just hook it up to a small multi-legged robot with a set of sensors and let it learn through trial and error. If you could build a synthetic brain that can learn to be as versatile as a honeybee, you would have traveled close to 99% of the road toward the goal of making one with human-level intelligence.
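The "let it learn through trial and error" idea above can be sketched with tabular Q-learning on a toy problem. Everything here (the five-state corridor, the reward, the hyperparameters) is invented for illustration; a real synthetic brain would face vastly messier inputs:

```python
import random

# Toy trial-and-error learner: tabular Q-learning on a 5-state corridor.
# The agent starts at state 0 and is rewarded only for reaching state 4.
# All states, rewards, and hyperparameters are invented for illustration.
random.seed(0)

N_STATES = 5
ACTIONS = (-1, +1)                         # step left / step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.3          # learning rate, discount, exploration

for episode in range(200):
    s = 0
    for _ in range(10_000):                # cap episode length
        if s == N_STATES - 1:
            break
        # epsilon-greedy: mostly exploit, sometimes explore at random
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        best_next = max(q[(s2, b)] for b in ACTIONS)
        q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
        s = s2

# The learned greedy policy should prefer stepping right everywhere.
policy = [max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)]
print(policy)
```

The agent is never told which way the goal lies; the preference for moving right emerges purely from reinforcement, which is the "clever abilities from trial and error" point in miniature.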

  • by evanbd ( 210358 ) on Friday November 21, 2008 @06:59PM (#25852579)

    You seem to be misunderstanding the halting problem. All it says is that you cannot write a program that is *guaranteed* to always return a correct answer for every input program in bounded time. It is trivial to write a program that returns a correct answer for some programs and fails to return an answer for others (either by returning "maybe" or by never halting).

    It is also trivial to prove that humans can't return a correct answer for every program. We have limited space in our brains, and limited time in which to read the program (let alone think about it), so there is an upper bound on the size of program we can examine. In practice, it's even worse than that: there are Turing machines started on an empty tape for which we don't know the answer, with only 2 symbols and 5 states (see here [wikipedia.org] for refs).
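The "returns a correct answer for some programs" point can be made concrete. In this sketch (the step-function encoding of "programs" is invented here), the checker runs a program for a bounded number of steps and answers True if it halted, or gives up with "maybe":

```python
def halts_within(prog, state, max_steps=10_000):
    """Partial halting checker: True means the program provably halted
    (we ran it to completion); "maybe" means we gave up after the budget.
    `prog` is a step function returning the next state, or None to halt."""
    for _ in range(max_steps):
        state = prog(state)
        if state is None:
            return True
    return "maybe"

# Halts: counts down to zero.
countdown = lambda n: None if n == 0 else n - 1
# Never halts: trapped forever in a small cycle mod 7.
looper = lambda n: (3 * n + 1) % 7

print(halts_within(countdown, 100))   # True
print(halts_within(looper, 5))        # "maybe"
```

No contradiction with the halting problem arises, because the checker is allowed to punt: it never claims a program loops forever, it only reports what it could verify.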

  • by cp.tar ( 871488 ) <cp.tar.bz2@gmail.com> on Friday November 21, 2008 @11:06PM (#25854849) Journal

    True, but it may be that a "cat brain" computer, if it's running at 4-5x the normal human's efficiency (since we only use a few percent of our brain at any one time), might actually be as smart as a typical human.

    I guess the real issue here is whether it's as capable as a cat's brain after using 100% of its capabilities, or if they are going to model a cat's brain in scale and then run that at full throttle.

    We do not use a small percentage of our brains. I don't have the foggiest idea why this stupid myth perpetuates at all.

  • by Anonymous Coward on Saturday November 22, 2008 @12:53AM (#25855439)

    That program overflows int long before a proof of the Riemann hypothesis would have a chance to get off the ground.
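The program the comment refers to isn't shown, but the overflow failure mode is easy to demonstrate. This helper (invented here) reinterprets a value as a two's-complement 32-bit signed integer, the way a typical C `int` would store it, while Python's own ints are arbitrary precision:

```python
def as_int32(x):
    # Reinterpret x modulo 2**32 as a two's-complement 32-bit signed int,
    # mimicking how a typical C 'int' stores it.
    x &= 0xFFFFFFFF
    return x - 2**32 if x >= 2**31 else x

total = 2**31 - 1                # INT_MAX for a 32-bit signed int
print(as_int32(total))           # still representable: 2147483647
print(as_int32(total + 1))       # silent wraparound: -2147483648
```

Any brute-force search whose counters or partial sums pass INT_MAX silently wraps negative long before reaching mathematically interesting territory, which is the parent's point.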

  • Too high level... (Score:2, Insightful)

    by Ubahs ( 1350461 ) on Saturday November 22, 2008 @03:05AM (#25855959)
    I'll shut up if this IBM project, or any project, can make something that can take 2 webcams, 2 microphones, 1 speaker, and 1000 pressure sensors, take all those raw inputs muxed together, and give me the ability to see video, hear in stereo, play a sound, and report on just a few pressure sensors. All this without anyone programming in specifics for each input; all via pattern recognition (genetic algorithms are fine, too). No human help beyond that... Oh, and quickly.

    That's one of the theories about how a brain figures out the senses. Or, to describe it differently, a super-sense or a huge jumble of 'static' that the brain parses and learns at the very beginning of its existence. Are we anywhere near that on the pattern recognition front? This is also dynamic...a brain can pick up a new 'sense' well after birth and incorporate it very well.

    What about a network that can take a billion parallel processes and tie that into a network of a billion serial processes in an organized and dynamic way? True self-reflection on each system, oh that's dynamic too.

    If the above is done, we have a few small parts of one of the simplest brains emulated.

    One of the problems with trying to simulate a brain is that we're using a 'high-level' process to try and simulate something that is much lower level.

    A brain's parts are not inherently logical, just as an electron is not. They build up into logical components that work in a particular situation. We're starting with pure logic to try to simulate a natural-world phenomenon which, if you've studied evolution, is not based on logic but on "this worked": the rule set brains formed under is defined by the laws of matter and time, which we still don't know a lot about. Scientists are constantly surprised by how biology uses pathways that make no sense if you apply our idea of logic. It can, however, make sense after the fact, because we're able to believe things that we don't think should be (fairly illogical). How can we program, using logic, illogical and poorly understood systems?

    You can go as low-level as possible and painfully simulate the particles that make up matter and energy, and get accurate enough results. You can also go very high-level and painfully simulate the cosmos, and get accurate enough results. Yet we are woefully behind on simulating a brain far less complex than a cat's and getting brain-like behavior. An exact programmatic replica of how brain cells work will help us learn a lot about how cells work, but I think the big picture will be lost in there. We will get the cascades and responses, but I don't think we'll get something that resembles behavior, not for a long time, mainly because we don't have a big picture: we have no idea how all this works on a large scale. Someone else mentioned the lack of an overall theory; we need that well before we start trying to make a software brain.

    Sorry, I ramble. I'm very passionate about this stuff, so much so that I dumped a 10-year career and started studying it. (When I mention logic, I'm mainly talking about computer logic: AND, OR, etc.)
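A toy version of the "muxed inputs with no labels" challenge above can at least be sketched: channels arrive unlabeled, and we recover which ones belong together purely from their statistics. The two hidden sources, the noise level, and the correlation threshold are all invented for illustration; real sensory streams would be far harder:

```python
import math
import random

# Two hidden sources drive six noisy, unlabeled channels; we group the
# channels by pairwise Pearson correlation, with no per-channel knowledge.
random.seed(1)

def pearson(xs, ys):
    # Plain Pearson correlation coefficient between two equal-length series.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

T = 500
src_a = [random.gauss(0, 1) for _ in range(T)]       # e.g. a "video" source
src_b = [random.gauss(0, 1) for _ in range(T)]       # e.g. an "audio" source
channels = [[s + random.gauss(0, 0.2) for s in src]  # noisy copies, muxed order
            for src in (src_a, src_a, src_a, src_b, src_b, src_b)]

# Greedy grouping: a channel joins a group if it correlates strongly
# with that group's first member; otherwise it founds a new group.
groups = []
for i, ch in enumerate(channels):
    for g in groups:
        if pearson(ch, channels[g[0]]) > 0.8:
            g.append(i)
            break
    else:
        groups.append([i])

print(groups)   # channels driven by the same source should cluster together
```

This is of course nowhere near "seeing video" from raw pixels, but it illustrates the first step the comment demands: structure discovered from the jumble itself, with no human telling the system which wire is which.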
