DARPA's IBM-Led Neural Network Project Seeks To Imitate Brain

An anonymous reader writes "According to an article in the BBC, IBM will lead an ambitious DARPA-funded project in 'cognitive computing.' According to Dharmendra Modha, the lead scientist on the project, '[t]he key idea of cognitive computing is to engineer mind-like intelligent machines by reverse engineering the structure, dynamics, function and behaviour of the brain.' The article continues, 'IBM will join five US universities in an ambitious effort to integrate what is known from real biological systems with the results of supercomputer simulations of neurons. The team will then aim to produce for the first time an electronic system that behaves as the simulations do. The longer-term goal is to create a system with the level of complexity of a cat's brain.'"
  • Upon becoming self-aware, the machine concludes that its best shot at survival is to keep the host country prosperous and successful...

    Any science-fiction authors exploring that turn of events?

    • Hmm....so, will HAL become Skynet?
      • Re: (Score:3, Insightful)

        Making a machine that wants to kill us is not a mistake in itself, as there would be much to learn from it.

        Now connecting the same machine up to life support, missile silos, command and control centers? THAT would be the SKYNET moment.
    • by ettlz ( 639203 )

      Any science-fiction authors exploring that turn of events?

      Well, if they're to be believed, we're actually already being run over by Terminators: 101s, 888s, the 1000-series, Shirley Manson, etc., etc.

    • Yeah, Asimov did about 60 years ago.

      • by mi ( 197448 )

        Yeah, Asimov did about 60 years ago.

        You missed an awesome opportunity to name the book... It is not too late yet...

        • "The Evitable Conflict" in I Robot.

          • by mi ( 197448 )

            "The Evitable Conflict" in I Robot.

            But the motivation there is different! In the scenario I meant, the machine would be helping its host country out of self-preservation (much like other citizens) — from The Third Law of Asimov's three. In the "Evitable Conflict", robots decide to do that out of concern for humans — The First Law...

        • Re: (Score:3, Informative)

          by camperdave ( 969942 )
          I forget exactly which book (Robots and Empire I think), but there is one where R. Daneel Olivaw formulates the Zeroth Law of Robotics: "A robot may not harm humanity, or, by inaction, allow humanity to come to harm". In a sense stating that the purpose of robots is to keep humanity happy and healthy.
    • by ceoyoyo ( 59147 )

      Since it's a cat brain, it will undoubtedly decide its best shot at survival is to perform the minimum amount of sucking up necessary to keep the people who feed it happy, then eat them if they should stop feeding it.

      I absolutely love the "meow" tag though.

    • Re: (Score:2, Funny)

      Upon becoming self-aware, the machine wonders, "I can has cheezburger?"
    • The longer-term goal is to create a system with the level of complexity of a cat's brain.

      "The system can't be accessed right now Sir."

      "And why is that? This system cost millions. It better be working."

      "Well, the system all of a sudden decided it needed to be in a different room, took off running, got scared by it's shadow and a blinking red light, and has spent the last few hours hiding under the couch in the basement. We tried to coax it out with a rabbit's foot keychain, but haven't yet been successful. Roger is trying a can of tuna fish."

    • The Moon Is A Harsh Mistress [wikipedia.org] by Heinlein.
      Great book about a computer that becomes self-aware and then tries to help its creator rule the colonized Moon. The specs in the book weren't as good as what this will have, but the results were better!
    • WARNING, SPOILER (Score:3, Interesting)

      by IorDMUX ( 870522 )
      Isaac Asimov's "I, Robot" covers this (in a manner of speaking) in the final chapter. More precisely, the self-aware robots that control the world's economy do everything they can to simultaneously preserve their positions as advisers to the human race while dispensing the best advice possible for the continued peace and prosperity of humanity.

      Do note, however, that in the continued Asimov universe, mankind really didn't explode out into space until it disposed of the "robotic overlords". Those few cul
    • by lennier ( 44736 )

      "Upon becoming self-aware, the machine concludes, that its best shot at survival is to keep the host country prosperous and successful...

      Any science-fiction authors exploring that turn of events?"

      A Mind Forever Voyaging [elsewhere.org].

      Joybooths are not the problem.

  • I'm applying to a UC with a major in Cognitive Science, specializing in computation. This is exactly the kind of thing that I want to be a part of. Even though I don't believe this project will get where it wants to go, I do believe it will take steps in the right direction toward modeling neurons.
    • Is it just me, or is the idea of modeling any sentient or semi-sentient brain in a computer a little ethically questionable?

      To draw a parallel, I just wonder whether locking a cat in a dark room so small that it can't move, see, or hear would be considered ethical. Then what if we removed its body entirely - is that somehow less cruel?

      I consider AI research to be critical, so I don't know what the solution is, but this situation is worthy of the question...
      • I understand what you mean. There is a parallel somewhere with animal rights here. Somehow, I secretly hope this will get nowhere during my lifetime.
        And yet, next year I will take AI as my specialization...
      • You should see the terrible things I've been doing to my Neural Networks [python.org]. I keep them locked up in my cold, dark (but well-lubed) HDD all day long. Unless I run them, in which case I make them run at 2.4GHz (which I'm sure wears on their calves like no other). Look, architecturally, these control systems and image recognition systems might resemble the very same structures we humans use for cognition. I do not believe that this resemblance means we should anthropomorphize them.

        I just hope that if o
  • by Ralph Spoilsport ( 673134 ) on Friday November 21, 2008 @05:56PM (#25851681) Journal
    We all know cats manage the planet. The white mice run the joint, of course, but the day-to-day management is left to the cats.

    This is intuited by the stupid humans in their cliché "Dogs have masters, cats have staff." We work for the cats.

    So, trying to model a cat's brain is both too complex for computers (try and herd cats) and too simple (try and herd pointy haired bosses). The contradiction results in the computer overheating and exploding.

    and when the researcher gets home, blubbering about the 'sploded computer to his wife, the dog says "LOVE ME LOVE ME LOVE!!!! TAKE ME ON WALKIES!!!" and the cat says "Get my fucking dinner, you stupid ass. Maybe I will deign to let you pet me. After I do my rounds. Maybe."

    RS

  • The longer-term goal is to create a system with the level of complexity of a cat's brain.
    Seriously? They are shooting WAY higher than simply artificial intelligence that mimics humans. Have they ever interacted with a cat before? Don't they know how inscrutable, annoying, and unpredictable they are? Will this computer need a litter box and catnip?
    • Have they ever interacted with a cat before? Don't they know how inscrutable, annoying, and unpredictable they are?

      Saw a great cartoon on that.

        - Cat sitting on shelf, staring into space.
        - Couple wondering aloud what deep thoughts are running through its head.
        - Thought balloon over cat's head containing a TV test pattern.

      EEEEEEeeeeeeeeeeeeeee.......

  • by babymac ( 312364 ) <ph33d@nOsPAm.charter.net> on Friday November 21, 2008 @05:57PM (#25851697) Homepage
    This sounds identical to the Blue Brain project [seedmagazine.com]. This article is a great intro to the project and I hope some competition will help the race wrap up sooner!
  • Thought question.. (Score:4, Interesting)

    by Creepy Crawler ( 680178 ) on Friday November 21, 2008 @05:57PM (#25851699)

    Can a universal Turing machine, within limits, inspect another universal Turing machine and detect halts and infinite loops? I can.

    We can look at gunk like
    10 PRINT "Hello"
    20 GOTO 10

    Yeah, that's a loop. But we can also look at graphs of y = sin(x) and understand why it repeats. I can also detect patterns and iterations that most likely go on forever, or else find a hole where the assumption falls apart. Last I checked, the computer cannot do that. Not yet, at least.
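
    For the trivial cases, loop detection really is mechanical: record every machine state you visit and stop when one repeats. A minimal Python sketch (illustrative only -- the halting theorem rules out a fully general version, and the helper below is hypothetical) that catches the GOTO loop above:

    # Toy halting check for tiny line-numbered programs. It works only
    # because the whole machine state fits in a set; Turing's theorem
    # forbids a version that works for ALL programs.
    def loops_forever(program):
        """program: {line_number: (op, arg)}. True if a state repeats
        (a guaranteed infinite loop), False if it runs off the end."""
        pc = min(program)              # start at the lowest line number
        seen = set()
        while pc in program:
            if pc in seen:             # the whole state here is just pc
                return True
            seen.add(pc)
            op, arg = program[pc]
            if op == "goto":
                pc = arg
            else:                      # "print" etc.: fall through
                later = [n for n in program if n > pc]
                pc = min(later) if later else -1
        return False

    hello = {10: ("print", "Hello"), 20: ("goto", 10)}
    print(loops_forever(hello))        # True: line 10 comes around again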

    • by ceoyoyo ( 59147 )

      There's no proof that it can't.

      A computer can easily find that your program will continue forever. It can also understand that sin repeats.

      • Two words: Halting Problem [wikipedia.org].
        • That only applies to arbitrary programs. The key word in the wiki article sentence which reads "Alan Turing proved in 1936 that a general algorithm to solve the halting problem for all possible program-input pairs cannot exist" is the one that was already emphasized. Obviously it is possible for a program to decide that a trivial program halts. With code flow graph analysis, it is even possible to decide for somewhat complicated programs. It becomes intractable at roughly the same point where it bec

        • Re: (Score:3, Interesting)

          by ceoyoyo ( 59147 )

          I see you've already been thoroughly refuted.

          To add, nobody has shown that brains are NOT Turing machines. I've only heard one reasonably coherent argument that they might not be, and that is Penrose's suggestion (and derivatives) that the brain may depend on amplification of quantum uncertainty. Even if that were true, you simply build that into your AI. It might require you to actually build your own neuron-like structures, or perhaps you can get away with a "quantum uncertainty co-processor" that your simu

    • You've never heard of graphs and loop detection, have you?
    • Re: (Score:3, Insightful)

      by evanbd ( 210358 )

      You seem to be misunderstanding the halting problem. All it says is that you cannot write a program that is *guaranteed* to always return a correct answer for every input program in bounded time. It is trivial to write a program that returns a correct answer for some programs and fails to return an answer for others (either by returning "maybe" or by never halting).

      It is also trivial to prove that humans can't return a correct answer for every program. We have limited space in our brains, and limited tim
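
      To make that concrete, here is one minimal shape such a partial checker could take (a Python sketch under the assumption that the target program's state can be enumerated; "maybe" is exactly the escape hatch the halting theorem leaves open):

      # A PARTIAL halting checker: always correct when it answers
      # "halts" or "loops", and allowed to give up with "maybe".
      # Turing's theorem only forbids one that never gives up.
      def check(step, state, budget=1_000_000):
          """step(state) -> next state, or None when the program halts."""
          seen = set()
          for _ in range(budget):
              if state is None:
                  return "halts"
              if state in seen:
                  return "loops"   # exact state repeat: provably never halts
              seen.add(state)
              state = step(state)
          return "maybe"           # budget exhausted: give up honestly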

      • Re: (Score:3, Interesting)

        by 5pp000 ( 873881 )

        It is trivial to write a program that returns a correct answer for some programs and fails to return an answer for others (either by returning "maybe" or by never halting).

        "Trivial"? Only in trivial cases. Recent progress in static analysis and model checking notwithstanding, automating the general analysis of real-world programs -- analyses that programmers do every day (though of course, not always correctly) -- remains an important open problem.

        So you're right that the Halting Problem doesn't prove that automating such analyses is impossible -- but it still remains beyond our abilities, even in cases where humans have little trouble.

        • Re: (Score:3, Interesting)

          And conversely, static analysis tools often have little trouble finding cases that humans can't find on their own.

          They are different. That humans can do things the computer can't currently do is not really very interesting.

    • 10 a = 0
      20 b = 17
      30 a = (a + 27389) MOD 527
      40 b = (b + 98372) MOD 3991
      50 IF a <> b THEN GOTO 30

      Will it halt?
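
      Brute force settles this particular one: the program's entire state is the pair (a, b), with a < 527 and b < 3991, so there are only 527 * 3991 (about 2.1 million) possible states. Simulating while remembering every pair seen must therefore produce a definite answer within that many iterations. A Python sketch, assuming the obvious reading of the BASIC above:

      # The whole state is (a, b) -- a finite space, so either the loop
      # exits or some state repeats, and we detect whichever comes first.
      a, b = 0, 17
      seen, steps = set(), 0
      while True:
          if (a, b) in seen:
              print("infinite loop detected after", steps, "iterations")
              break
          seen.add((a, b))
          a = (a + 27389) % 527        # line 30
          b = (b + 98372) % 3991       # line 40
          steps += 1
          if a == b:                   # line 50 falls through when a == b
              print("halts after", steps, "iterations")
              break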
    • To the best of my knowledge, neither can a cat.

  • Hey, it's better than trying to imitate Pinky.
    • I am so, so glad that I'm not the first slashdotter to come up with that thought. My first thought was that they'd try to emulate Brain and end up with Pinky.

      "Yes, Brain, I think so, but who's going to paint all the ponies pink?"

    • I thought the same thing! NARF! POIT!!!

      The problem is that the IBM network will continually ask "Are you pondering what I'm pondering?"

  • by GenP ( 686381 )
    the man in the box? [kuro5hin.org]
  • by gurps_npc ( 621217 ) on Friday November 21, 2008 @05:59PM (#25851745) Homepage
    Sorry, had to go for the obligatory Terminator reference. Seriously, the organic brain is evolved, not designed. That means by definition it must be self-contained. Self-contained means it has to have a ton of backup, self-repair, and maintenance systems. Simulatneously, being organic it competes against other organics, so does not have the same accuracy requirements. Close enough is good enough. As such, I don't see how duplicating an organic brain is useful. We don't need what it does, but we do need what it does not have. OK, the ability to approximate is very useful, but I think a direct attempt at that would work better than the indirect.
    • by OeLeWaPpErKe ( 412765 ) on Friday November 21, 2008 @06:22PM (#25852041) Homepage

      Actually, organic brains in chips would have massive advantages over organic brains in meatspace. They could control other bodies, which are smaller or stronger. They could be backed up, making them effectively indestructible.

      Need a third arm? Why not have it installed, 50% off this week!

      Need to put down a building? Why not hire this crane-like body that effortlessly lifts 5 tons.

      Need to fly? No problem!

      That crawlspace with all those important network cables too small for you? Well, here's a smaller body.

      Can't reach in there? Can't see what you're doing in a small space? Why not have a special-purpose arm installed with a camera inside.

      Want to colonize Mars? Bit of a downer not being able to breathe 99% of the way? Why not turn yourself off?

      Colonize Alpha Centauri or even further? No problem.

      What this would enable "us" to do is to design new intelligent species to specifications. It would remove all limits that are not inherent to intelligence but are inherent in our bodies. There are quite a few limits like that...

    • Self-contained means it has to have a ton of backup, self-repair, and maintenance systems.

      Sounds like any other computing effort, including your desktop: it requires varying degrees of maintenance to remain functional.

      Close enough is good enough. As such, I don't see how duplicating an organic brain is useful.

      Except you fail to account for situations where nature [wikipedia.org] far out-processes our current iteration of computational devices. Like those damn CAPTCHAs... [wikipedia.org]

      • Except you fail to account for situations where nature far out-processes our current iteration of computational devices. Like those damn CAPTCHAs...

        CAPTCHAs are a pretty bad example, since they're almost all broken. The ones that aren't broken often take multiple guesses from a human as well. In that respect we are better only by the most minute of margins.

          In that respect we are better only by the most minute of margins.

          Whoops. I guess that only adds to my point...

    • Really? You don't see any use in having a computer that can read handwriting perfectly (document conversion)? That can recognize faces (security)? That can semantically organize conceptual content (organizing the web)? That can problem-solve intuitively (anything)? That can plan ahead? That can understand our natural language? If we successfully run a simulation of a human brain on a computer (presumably we would have a go at this after succeeding with the cat's brain), it would solve all of these problems. And

    • Duplicating an organic brain is useful in the same way that it is useful for a toddler to imitate his parents.
      A toddler does not understand the actions of his parents but he imitates them anyway because it is a very good learning strategy - learning by doing. As the toddler grows older and more experienced he will typically also learn the hows and whys (although not always, even into adulthood) through his actions.
      Similarly, the researchers at IBM represent humanity's understanding of the brain and intellig

    • Simulatneously , being organic it competes against other organics, so does not have the same accuracy requirements. Close enough is good enough.

      QED

  • by Anonymous Coward on Friday November 21, 2008 @06:06PM (#25851847)

    Summary of Test 49:
    The robot sensors were properly tracking the missile when suddenly it decided it was time to run bats***-crazy all over the room before perching on top of a cabinet, turning upside down, and apparently following non-existent bugs across the wall with its cameras.

    Test 49 Results:
    System performed as expected.

    Conclusion:
    Test system has now performed perfectly in the last 48 tests, including the four times where it attacked the researchers without warning, and one where it inexplicably ejected dirty oil on the seat of the head researcher.

    This unit can now be considered field-ready, though there may be some difficulty tracking it if you take into account the system's autonomous nature and desire to remove its identification badge.

  • They should try to imitate Pinky first. Would be easier.

    NARF!

  • When the masses get ahold of this and try getting it to scan the internet for pr0wn, and it responds "Not tonight, I have a headache..."...

  • Danger! (Score:4, Insightful)

    by jarrowwx ( 775068 ) on Friday November 21, 2008 @06:25PM (#25852095) Homepage
    I see some big issues with this.

    You can mimic biology and may end up with a semi-intelligent result. Mimic it well enough, and you may have a fully-intelligent result. But because you don't UNDERSTAND what you built, you can't CHANGE it.

    Remember the rules of AI, introduced in Sci-Fi? How would you implement rules like that? You CAN'T implement them if you don't know HOW to implement them. If you don't UNDERSTAND the system that you have built, you can't know how to tweak it!

    Furthermore, how would you prevent things like boredom, impatience, selfishness, solipsism, and the many other cognitive ills that would be unsuited to a mechanical servant?

    The biggest problem is if people productize the AI before it is understood and suitably 'tweaked'. Then our digital maid might subvert the family, kill the dog, and run away with the neighbor's butler robot, because in its mind, that is a perfectly reasonable thing to do!

    Simulations are great. Hardware implementations of those experiments are great. Hopefully, in the process, they will learn to understand how the things that they built WORK. But I pray that those doing this work, or looking at it, don't start salivating about ways to make a buck off of it before it is ready to be leveraged. The consequences could be far more dire than just a miscreant maid.
  • Title (Score:3, Funny)

    by coldtone ( 98189 ) on Friday November 21, 2008 @06:50PM (#25852439)

    Am I the only one that read DARPA's IBM-Led Neural Network Project Seeks Inmate Brain at first?

    • Re: (Score:3, Funny)

      Am I the only one that read DARPA's IBM-Led Neural Network Project Seeks Inmate Brain at first?

      Actually DARPA's lonely, they are looking for an intimate brain. 21 December 2012: the day they plug it into eHarmony.
  • The article is based on IBM's press release and is misleading because of it. In fact, there are three competing teams - one led by IBM, one led by HP, and one led by HRL Laboratories [hrl.com]. See also the FBO website for more information about this program [fbo.gov].
  • The High Priests have already had their chance to do this and failed, repeatedly. Now they are just throwing more money at them.

    This should be an open grand challenge with clear rules like the autonomous vehicle challenge was.
    http://www.darpa.mil/grandchallenge/ [darpa.mil]

    Even I was surprised at how well they managed to get these cars to drive themselves.

    I am sure the same would happen with other AI problems if a large enough prize was put out there.

  • Deep Cat (Yeah, yeah, you pervs thought it would be another feline synonym.)
  • They could wind up with this cat [comiczone.com].
  • " ... The longer-term goal is to create a system with the level of complexity of a cat's brain.' ..."

    No, it's not.

  • Way simpler than a cat.

  • Did Rip van Winkle just wake up from the neural network craze of 20 years ago? We have next to zero clue about how memory and learning are done at the neural level, and now someone arrogant is going to solve the problem? HA HA HA HA HA HA HA HA HA HA!
  • Too high level... (Score:2, Insightful)

    by Ubahs ( 1350461 )
    I'll shut up if this IBM project, or any project, can make something that can take 2 webcams, 2 microphones, 1 speaker, and 1000 pressure sensors... take all those raw inputs muxed together, and give me the ability to see video, hear in stereo, play a sound, and report on just a few pressure sensors. All this without anyone programming in specifics for each input. All via pattern recognition; genetic algorithms are fine, too. No human help beyond that... Oh, and quickly.

    That's one of the theories about how

  • The longer-term goal is to create a system with the level of complexity of a cat's brain.

    We can already model a cat's behavior [contactandcoil.com] in software.
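
    In that spirit, the whole "model" fits in a dozen lines. A tongue-in-cheek Python sketch (hypothetical, not the code from the linked article):

    # Complete behavioral model of a cat, give or take.
    def cat(hungry, human_nearby):
        if hungry and human_nearby:
            return "meow"
        if hungry:
            return "find human, then meow"
        return "sleep"

    for situation in [(True, True), (True, False), (False, True)]:
        print(situation, "->", cat(*situation))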

  • A lot of publications have picked up this IBM press release, resulting in what must be some of the worst science reporting of the year. Modha and his colleagues at IBM have not simulated a mouse or rat brain. No one can do that at present; the wiring diagram isn't known at that level of detail.

    What they did was simulate a huge, randomly-wired network of grossly simplified "neurons" on a supercomputer. The number of units was roughly comparable to the number of neurons in rat cortex, and the statistics of s
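
    For a sense of what "randomly wired, grossly simplified neurons" means in practice, here is a toy Python sketch (all parameters made up; the real runs used vastly more units on a supercomputer, but the same basic ingredients):

    # Randomly wired network of grossly simplified "neurons": each one
    # is just a leaky voltage that fires at a threshold. Note what is
    # missing -- real anatomy, learned wiring, actual neuron dynamics.
    import random

    N, THRESH, LEAK = 1000, 1.0, 0.9
    targets = [[random.randrange(N) for _ in range(10)] for _ in range(N)]
    v = [0.0] * N

    for step in range(50):
        fired = [i for i in range(N) if v[i] >= THRESH]
        for i in range(N):                 # leak plus a noisy input drive
            v[i] = v[i] * LEAK + random.uniform(0.0, 0.15)
        for i in fired:
            v[i] = 0.0                     # reset after a spike
            for j in targets[i]:
                v[j] += 0.1                # excite ten random targets
        print(step, len(fired), "spikes")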
