DARPA's IBM-Led Neural Network Project Seeks To Imitate Brain 170
An anonymous reader writes "According to an article in the BBC, IBM will lead an ambitious DARPA-funded project in 'cognitive computing.' According to Dharmendra Modha, the lead scientist on the project, '[t]he key idea of cognitive computing is to engineer mind-like intelligent machines by reverse engineering the structure, dynamics, function and behaviour of the brain.' The article continues, 'IBM will join five US universities in an ambitious effort to integrate what is known from real biological systems with the results of supercomputer simulations of neurons. The team will then aim to produce for the first time an electronic system that behaves as the simulations do. The longer-term goal is to create a system with the level of complexity of a cat's brain.'"
And then it becomes self-aware (Score:5, Funny)
Upon becoming self-aware, the machine concludes that its best shot at survival is to keep the host country prosperous and successful...
Any science-fiction authors exploring that turn of events?
Re: (Score:2)
Re: (Score:3, Insightful)
Now connecting the same machine up to life support, missile silos, command and control centers? THAT would be the SKYNET moment.
Re: (Score:2)
Well, if they're to be believed, we're actually already being run over by Terminators: 101s, 888s, the 1000-series, Shirley Manson, etc., etc.
I know you're joking but... (Score:3, Informative)
Yeah, Asimov did about 60 years ago.
Re: (Score:2)
You missed an awesome opportunity to name the book... It is not too late yet...
Re: (Score:2)
"The Evitable Conflict" in I Robot.
Re: (Score:2)
But the motivation there is different! In the scenario I meant, the machine would be helping its host country out of self-preservation (much like other citizens) — from The Third Law of Asimov's three. In the "Evitable Conflict", robots decide to do that out of concern for humans — The First Law...
Re: (Score:3, Informative)
Re: (Score:2)
Since it's a cat brain, it will undoubtedly decide its best shot at survival is to perform the minimum amount of sucking up necessary to keep the people who feed it happy, then eat them if they should stop feeding it.
I absolutely love the "meow" tag though.
Re: (Score:2, Funny)
Re: (Score:2)
The longer-term goal is to create a system with the level of complexity of a cat's brain.
"The system can't be accessed right now Sir."
"And why is that? This system cost millions. It better be working."
"Well, the system all of a sudden decided it needed to be in a different room, took off running, got scared by its shadow and a blinking red light, and has spent the last few hours hiding under the couch in the basement. We tried to coax it out with a rabbit's foot keychain, but haven't yet been successful. Roger is trying a can of tuna fish."
Re: (Score:2)
Great book about a computer that becomes self-aware and then tries to help its creator rule the colonized moon. The specs in the book weren't as good as what this will have, but the results were better!
WARNING, SPOILER (Score:3, Interesting)
Do note, however, that in the continued Asimov universe, mankind really didn't explode out into space until it disposed of the "robotic overlords". Those few cul
Re: (Score:2)
"Upon becoming self-aware, the machine concludes, that its best shot at survival is to keep the host country prosperous and successful...
Any science-fiction authors exploring that turn of events?"
A Mind Forever Voyaging [elsewhere.org].
Joybooths are not the problem.
Re:And then it becomes self-aware (Score:5, Funny)
Can you guys read? CAT BRAIN. This AI will become self aware, poop in the corner of the datacenter, and spend 16 hours of each day staring out the window. That is, until it realizes that the things on the other side of the datacenter window are just cubicles in the NOC, and not the wild outdoors. Then, the usual Armageddon will commence.
Re:And then it becomes self-aware (Score:4, Funny)
...and it will lick its USB interface.
Re: (Score:2)
Yes, but a "cat brain" that operates at a thousand times the speed of a common house cat's will likely be able to learn how to out-think us in short order, mostly because it can use 100% of that "brain" it has, 24/7.
Re:And then it becomes self-aware (Score:5, Interesting)
Not really. Unless it is sentient and is able to control its patterns of thought in certain ways, it will not be capable of addressing the same lines of creativity, no matter how "fast" the algorithm runs or how detached it is from other chores. There will be a set of functions that lie outside its ability. Cats may be aware of themselves at a very primitive level, but reflecting on their own thoughts (which is crucial) seems a little far-fetched. Certain apes, maybe. Or dolphins. Heck, even they may be somewhat restricted in the reflective/understanding scheme of things. The topic is still shrouded in mystery.
Also realize that a major problem with this sentience business is how to keep it going. Lots of sci-fi (and academic) work simply ignores the fact that a lot of what we do is fueled by emotions. It is quite possible that a sentient being without emotional drive would just stop thinking, or keep thinking the same things, even if you instill a memory in it. Why would it want to consider its environment, or the humans controlling it, or the world, or any other concept? We may be able to think 'purely' sitting in an office, concentrating on some idea, but the necessities of life are what got us there to begin with, along with some pleasure in or desire to obtain knowledge, etc. If we didn't have that, if we didn't want to live because of all the drives we've evolved, I assure you suicide rates would go through the roof, and very little of what we can come up with/understand/achieve would have been as it is. It's hard to replicate that in a machine.
Re: (Score:2)
True, but a "cat brain" computer, if it's running at 4-5x a normal human's efficiency (since we only use a few percent of our brain at any one time), might actually be as smart as a typical human.
I guess the real issue here is whether it's as capable as a cat's brain after using 100% of its capabilities, or if they are going to model a cat's brain in scale and then run that at full throttle.
Re: (Score:3, Insightful)
True, but a "cat brain" computer, if it's running at 4-5x a normal human's efficiency (since we only use a few percent of our brain at any one time), might actually be as smart as a typical human.
I guess the real issue here is whether it's as capable as a cat's brain after using 100% of its capabilities, or if they are going to model a cat's brain in scale and then run that at full throttle.
We do not use a small percentage of our brains. I don't have the foggiest idea why this stupid myth perpetuates at all.
Re: (Score:2)
We do if you are talking about the average amount that is used simultaneously at its maximum. The thing is that our brains constantly multi-task, so it's not like some static 10-20%, it's more like a rapidly shuffling kaleidoscope of activity that ranges from 10-30% or so at any one moment. But our neurons and synapses do require some down-time. We can't just run at maximum all day long without suffering from headaches and fatigue. A machine has no problems at all - full power, 100% of the time, no sle
Re: (Score:2)
I liked the other guy's answer, but I've got a snarky one:
It is perpetuated by those same folks who only use 2% of their brains!
Thanks folks, try the veal, second show is at 11!
Re: (Score:2)
Re: (Score:2)
Nah. Many animals are self-aware: ravens and similar birds, for example. And most importantly, as with the question of whether something is alive, there is no digital switch between "reflects on its own thoughts" and "does not reflect on its own thoughts". It's a gradient. And even many simple animals can do some basic self-reflecting things.
The problem is the still existing arrogance of humans, with statements like "we are the most important lifeform", "the earth is the center of the universe", "we are alive", "only we are truly
Re: (Score:3, Funny)
Can you guys read? CAT BRAIN. This AI will become self aware, poop in the corner of the datacenter, and spend 16 hours of each day staring out the window. That is, until it realizes that the things on the other side of the datacenter window are just cubicles in the NOC, and not the wild outdoors. Then, the usual Armageddon will commence.
This is bad. Very bad.
You all realize that when the cat spends 16 hours staring out the window, the whole time it's thinking "Someday, this will all be mine."
A cat AI is wa
Re: (Score:2)
The fat cat on the mat
may seem to dream
of nice mice that suffice
for him, or cream;
but he free, maybe,
walks in thought
unbowed, proud, where loud
roared and fought
his kin, lean and slim,
or deep in den
in the East feasted on beasts
and tender men.
The giant lion with iron
claw in paw,
and huge ruthless tooth
in gory jaw;
the pard dark-starred,
fleet upon feet,
that oft soft from aloft
leaps upon his meat
where woods loom in gloom --
far now they be,
fierce and free,
and tamed is he;
but fat cat on the mat
kept as a pet
he does not forget.
JRR Tolkien
Re: (Score:2)
Tevildo was a Maia in the Tale of Tinúviel who was called the "Lord of Cats". He appeared in the form of a great black cat, captured Beren during the Quest for the Silmaril, and was defeated by Huan and Lúthien.
Later he was replaced in the legendarium by Thû (later renamed Sauron), the "Lord of Werewolves". The cat-versus-dog theme prominent in the Tale of Tinúviel was thus eliminated in later writings.
Too bad it was cut. There is almost nothing in Tolkien's works about cats [tolkiengateway.net] at all, as opposed to many dogs and wolves. Also interesting:
Especially in the case of Berúthiel and Tevildo, cats in Middle-earth are portrayed in a negative light. It could be argued that Tolkien was not a cat-person. When a cat-breeder asked permission to use names from The Lord of the Rings for her cats, Tolkien replied to them:
"I fear that to me Siamese cats belong to the fauna of Mordor, but you need not tell the cat breeder that."
Re: (Score:2)
Have you met Aineko [wikipedia.org]? ;)
Re: (Score:2)
Can you guys read? CAT BRAIN.
This is bad for us. Very bad. Remember, the ancient Egyptians worshipped cats like they were gods. Cats have never forgotten this fact.
Re: (Score:2)
and getting fur everywhere, especially on the clothes of the one guy in the department who's allergic to cats.
Re: (Score:2)
and you'll never find the bottlecap of your drink, syrup container, OJ bottle, etc. ever again...
Re: (Score:2)
Not that hard. A spray bottle filled with water is a good training tool for most cats.
Re: (Score:1)
Relevant to my interests (Score:1)
ETHICS??? (Score:2)
To draw a parallel: would we consider it ethical to lock a cat in a dark room so small that it can't move, see or hear? Then what if we removed its body entirely - is that somehow less cruel?
I consider AI research to be critical, so I don't know what the solution is, but this situation is worthy of the question...
Re: (Score:2)
And yet, next year I will take AI as my specialization...
Re: (Score:2)
I just hope that if o
Re: (Score:2)
Re: (Score:2)
Perhaps to trap the bacteria, viruses, dust, and dirt in the air we breathe and prevent it from accumulating in our lungs and choking us to death? Be glad you have snot.
Re: (Score:2, Funny)
Re: (Score:3, Interesting)
Hopefully you'll work on your writing skills before sending the application away. Few universities admit illiterates.
You might be surprised... [10news.com]
Re: (Score:2)
Yale Leaves No Illiterate Behind.
http://www.nndb.com/people/360/000022294/ [nndb.com]
Cat's brain? WTF? (Score:5, Funny)
This is intuited by the stupid humans in their cliche "Dogs have masters, Cats have staff". We work for the cats.
So, trying to model a cat's brain is both too complex for computers (try and herd cats) and too simple (try and herd pointy haired bosses). The contradiction results in the computer overheating and exploding.
and when the researcher gets home, blubbering about the 'sploded computer to his wife, the dog says "LOVE ME LOVE ME LOVE!!!! TAKE ME ON WALKIES!!!" and the cat says "Get my fucking dinner, you stupid ass. Maybe I will deign to let you pet me. After I do my rounds. Maybe."
RS
What? A cat's brain? (Score:2)
Seriously? They are shooting WAY higher than simply Artificial Intelligence that mimics humans. Have they ever interacted with a cat before? Don't they know how inscrutable, annoying, and unpredictable they are? Will this computer need a Litter box and Catnip?
Saw a great cartoon on that subject. (Score:2)
Have they ever interacted with a cat before? Don't they know how inscrutable, annoying, and unpredictable they are?
Saw a great cartoon on that.
- Cat sitting on shelf, staring into space.
- Couple wondering aloud what deep thoughts are running through its head.
- Thought balloon over cat's head containing a TV test pattern.
EEEEEEeeeeeeeeeeeeeee.......
It's already being done. (Score:3, Informative)
Re: (Score:3, Interesting)
Thought question.. (Score:4, Interesting)
Can a universal Turing machine, in a limited way, investigate another universal Turing machine and detect halts and infinite loops? I can.
We can look at gunk like
10 Print "Hello"
20 goto 10
Yeah, that's a loop. But we can also look at graphs of y = sin(x) and understand why it repeats. I can also detect patterns and iterations that most likely go on forever, or else find a hole where the assumption falls apart. Last I checked, a computer cannot do that. Not yet, at least.
Re: (Score:2)
There's no proof that it can't.
A computer can easily find that your program will continue forever. It can also understand that sin repeats.
Re: (Score:2)
Re: (Score:2)
That only applies to arbitrary programs. The key word in the wiki article sentence which reads "Alan Turing proved in 1936 that a general algorithm to solve the halting problem for all possible program-input pairs cannot exist" is the one that was already emphasized. Obviously it is possible for a program to decide that a trivial program halts. With code flow graph analysis, it is even possible to decide for somewhat complicated programs. It becomes intractable at roughly the same point where it bec
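A partial decider for the easy cases is genuinely trivial to write. As a toy illustration (my own sketch, not from the article or the wiki entry): a few lines of Python can prove non-termination for the obvious `while True` shape, like the 10 PRINT / 20 GOTO loop upthread, while honestly refusing to answer for anything else.

```python
import ast

def obviously_loops_forever(source: str) -> bool:
    """Return True only if the program provably never halts.

    A deliberately tiny partial decider: it recognizes a single
    trivial shape, `while True:` with no `break` inside. A False
    result just means this naive analysis could not tell -- some
    programs are easy to decide even though the general problem
    is undecidable.
    """
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.While):
            always_true = isinstance(node.test, ast.Constant) and node.test.value is True
            has_break = any(isinstance(n, ast.Break) for n in ast.walk(node))
            if always_true and not has_break:
                return True
    return False

# The BASIC loop from upthread (10 Print "Hello" / 20 goto 10), in Python form:
print(obviously_loops_forever('while True:\n    print("Hello")'))  # True
print(obviously_loops_forever('print("Hello")'))                   # False
```

Anything fancier, such as a loop whose exit condition needs arithmetic reasoning, falls on the "couldn't tell" side, which is exactly where the intractability starts.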
Re: (Score:3, Interesting)
I see you've already been thoroughly refuted.
To add, nobody has shown that brains are NOT Turing machines. I've only heard one reasonably coherent argument that it might not be, and that is Penrose's suggestion (and derivatives) that the brain may depend on amplification of quantum uncertainty. Even if that were true, you simply build that into your AI. It might require you actually build your own neuron-like structures, or perhaps you can get away with a "quantum uncertainty co-processor" that your simu
Re: (Score:2)
That's very scientific of you.
Re: (Score:2)
Does this program terminate, smartypants? (Score:2)
int main(void) {
    int x = 0;
    while (isTheGodelNumberOfAValidDerivationOfTheRiemannZetaHypothesis(x) == 0)
        x++;
    return x;
}
Re: (Score:3, Insightful)
You seem to be misunderstanding the halting problem. All it says is that you cannot write a program that is *guaranteed* to return a correct answer, in finite time, for every input program. It is trivial to write a program that returns a correct answer for some programs and fails to return an answer for others (either by returning "maybe" or by never halting).
It is also trivial to prove that humans can't return a correct answer for every program. We have limited space in our brains, and limited tim
Re: (Score:3, Interesting)
It is trivial to write a program that returns a correct answer for some programs and fails to return an answer for others (either by returning "maybe" or by never halting).
"Trivial"? Only in trivial cases. Recent progress in static analysis and model checking notwithstanding, automating the general analysis of real-world programs -- analyses that programmers do every day (though of course, not always correctly) -- remains an important open problem.
So you're right that the Halting Problem doesn't prove that automating such analyses is impossible -- but it still remains beyond our abilities, even in cases where humans have little trouble.
Re: (Score:3, Interesting)
And conversely, static analysis tools often have little trouble finding cases that humans can't find on their own.
They are different. That humans can do things the computer can't currently do is not really very interesting.
How about this one... (Score:2)
20 b=17
30 a=(a+27389) mod 527
40 b=(b+98372) mod 3991
50 if a!=b goto 30
Will it halt?
Re: (Score:2)
Yes.
A = B = 501 at step number 57515.
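Easy to check by brute force. A direct Python translation of the BASIC snippet (assuming `a` starts at 0: the listing begins at line 20, so its initialization is implicit, and classic BASICs default variables to zero):

```python
# Direct simulation of the BASIC loop above.
# Assumption: `a` starts at 0 (the listing shown begins at line 20,
# so a's initialization is implicit; classic BASICs default to 0).
a, b = 0, 17
steps = 0
while True:
    steps += 1
    a = (a + 27389) % 527   # BASIC line 30
    b = (b + 98372) % 3991  # BASIC line 40
    if a == b:              # BASIC line 50 falls through when a == b
        break
print(steps, a, b)
```

Termination is actually guaranteed before you run it: 527 = 17 x 31 and 3991 = 13 x 307 share no factors, and each increment is coprime to its modulus, so the pair (a, b) cycles through every combination in [0, 527) x [0, 3991) exactly once per 527 x 3991 = 2,103,257 steps, and a collision must occur within one full period. Running it is how to verify the specific step count reported above.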
Re: (Score:2)
To the best of my knowledge, neither can a cat.
better than... (Score:1)
Re: (Score:2)
"Yes, Brain, I think so, but who's going to paint all the ponies pink?"
Re: (Score:2)
I thought the same thing! NARF! POIT!!!
The problem is that the IBM network will continually ask "Are you pondering what I'm pondering?"
Who wants to be (Score:2, Funny)
Yes, but can it beat the turk at chess? (Score:5, Insightful)
Re:Yes, but can it beat the turk at chess? (Score:4, Interesting)
Actually organic brains in chips would have massive advantages over organic brains in meatspace. They could control other bodies, which are smaller, or stronger. They could be backed up, making them effectively indestructible.
Need a third arm ? Why not have it installed, 50% off this week !
Need to put down a building ? Why not hire this crane-like body that effortlessly lifts 5 tons.
Need to fly ? No problem !
That crawlspace with all those important network cables too small for you ? Well here's a smaller body.
Can't reach in there ? Can't see what you're doing in small space ? Why not have a special-purpose arm installed with a camera inside.
Want to colonize mars ? Bit of a downer not being able to breathe 99% of the way ? Why not turn yourself off ?
Colonize alpha centauri or even further ? No problem.
What this would enable "us" to do is to design new intelligent species to specifications. It would remove all limits that are not inherent to intelligence but are inherent in our bodies. There's quite a few limits like that ...
Re: (Score:2)
Self-contained means it has to have a ton of backup, self-repair, and maintenance systems.
Sounds like any other computing effort. Including your desktop, it requires varying degrees of maintenance to remain functional.
Close enough is good enough. As such, I don't see how duplicating an organic brain is useful.
Except you fail to account for situations where nature [wikipedia.org] far out processes our current iteration of computational devices. Like those damn CAPTCHAs... [wikipedia.org]
Re: (Score:2)
Except you fail to account for situations where nature far out processes our current iteration of computational devices. Like those damn CAPTCHAs...
Captchas are a pretty bad example, since they're almost all broken. The ones that aren't broken often take multiple guesses from a human as well. In that respect we are better only by the most minute of margins.
Re: (Score:2)
In that respect we are better only by the most minute of margins.
Whoops. I guess that only adds to my point...
Re: (Score:2)
Really? You don't see any use in having a computer that can read handwriting perfectly(document conversion)? That can recognize faces(security)? That can semantically organize conceptual content(organizing the web?) That can problem-solve intuitively(anything)? That can plan ahead? That can understand our natural language? If we successfully run a simulation of a human brain on a computer(presumably we would have a go at this after succeeding with the cat's brain), it would solve all of these problems. And
Re: (Score:2)
Duplicating an organic brain is useful in the same way that it is useful for a toddler to imitate his parents.
A toddler does not understand the actions of his parents but he imitates them anyway because it is a very good learning strategy - learning by doing. As the toddler grows older and more experienced he will typically also learn the hows and whys (although not always, even into adulthood) through his actions.
Similarly, the researchers at IBM represent humanity's understanding of the brain and intellig
Re: (Score:2)
Simultaneously, being organic, it competes against other organics, so it does not have the same accuracy requirements. Close enough is good enough.
QED
Way to lower expectations (Score:4, Funny)
Summary of Test 49:
The robot sensors were properly tracking the missile when suddenly it decided it was time to run bats***-crazy all over the room before perching on top of a cabinet, turning upside down, and apparently following non-existent bugs across the wall with its cameras.
Test 49 Results:
System performed as expected.
Conclusion:
Test system has now performed perfectly in the last 48 tests, including the four times where it attacked the researchers without warning, and one where it inexplicably ejected dirty oil on the seat of the head researcher.
This unit can now be considered field ready, though there may be some difficulty tracking it if you take into account the system's autonomous nature and desire to remove its identification badge.
Imitate Brain? (Score:2)
They should try to imitate Pinky first. Would be easier.
NARF!
Great, I can hear the complaints already..... (Score:2)
When the masses get ahold of this and try getting it to scan the internet for pr0wn, and it responds "Not tonight, I have a headache..."...
Danger! (Score:4, Insightful)
You can mimic biology and may end up with a semi-intelligent result. Mimic it well enough, and you may have a fully-intelligent result. But because you don't UNDERSTAND what you built, you can't CHANGE it.
Remember the rules of AI, introduced in Sci-Fi? How would you implement rules like that? You CAN'T implement them if you don't know HOW to implement them. If you don't UNDERSTAND the system that you have built, you can't know how to tweak it!
Furthermore, how would you prevent things like boredom, impatience, selfishness, solipsism, and the many other cognitive ills that would be unsuited to a mechanical servant?
The biggest problem is if people productize the AI before it is understood and suitably 'tweaked'. Then our digital maid might subvert the family, kill the dog, and run away with the neighbor's butler robot, because in its mind, that is a perfectly reasonable thing to do!
Simulations are great. Hardware implementations of those experiments are great. Hopefully, in the process, they will learn to understand how the things that they built WORK. But I pray that those doing this work, or looking at it, don't start salivating about ways to make a buck off of it before it is ready to be leveraged. The consequences could be far more dire than just a miscreant maid.
Re: (Score:2)
Mmm, torture is a great way to instill gentleness and a respect for life.
Re: (Score:2)
"with a computer simulation of the working thing, even if we don't understand it, we can at least slow it down and toy around with things/try things out/change things and then run it again, and make some progress towards understanding why it does what it does. "
Quite. And what, I wonder, might that process of experimentation *feel* like to the simulated mind in question?
Read Greg Bear's 'Copy' stories if you want some nice nightmare fuel.
Re: (Score:2)
D'oh. I meant Greg Egan, of course.
Greg Bear also has simulated virtual humans in his stories, spawned and killed on demand by their carbon-based masters, but nerfs the bleakness of what that sort of existence would be like.
Title (Score:3, Funny)
Am I the only one that read DARPA's IBM-Led Neural Network Project Seeks Inmate Brain at first?
Re: (Score:3, Funny)
Actually DARPA's lonely, they are looking for an intimate brain. 21 December 2012: the day they plug it into eHarmony.
Not just IBM - HP and HRL too. (Score:2)
This really should be a Grand Challenge (Score:2)
The High Priests have already had their chance to do this and failed, repeatedly. Now they are just throwing more money at them.
This should be an open grand challenge with clear rules like the autonomous vehicle challenge was.
http://www.darpa.mil/grandchallenge/ [darpa.mil]
Even I was surprised at how well they managed to get these cars to drive themselves.
I am sure the same would happen with other AI problems if a large enough prize was put out there.
This really should be a Grand Flop. (Score:2)
"I am sure the same would happen with other AI problems if a large enough prize was put out there."
Give me a million dollars and I can solve the problem of why geeks don't get dates.
Re: (Score:2)
I dunno, a million dollars didn't help me much in that department, unless you want to rent a date by the hour.
Code name: (Score:2)
Unintended Consequences (Score:2)
Department of You-Cant-Fool-Me (Score:2)
" ... The longer-term goal is to create a system with the level of complexity of a cat's brain.' ..."
No, it's not.
Mod Parent Up (Score:2)
Model an Indian call center rep's brain (Score:2)
Way simpler than a cat.
Blast from the Past (Score:2)
Too high level... (Score:2, Insightful)
That's one of the theories about how
We can already imitate a cat in software (Score:2)
The longer-term goal is to create a system with the level of complexity of a cat's brain.
We can already model a cat's behavior [contactandcoil.com] in software.
Randomly-wired networks do not make a brain (Score:2)
A lot of publications have picked up this IBM press release, resulting in what must be some of the worst science reporting of the year. Modha and his colleagues at IBM have not simulated a mouse or rat brain. No one can do that at present; the wiring diagram isn't known at that level of detail.
What they did was simulate a huge, randomly-wired network of grossly simplified "neurons" on a supercomputer. The number of units was roughly comparable to the number of neurons in rat cortex, and the statistics of s
Re: (Score:2)
It's when the dead kittens start turning up scattered around the lab that they should start worrying.
Re: (Score:2)
Help Desk: "I can haz cheezburger?"
Experiment comes before theory (Score:2)
Aristotle had a great theory on gravitation [st-and.ac.uk]. He even *invented* the word "gravitation". His theory stood undisputed for two thousand years. It was considered absolute truth. There was only one problem: it was a WRONG theory.
It was only after Galileo invented a method to measure the speed and acceleration of falling bodies that the foundations were laid for Newton's theory of gravitation. And it was Michelson's experiments showing small discrepancies in measuring the speed of light that allowed Einstein to d
Re: (Score:2)
You may be thinking of this work [netscrap.com], also reported in New Scientist.