DARPA's IBM-Led Neural Network Project Seeks To Imitate Brain
An anonymous reader writes "According to an article in the BBC, IBM will lead an ambitious DARPA-funded project in 'cognitive computing.' According to Dharmendra Modha, the lead scientist on the project, '[t]he key idea of cognitive computing is to engineer mind-like intelligent machines by reverse engineering the structure, dynamics, function and behaviour of the brain.' The article continues, 'IBM will join five US universities in an ambitious effort to integrate what is known from real biological systems with the results of supercomputer simulations of neurons. The team will then aim to produce for the first time an electronic system that behaves as the simulations do. The longer-term goal is to create a system with the level of complexity of a cat's brain.'"
Thought question.. (Score:4, Interesting)
Can a universal Turing machine, in a limited way, investigate another universal Turing machine and detect halts and infinite loops? I can.
We can look at gunk like
10 Print "Hello"
20 goto 10
Yeah, that's a loop. But we can also look at graphs of y = sin(x) and understand why it repeats. I can also detect patterns and iterations that most likely run forever, or else find the hole where that assumption falls apart. Last I checked, the computer cannot do that. Not yet, at least.
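To be fair, a detector for cases this obvious is easy to write; it's the general case that's out of reach. Here's a minimal, purely illustrative Python sketch (the function and its pattern rules are invented for this example): it proves non-termination for an unconditional backward GOTO, proves termination for straight-line code, and punts with "maybe" on everything else.

# Toy halting check, for illustration only: it handles the two easy
# patterns above and says "maybe" for everything else -- which is all
# the Halting Problem actually forces on us.
def analyze(program: str) -> str:
    numbered = {}
    for line in program.strip().splitlines():
        num, _, rest = line.strip().partition(" ")
        numbered[int(num)] = rest.strip().lower()
    for num, stmt in numbered.items():
        if stmt.startswith("goto"):
            target = int(stmt.split()[1])
            between = [s for n, s in numbered.items() if target <= n <= num]
            # A backward jump with no conditional in between never ends.
            if target <= num and not any(s.startswith("if") for s in between):
                return "loops forever"
    # No GOTO at all: straight-line code always halts.
    if not any(s.startswith("goto") for s in numbered.values()):
        return "halts"
    return "maybe"

print(analyze('10 PRINT "Hello"\n20 GOTO 10'))  # -> loops forever
print(analyze('20 PRINT "Hello"'))              # -> halts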
Re:Yes, but can it beat the turk at chess? (Score:4, Interesting)
Actually organic brains in chips would have massive advantages over organic brains in meatspace. They could control other bodies, which are smaller, or stronger. They could be backed up, making them effectively indestructible.
Need a third arm? Why not have it installed, 50% off this week!
Need to put down a building? Why not hire this crane-like body that effortlessly lifts 5 tons.
Need to fly? No problem!
That crawlspace with all those important network cables too small for you? Well, here's a smaller body.
Can't reach in there? Can't see what you're doing in a small space? Why not have a special-purpose arm installed with a camera inside.
Want to colonize Mars? Bit of a downer not being able to breathe 99% of the way? Why not turn yourself off?
Colonize Alpha Centauri or even further? No problem.
What this would enable "us" to do is design new intelligent species to specification. It would remove all limits that are not inherent to intelligence itself but are inherent in our bodies. There are quite a few limits like that ...
Re:Relevant to my interests (Score:3, Interesting)
Hopefully you'll work on your writing skills before sending off the application. Few universities admit illiterates.
You might be surprised... [10news.com]
Anonymous Coward (Score:1, Interesting)
I had a conversation with my logic teacher last week about AI (he introduced the class with a brief talk about AI, in which he explained that some authors consider human-like intelligence unreachable).
At the end of the class I remembered that I had casually printed something AI-related while testing my printer. I had read it here on Slashdot or somewhere, found it interesting, and saved it in a .txt file.
I asked him if this was considered AI:
[quote]The most compelling case for AI I've seen was a complete accident. This story is from memory -- I read about it years ago.
To test whether a neural network could create an efficient program, researchers prepared a network (on a simulator) and then pitted it against a team of human programmers. The task was to write the most efficient program possible for an EEPROM that performed some simple task.
They tested both programs, and they both worked. They compared the code, and the code created by the neural net was a fraction of the size. But the code didn't make any sense and should not have functioned at all.
Also, they discovered that when they took the EEPROM to another location to show someone, the program didn't work anymore.
Eventually they figured out what happened. The neural network had learned that the EEPROM could be used as an analog device as opposed to a digital one. It was using complex, unintended properties of the circuitry, like magnetic flux between the wires, to achieve the goal. But because these features weren't engineered, the relationships changed when they took the circuit to another location where the room temperature, barometric pressure, and other conditions were different.
But it shows that we can make machines that are "smarter" than we are -- machines that can end up achieving far more than they were intended to. For that reason, I don't think man will "invent" true AI. I think it will suddenly explode from some random and unintended relationship. It will grow like our own (biological) form of life. And then we just better hope it means us well.[/quote]
I think somebody else replied with this link (or I found it later while googling about it): http://www.newscientist.com/article.ns?id=dn2732 , which is quite similar but not the same.
Back to the conversation with my teacher, he said that it wasn't considered AI.
Then I asked him whether it would be possible to model a human brain. The only problem I could see with such a feat, apart from the huge computational power required, was getting a complete understanding of the chemical reactions within a cell and between cells: if you can model a chemical reaction, you can model a cell; if you can model a cell, you can model tissues, cell signaling/neurotransmission, and so on...
He replied that at some point you'll hit the Heisenberg uncertainty principle, which I only vaguely remembered.
The conclusion was that who knows whether what makes us intelligent is unmeasurable or not; the only way to find out would be to try to build such a system.
By the way:
What do you know about neuroscience? Any interesting books, articles, or papers to read about it?
I have a basic understanding of how neurotransmitters and neuroreceptors work. I know it's still a huge area of research, and that they're so complex that instead of working with one particular neuroreceptor they're tackled in groups depending on which neurotransmitter affects them, for example serotonin and the 5-HT group.
Re:Thought question.. (Score:3, Interesting)
I see you've already been thoroughly refuted.
To add, nobody has shown that brains are NOT Turing machines. I've only heard one reasonably coherent argument that they might not be, and that is Penrose's suggestion (and its derivatives) that the brain may depend on amplification of quantum uncertainty. Even if that were true, you simply build that into your AI. It might require that you actually build your own neuron-like structures, or perhaps you can get away with a "quantum uncertainty co-processor" that your simulation refers to when it needs a shot of non-determinism.
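Purely as a sketch of that co-processor idea (nothing below is a real quantum API; os.urandom() just stands in for hypothetical hardware entropy, and the neuron model is made up for this illustration), the simulation side might look like this in Python:

import math
import os

# Stand-in for the hypothetical "quantum uncertainty co-processor":
# ordinary OS entropy in place of genuine quantum randomness.
def uncertainty_coprocessor() -> float:
    return int.from_bytes(os.urandom(4), "big") / 2**32

def neuron_fires(weighted_input: float) -> bool:
    # Deterministic part of the model: a logistic activation giving a
    # firing probability from the summed, weighted inputs.
    p = 1.0 / (1.0 + math.exp(-weighted_input))
    # Non-deterministic part: the simulation defers to the co-processor
    # whenever it needs a shot of irreducible randomness.
    return uncertainty_coprocessor() < p

print(neuron_fires(0.8))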
WARNING, SPOILER (Score:3, Interesting)
Do note, however, that in the continued Asimov universe, mankind really didn't explode out into space until it disposed of its "robotic overlords". Those few cultures ['Spacers'] who held on to their robots slowly stagnated and died off.
Asimov's self-aware robots were never the violent, conquering overlords seen in many other sources of fiction (Terminator, Matrix), nor were they really human-equals (Star Wars, Star Trek), but were rather a crutch for mankind that man needed to discard to truly progress.
Also, please note that I am willfully ignoring anything in the Foundation Universe not written by Asimov, as well as Asimov's last book "Foundation and Earth", for reasons that anyone who has read it will clearly understand.
Re:Thought question.. (Score:3, Interesting)
It is trivial to write a program that returns a correct answer for some programs and fails to return an answer for others (either by returning "maybe" or by never halting).
"Trivial"? Only in trivial cases. Recent progress in static analysis and model checking notwithstanding, automating the general analysis of real-world programs -- analyses that programmers do every day (though of course, not always correctly) -- remains an important open problem.
So you're right that the Halting Problem doesn't prove that automating such analyses is impossible -- but it still remains beyond our abilities, even in cases where humans have little trouble.
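To make the "correct for some programs, maybe for the rest" point concrete, here's a deliberately simplistic Python sketch (the function name and the single pattern it checks are invented for this illustration), in the spirit of a sound-but-incomplete static check:

import ast

def obvious_nontermination(source: str) -> str:
    # Sound but very incomplete: only the crudest pattern is recognised,
    # a literal "while True:" whose body contains no break/return/raise.
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.While):
            always_true = isinstance(node.test, ast.Constant) and node.test.value is True
            can_escape = any(isinstance(n, (ast.Break, ast.Return, ast.Raise))
                             for n in ast.walk(node))
            if always_true and not can_escape:
                return "never halts"
    return "maybe"

print(obvious_nontermination("while True:\n    x = 1"))         # -> never halts
print(obvious_nontermination("for i in range(10):\n    pass"))  # -> maybe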
Re:And then it becomes self-aware (Score:5, Interesting)
Not really. Unless it is sentient and is able to control its patterns of thought in certain ways, it will not be capable of the same lines of creativity, no matter how "fast" the algorithm runs or how detached it is from other chores. There will be a set of functions that lie outside its ability. Cats may be aware of themselves at a very primitive level, but reflecting on their own thoughts (which is crucial) seems a little far-fetched. Certain apes, maybe. Or dolphins. Heck, even they may be somewhat restricted in the reflective/understanding scheme of things. The topic is still shrouded in mystery.
Also realize that a major problem with this sentience business is how to keep it going. Lots of sci-fi (and academic) work simply ignores the fact that a lot of what we do is fueled by emotions. It is quite possible that a sentient being without emotional drive would just stop thinking, or keep thinking the same things, even if you instill a memory in it. Why would it want to consider its environment, or the humans controlling it, or the world, or any other concept? We may be able to think 'purely' sitting in an office, concentrating on some idea, but the necessities of life are what got us there to begin with, along with some pleasure in, or desire to obtain, knowledge, etc. If we didn't have that, if we didn't want to live because of all the drives we've evolved, I assure you suicide rates would go through the roof, and very little of what we can come up with, understand, or achieve would have been possible. It's hard to replicate that in a machine.
Re:Thought question.. (Score:3, Interesting)
And conversely, static analysis tools often have little trouble finding cases that humans can't find on their own.
They are different. That humans can do things the computer can't currently do is not really very interesting.