Building Brainlike Computers
newtronic clues us to an article in IEEE Spectrum by Jeff Hawkins (founder of Palm Computing), titled Why can't a computer be more like a brain? Hawkins brings us up to date with his latest endeavor, Numenta. He covers progress since his book On Intelligence and gives details on Hierarchical Temporal Memory (HTM), which is a platform for simulating neocortical activity. Programming HTMs is different — you essentially feed them sensory data. Numenta has created a framework and tools, free in a "research release," that allow anyone to build and program HTMs.
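The "feed it data instead of programming it" idea can be illustrated with a toy sketch. To be clear, this is not Numenta's framework or API; `ToySequenceMemory` and its transition-counting scheme are invented purely to show the flavor of a system that learns temporal structure from exposure to a sensory stream rather than from explicit rules:

```python
from collections import defaultdict

class ToySequenceMemory:
    """Learns first-order transitions from a stream of sensor symbols."""

    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def feed(self, stream):
        # "Programming" is just exposure: each observed pair updates counts.
        for prev, cur in zip(stream, stream[1:]):
            self.counts[prev][cur] += 1

    def predict(self, symbol):
        # Return the most frequently seen successor, or None if unseen.
        followers = self.counts.get(symbol)
        if not followers:
            return None
        return max(followers, key=followers.get)

mem = ToySequenceMemory()
mem.feed(list("abcabcabd"))
print(mem.predict("a"))  # 'b' (the only successor ever seen after 'a')
print(mem.predict("b"))  # 'c' (seen twice, vs. 'd' once)
```

A real HTM is hierarchical and works on distributed sensory patterns, not single symbols, but the programming model is the same: behavior comes from the data you expose it to.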
Re:been there, done that... (Score:3, Interesting)
First, AI ignored the brain. Then neural networks took off in the '80s, and through the '90s were the 'hot thing' in AI and machine learning: by borrowing some 'brain-like' ideas, flexible learning systems could be built, multilayer perceptrons and the like. Since then, however, neural networks have largely been made obsolete. From both a theoretical and a practical standpoint, methods like support vector machines and boosting are far better than neural networks; these are the current state of the art. And they return us to the 'old AI' approach of ignoring the brain, in that they are NOT 'brain-like' in any significant way. Rather, they are the natural algorithms that arise once you have a mature theory of machine learning (which, one might argue, science now has, with VC theory and later developments).
I tried to read the Numenta stuff, but really I fail to see the 'point' in it. Basically all I want is to see that their methods outperform support vector machines - show me that, and I will be an instant convert. Until then, I remain skeptical.
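For a concrete sense of the baseline being demanded here, below is a minimal, pure-Python sketch of a linear SVM trained with a Pegasos-style subgradient method. The toy data and hyperparameters are made up for illustration; a real comparison would of course use an off-the-shelf package such as libSVM on a real dataset:

```python
# Pegasos-style subgradient training of a linear SVM, pure Python.
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def train_svm(data, lam=0.01, epochs=200):
    # Minimize lam/2*||w||^2 + average hinge loss via per-example updates.
    dim = len(data[0][0])
    w = [0.0] * dim
    t = 0
    for _ in range(epochs):
        for x, y in data:
            t += 1
            eta = 1.0 / (lam * t)       # decaying step size
            scale = 1.0 - eta * lam     # regularization shrinkage
            if y * dot(w, x) < 1.0:     # margin violated: shrink and push
                w = [scale * wi + eta * y * xi for wi, xi in zip(w, x)]
            else:                       # margin satisfied: shrink only
                w = [scale * wi for wi in w]
    return w

# Toy linearly separable data; last coordinate is a constant bias feature.
data = [([2.0, 2.0, 1.0], 1), ([3.0, 1.0, 1.0], 1), ([2.5, 3.0, 1.0], 1),
        ([-2.0, -1.0, 1.0], -1), ([-3.0, -2.0, 1.0], -1), ([-1.0, -2.5, 1.0], -1)]
w = train_svm(data)
preds = [1 if dot(w, x) >= 0 else -1 for x, _ in data]
print(preds)  # all six training points correctly classified
```

The point of the demand stands: a baseline this simple is cheap to run, so any new method claiming superiority should be able to beat it on standard benchmarks.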
Recognition Is a Small Part of the Problem (Score:3, Interesting)
While I believe that the HTM is indeed a giant leap in AI (although I disagree with Numenta's Bayesian approach), I cannot help thinking that Hawkins is only addressing a small subset of intelligence. The neocortex is essentially a recognition machine but there is a lot more to brain and behavior than recognition. What is Hawkins' take on things like behavior selection, short and long-term memory, motor sequencing, motor coordination, attention, motivation, etc...?
Re:Interesting, but... (Score:5, Interesting)
This could be "converted" into traditional desires: if you taught it to find the most attractive woman, and gave it ranked values for individual body features and for which combinations of features are considered attractive, it would "have" the "desire" to find the most beautiful woman in any given group.
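The "ranked values" idea is really just a utility function over features, with "desire" meaning "maximize the score." A minimal sketch, with entirely made-up feature names, weights, and ratings:

```python
# Hypothetical weighted-feature scoring: "desire" as a utility to maximize.
def utility(features, weights):
    # Score is a weighted sum of (invented) numeric feature ratings.
    return sum(weights[name] * value for name, value in features.items())

weights = {"symmetry": 0.5, "proportion": 0.3, "complexion": 0.2}
candidates = {
    "A": {"symmetry": 7, "proportion": 8, "complexion": 6},
    "B": {"symmetry": 9, "proportion": 6, "complexion": 7},
}
best = max(candidates, key=lambda k: utility(candidates[k], weights))
print(best)  # 'B' (score 7.7 vs. 7.1)
```

A weighted sum cannot express "features attractive in conjunction"; that would need interaction terms or a learned model, which is exactly where a system like an HTM would differ from this hand-coded sketch.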
I'd say that researchers need to learn to put things into layman's terms, but all we need are good editors to put it into simpler terms, really.
Re:this is stupid (Score:3, Interesting)
Ok, according to Moore's law we will get there with a transistor-based computer. I believe the idea is to create the hardware equivalent of a neuron, something like Asimov's positronic brain. Currently the modern computer is little more than a highly programmable calculator. The idea in this case is to create a computer that can learn, or repurpose its transistors/neurons.
The end goal is to create more advanced computers or software. You'd do better venting your religious frustrations against scientists in the genetics industry, where the end goal is more advanced people or thoughts.
Re:this is stupid (Score:2, Interesting)
The paragraph in question:
"So, before you go off and read your king james edition of the bible, assuming your one of those blind-eyed, deaf-eared christians from the bible belt (and ooh boy, if you ever did any "real" research on the history of that thing you would know why so many people are becoming atheists), try using that brain you built yourself."
What do you object to? His bigotry?
A bigot can still have a point. If you research the history of the bible even just casually you discover that basically no part of it has managed to survive without being mangled.
And of course there's the issue that there are no reputable and reliable contemporary references for the existence of Jesus, who supposedly created quite a stir during his brief life.
Not to mention how heavily expurgated the bible is; for example large fragments of the gospel according to Mary Magdalene were uncovered. When are those going to be inserted into the bible?
Even if you accept that the most significant events in the bible were real, you have to agree that the bible is at this point a horribly unreliable source.
Mind.Forth AI Simulates Brainlike Computers (Score:1, Interesting)
Neuroscientific theory [visitware.com] indicates that we will not be able to build truly brainlike computers until we have gone beyond serial, von-Neumann-bottleneck computers into the realm of massively parallel (maspar) hardware and software.
Mind.Forth [sourceforge.net], a primitive but True AI, simulates the maspar human cortex by taking a few shortcuts based on the differences between neuronal wetware and computer hardware. For instance, Mind.Forth, unlike chatbots, has concepts. Whereas a brain will activate thousands of concept-neurons in parallel, Mind.Forth activates only the most recent instance of a concept, because computer hardware is more reliable in the short term than a single human nerve-fiber, which may be fatigued or even dead.
AIMind-I.com [aimind-i.com] is another pretending-to-be-maspar artificial intelligence based on the original Mind.Forth design.
Mind for MSIE [sourceforge.net] (for Microsoft Internet Explorer) is the JavaScript tutorial program (albeit a primitive True AI) that shows you how spreading activation flits from concept to concept in the serial computer pretending to be a maspar brain.
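The "spreading activation flits from concept to concept" behavior can be sketched serially in a few lines. The concept graph, decay factor, and threshold below are invented for illustration and are not taken from Mind.Forth:

```python
# Toy spreading activation over a concept graph (serial stand-in for maspar).
graph = {
    "dog": ["animal", "bark"],
    "animal": ["alive"],
    "bark": ["sound"],
    "alive": [],
    "sound": [],
}

def spread(start, decay=0.5, threshold=0.1):
    # Activation flows outward from the start concept, weakening each hop.
    activation = {start: 1.0}
    frontier = [start]
    while frontier:
        nxt = []
        for node in frontier:
            for nbr in graph[node]:
                a = activation[node] * decay
                if a > threshold and a > activation.get(nbr, 0.0):
                    activation[nbr] = a
                    nxt.append(nbr)
        frontier = nxt
    return activation

print(spread("dog"))  # activation decays with distance from "dog"
```

A brain would update all of these nodes in parallel; the serial loop here is exactly the kind of shortcut the parent describes.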
What mistakes do machine learning machines make? (Score:3, Interesting)
I have an engineering degree and a master's specialising in machine learning, but that was 13 years ago; I would be delighted with more pointers to the state of the art.
http://www.cnbc.cmu.edu/Resources/disordermodels/ [cmu.edu], on bipolar disorder and neural networks, seemed promising at one stage, but I had neither the time, energy, nor access rights to read the latest papers. [The web page is dated 1996.]
In Defence of Hawkins (Score:3, Interesting)
In defence of Hawkins, note that he does not disagree (RTA) that there are specialized regions in the brain. However, this does not imply that the brain uses a different neural mechanism for different regions. It only means that a region that receives audio input will specialize in processing sounds. It all has to do with how the input and output fibers are connected. The cortex will rewire itself to accommodate any sensory modality. IMO, Hawkins is right in this regard. Even specialized areas of the visual cortex that show a gradation of recognition capabilities can be explained by a hierarchical system heavily dependent on feedback.
Re:been there, done that... (Score:2, Interesting)
People don't use neural networks because they are not as easy to train as SVMs (given libSVM or an equivalent off-the-shelf implementation). However, SVMs are basically template matchers, which work well on problems where the number of samples is large compared to the dimensionality (which is NOT the case for most real-world problems), but that's it.
But using SVM just because the optimization is convex, no matter what the quality of the final solution is, just blows my mind. Besides, since we now know how to optimize deep networks (thanks to Toronto's lab and their Deep Belief Networks), I think neural nets will soon gather some interest again.
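The "template matcher" characterization is visible in the kernel-SVM decision function: classification is a weighted similarity comparison of the input against stored training examples. A sketch with hypothetical fitted values (the support vectors, alphas, and gamma below are made up, not taken from any real trained model):

```python
import math

# A kernel SVM's decision function is a weighted comparison of the input
# against stored training examples ("templates"): hence "template matcher".
def rbf(u, v, gamma=0.5):
    # Gaussian similarity: 1.0 for identical points, decaying with distance.
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(u, v)))

def decision(x, support_vectors, alphas, labels, b=0.0):
    return sum(a * y * rbf(sv, x)
               for sv, a, y in zip(support_vectors, alphas, labels)) + b

# Hypothetical fitted values: two stored templates, one per class.
svs = [[1.0, 1.0], [-1.0, -1.0]]
alphas = [1.0, 1.0]
labels = [1, -1]
print(decision([0.9, 1.1], svs, alphas, labels))  # positive: near template 1
```

Because every prediction sums over the stored support vectors, the model's capacity is tied to how well the training samples cover the input space, which is the sample-count-versus-dimensionality complaint above.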
My 2 neurons.
Re:Interesting, but... (Score:1, Interesting)
It was pretty interesting indeed, but it's more about emulating the basic features of perception (mapping input patterns onto symbols), a necessary first step, than about intelligence itself.
Anyone interested in consciousness and human intelligence (the next step) should consider reading Douglas Hofstadter's latest book "I am a strange loop".
Re: been there, done that... (Score:4, Interesting)
Browse the ToCs of some recent journals and conference proceedings on ML, RL, EC, NN (machine learning, reinforcement learning, evolutionary computation, neural networks).