Building Brainlike Computers

newtronic clues us in to an article in IEEE Spectrum by Jeff Hawkins (founder of Palm Computing), titled Why can't a computer be more like a brain? Hawkins brings us up to date on his latest endeavor, Numenta. He covers progress since his book On Intelligence and gives details on Hierarchical Temporal Memory (HTM), his model of how the neocortex works. Programming HTMs is different — you essentially feed them sensory data. Numenta has created a framework and tools, free in a "research release," that allow anyone to build and program HTMs.
  • by kripkenstein ( 913150 ) on Friday April 13, 2007 @01:01PM (#18720431) Homepage

    It's all been done before, perceptrons, multi-layered perceptrons, recurrent connections, etc, etc, etc...dunno why anybody would pay attention
    Well, yes and no. I think both you and the Numenta people are wrong about this (their claim being that AI's failing is that it ignores the brain). Here is my brief take on the history of AI and machine learning:

    First, AI ignored the brain. Then Neural Networks took off in the '80s, and during the '90s they were the 'hot thing' in AI and machine learning. Basically, by using some 'brain-like' considerations, flexible learning systems could be built. These include perceptrons, etc. Since then, however, neural networks have largely been made obsolete: from both a theoretical and a practical standpoint, methods like support vector machines and boosting are far better than neural networks; these are the current state of the art. And they return us to the 'old AI' approach of ignoring the brain, in that they are NOT 'brain-like' in any significant way. Rather, they are natural algorithms that arise once you have a mature theory of machine learning (which, one might argue, science now has, with VC theory and later developments).

    I tried to read the Numenta stuff, but really I fail to see the 'point' in it. Basically all I want is to see that their methods outperform support vector machines: show me that, and I will be an instant convert. Until then, I remain skeptical. (For anyone who hasn't met the 'it's all been done before' methods, a toy perceptron is sketched below.)
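    For context, the perceptron mentioned above is about as simple as 'brain-like' learning gets: a single threshold unit trained by nudging its weights toward each mistake. A minimal sketch in Python (the code and names are illustrative, not taken from any of the systems discussed):

        # Perceptron learning rule: w <- w + lr * (target - prediction) * x
        # Here it learns the linearly separable AND function.
        def predict(w, b, x):
            s = b + sum(wi * xi for wi, xi in zip(w, x))
            return 1 if s > 0 else 0

        def train(samples, lr=0.1, epochs=50):
            w, b = [0.0, 0.0], 0.0
            for _ in range(epochs):
                for x, target in samples:
                    err = target - predict(w, b, x)
                    w = [wi + lr * err * xi for wi, xi in zip(w, x)]
                    b += lr * err
            return w, b

        AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
        w, b = train(AND)
        print([predict(w, b, x) for x, _ in AND])  # expected: [0, 0, 0, 1]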
  • by MOBE2001 ( 263700 ) on Friday April 13, 2007 @01:08PM (#18720537) Homepage Journal
    Because of the neocortex's uniform structure, neuroscientists have long suspected that all its parts work on a common algorithm; that is, that the brain hears, sees, understands language, and even plays chess with a single, flexible tool. Much experimental evidence supports the idea that the neocortex is such a general-purpose learning machine. What it learns and what it can do are determined by the size of the neocortical sheet, what senses the sheet is connected to, and what experiences it is trained on. HTM is a theory of the neocortical algorithm.

    While I believe that the HTM is indeed a giant leap in AI (although I disagree with Numenta's Bayesian approach), I cannot help thinking that Hawkins is only addressing a small subset of intelligence. The neocortex is essentially a recognition machine but there is a lot more to brain and behavior than recognition. What is Hawkins' take on things like behavior selection, short and long-term memory, motor sequencing, motor coordination, attention, motivation, etc...?
  • by CogDissident ( 951207 ) on Friday April 13, 2007 @01:10PM (#18720561)
    He means it doesn't have desires and motives in a conventional sense. The way it works mathematically means that it seeks the lowest value (or highest, depending on the AI) for the next nodal jump, and finds a path that leads to the most likely solution (see the toy sketch below).

    This could be "converted" to traditional desires, meaning that if you taught it to find the most attractive woman, and gave it ranked values based on body features and what features are considered attractive in conjunction, it would "have" the "desire" to find the most beautiful woman in any given group.

    I'd say that researchers need to learn to put things into layman's terms, but really all we need are good editors to do that for them.
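    A toy version of the 'lowest value for the next nodal jump' idea described above: a greedy walk over a scored graph. This is a generic illustration with made-up costs, not Numenta's actual algorithm:

        # Greedy walk: from each node, jump to the neighbor with the lowest
        # cost, stopping at a local minimum where no jump improves the value.
        def greedy_walk(graph, costs, start):
            path, current = [start], start
            while True:
                neighbors = graph.get(current, [])
                if not neighbors:
                    break
                best = min(neighbors, key=lambda n: costs[n])
                if costs[best] >= costs[current]:
                    break
                current = best
                path.append(current)
            return path

        graph = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
        costs = {"a": 5, "b": 3, "c": 4, "d": 1}
        print(greedy_walk(graph, costs, "a"))  # ['a', 'b', 'd']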
  • Re:this is stupid (Score:3, Interesting)

    by cyphercell ( 843398 ) on Friday April 13, 2007 @01:45PM (#18721171) Homepage Journal
    http://www.transhumanist.com/volume1/moravec.htm [transhumanist.com]

    Ok, according to Moore's law we will get there with a transistor-based computer. I believe the idea is to create the hardware equivalent of a neuron, something like Asimov's positronic brain. Currently the computer is little more than a highly programmable calculator. The idea in this case is to create a computer that can learn, or repurpose its transistors/neurons.

    My colleagues and I have been pursuing that approach for several years. We've focused on the brain's neocortex, and we have made significant progress in understanding how it works. We call our theory, for reasons that I will explain shortly, Hierarchical Temporal Memory, or HTM. We have created a software platform that allows anyone to build HTMs for experimentation and deployment. You don't program an HTM as you would a computer; rather you configure it with software tools, then train it by exposing it to sensory data.

    The end goal is to create more advanced computers or software (a rough sketch of that configure-and-train workflow follows below). You'd do better venting your religious frustrations against scientists in the genetics industry, where the end goal is more advanced people or thoughts.
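    To make the quoted 'configure it, then train it on sensory data' workflow concrete, here is a rough sketch of what such an API might look like. The classes and method names are invented for illustration; this is not Numenta's actual research-release API:

        # Hypothetical HTM-style workflow: configure a hierarchy of regions,
        # then train it by streaming data through it instead of writing
        # explicit program logic.
        class Region:
            def __init__(self):
                self.patterns = []              # groupings learned from input

            def learn(self, pattern):
                if pattern not in self.patterns:
                    self.patterns.append(pattern)
                return self.patterns.index(pattern)  # compressed code upward

        class Hierarchy:
            def __init__(self, n_levels):
                # Configuration step: choose the topology, not the logic.
                self.levels = [Region() for _ in range(n_levels)]

            def train(self, sensor_stream):
                # Training step: expose the hierarchy to sensory frames; each
                # level learns patterns and passes a compressed code upward.
                for frame in sensor_stream:
                    code = frame
                    for level in self.levels:
                        code = level.learn(code)

        frames = [(0, 1, 1), (1, 1, 0), (0, 1, 1)]   # toy "sensory" input
        net = Hierarchy(n_levels=2)
        net.train(frames)
        print(len(net.levels[0].patterns))  # 2 distinct bottom-level patterns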

  • Re:this is stupid (Score:2, Interesting)

    by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Friday April 13, 2007 @01:47PM (#18721205) Homepage Journal

    The paragraph in question:

    "So, before you go off and read your king james edition of the bible, assuming your one of those blind-eyed, deaf-eared christians from the bible belt (and ooh boy, if you ever did any "real" research on the history of that thing you would know why so many people are becoming atheists), try using that brain you built yourself."

    What do you object to? His bigotry?

    A bigot can still have a point. If you research the history of the bible even just casually you discover that basically no part of it has managed to survive without being mangled.

    And of course there's the issue that there are no reputable and reliable references for the existence of Jesus, who created quite a stir during his brief life.

    Not to mention how heavily expurgated the bible is; for example, large fragments of the gospel according to Mary Magdalene have been uncovered. When are those going to be inserted into the bible?

    Even if you accept that the most significant events in the bible were real, you have to agree that the bible is at this point a horribly unreliable source.

  • by Anonymous Coward on Friday April 13, 2007 @02:39PM (#18722183)

    Neuroscientific theory [visitware.com] indicates that we will not be able to build truly brainlike computers until we have gone beyond serial, von-Neumann-bottleneck computers into the realm of massively parallel (maspar) hardware and software.

    Mind.Forth [sourceforge.net], a primitive but True AI, simulates the maspar human cortex by taking a few shortcuts based on the differences between neuronal wetware and computer hardware. For instance, Mind.Forth, unlike chatbots, has concepts. Whereas a brain will activate thousands of concept-neurons in parallel, Mind.Forth activates only the most recent instance of a concept, because computer hardware is more reliable in the short term than a single human nerve-fiber, which may be fatigued or even dead.

    AIMind-I.com [aimind-i.com] is another pretending-to-be-maspar artificial intelligence based on the original Mind.Forth design.

    Mind for MSIE [sourceforge.net] (for Microsoft Internet Explorer) is a JavaScript tutorial program (albeit still a primitive True AI) that shows you how spreading activation flits from concept to concept in a serial computer pretending to be a maspar brain.
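    For anyone curious, 'spreading activation' of the sort described above can be sketched in a few lines. This is a generic textbook version with an invented concept graph, not Mind.Forth's actual implementation:

        # Spreading activation: activating one concept passes a decaying
        # fraction of its energy along links to related concepts.
        def spread(links, seed, decay=0.5, threshold=0.1):
            activation = {seed: 1.0}
            frontier = [seed]
            while frontier:
                concept = frontier.pop()
                energy = activation[concept] * decay
                if energy < threshold:
                    continue
                for neighbor in links.get(concept, []):
                    if energy > activation.get(neighbor, 0.0):
                        activation[neighbor] = energy
                        frontier.append(neighbor)
            return activation

        links = {"dog": ["cat", "bone"], "cat": ["mouse"], "bone": []}
        print(spread(links, "dog"))
        # {'dog': 1.0, 'cat': 0.5, 'bone': 0.5, 'mouse': 0.25}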

  • by totierne ( 56891 ) on Friday April 13, 2007 @02:48PM (#18722355) Homepage Journal
    I take drugs for bipolar tendency and have had 5 nervous breakdowns, so I have some ideas about how the brain goes wrong. I am afraid that the search for a perfect machine-learning device may be a side track compared to explaining the mistakes the brain makes.

    I have an engineering degree and a masters specialising in machine learning, but that was 13 years ago; I would be delighted with more pointers to the state of the art.

    http://www.cnbc.cmu.edu/Resources/disordermodels/ [cmu.edu], on bipolar disorder and neural networks, seemed promising at one stage, but I had neither the time, energy, nor rights to read the latest papers. [The web page is dated 1996.]
  • by MOBE2001 ( 263700 ) on Friday April 13, 2007 @03:11PM (#18722799) Homepage Journal
    Actually, just as much evidence contradicts that hypothesis. We have very specific brain areas for generating and processing verbal data (Broca's and Wernicke's areas), and a very specific brain area for recognizing faces.

    In defence of Hawkins, note that he does not disagree (RTA) that there are specialized regions in the brain. However, this does not imply that the brain uses a different neural mechanism for different regions. It only means that a region that receives audio input will specialize in processing sounds. It all has to do with how the input and output fibers are connected. The cortex will rewire itself to accommodate any sensory modality. IMO, Hawkins is right in this regard. Even specialized areas of the visual cortex that show a gradation of recognition capabilities can be explained by a hierarchical system heavily dependent on feedback.
  • by Anonymous Coward on Friday April 13, 2007 @03:54PM (#18723471)
    Well, since neural networks achieve state-of-the-art results on numerous problems (see the work of Yann LeCun, Geoff Hinton, or Ronan Collobert, for instance), I wouldn't call them obsolete. They're also second in the Netflix Prize contest.

    People don't use neural networks because they're not as easy to train as SVMs (given tools like libSVM or equivalent). However, SVMs are basically template matchers (see the sketch below), which are good for problems where the number of samples is big compared to the dimensionality of the problem (which is NOT the case for real-world problems), but that's it.

    But using SVMs just because the optimization is convex, no matter the quality of the final solution, just blows my mind. Besides, since we now know how to optimize deep networks (thanks to Toronto's lab and their Deep Belief Networks), I think neural nets will soon gather some interest again.

    My 2 neurons.
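    The 'template matcher' jab above has a concrete meaning: a kernel SVM's decision function is a weighted comparison of the input against stored training examples (the support vectors). A minimal sketch with made-up, untrained weights:

        import math

        # Kernel SVM decision function: f(x) = sum_i alpha_i * y_i * K(x_i, x) + b.
        # Each support vector x_i is a stored "template" the input is matched
        # against; that is the sense in which an SVM is a template matcher.
        def rbf(u, v, gamma=1.0):
            return math.exp(-gamma * sum((a - c) ** 2 for a, c in zip(u, v)))

        def decide(support_vectors, labels, alphas, b, x):
            score = b + sum(a * y * rbf(sv, x)
                            for sv, y, a in zip(support_vectors, labels, alphas))
            return 1 if score > 0 else -1

        svs    = [(0.0, 0.0), (1.0, 1.0)]   # the stored "templates"
        labels = [-1, 1]
        alphas = [0.8, 0.8]
        print(decide(svs, labels, alphas, b=0.0, x=(0.9, 1.1)))  # 1: matches the +1 template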
  • by Anonymous Coward on Friday April 13, 2007 @03:57PM (#18723513)
    I read "On Intelligence".
    It was pretty interesting indeed, but it's more about trying to emulate the basic features of perception (mapping input patterns onto symbols), a necessary first step, than intelligence itself.

    Anyone interested in consciousness and human intelligence (the next step) should consider reading Douglas Hofstadter's latest book, "I Am a Strange Loop".
  • by Black Parrot ( 19622 ) on Saturday April 14, 2007 @01:50PM (#18733015)

    However, in the very specific field of machine learning, virtually all papers published are about support vector machines and similar methods
    Sorry, but that is simply wrong. No, laughably wrong.

    Browse the ToCs of some recent journals and conference proceedings on ML, RL, EC, NN.
