


Building Brainlike Computers

newtronic clues us in to an article in IEEE Spectrum by Jeff Hawkins (founder of Palm Computing), titled Why can't a computer be more like a brain? Hawkins brings us up to date with his latest endeavor, Numenta. He covers progress since his book On Intelligence and gives details on Hierarchical Temporal Memory (HTM), which is a platform for simulating neocortical activity. Programming HTMs is different — you essentially feed them sensory data. Numenta has created a framework and tools, free in a "research release," that allow anyone to build and program HTMs.
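Numenta's actual framework and node hierarchy are not shown in the summary, so as a loose, hypothetical illustration of the "you don't program rules, you feed it sensory data" idea, here is a toy sequence learner in Python. The class name `ToySequenceMemory` and its methods are invented for this sketch; a real HTM uses hierarchies of nodes and sparse distributed representations, not a flat first-order transition table.

```python
from collections import defaultdict, Counter

class ToySequenceMemory:
    """Toy illustration only: learns first-order transitions in a
    stream of discrete 'sensory' symbols and predicts the next one.
    The point is the training style -- the model is shaped entirely
    by the data stream it is fed, not by hand-written rules."""

    def __init__(self):
        self.transitions = defaultdict(Counter)

    def feed(self, stream):
        # Count how often each symbol is followed by each other symbol.
        for prev, nxt in zip(stream, stream[1:]):
            self.transitions[prev][nxt] += 1

    def predict(self, symbol):
        # Predict the most frequent successor seen so far, if any.
        seen = self.transitions.get(symbol)
        if not seen:
            return None
        return seen.most_common(1)[0][0]

mem = ToySequenceMemory()
mem.feed("abcabcabd")    # a repeated pattern with one deviation
print(mem.predict("a"))  # → b
print(mem.predict("b"))  # → c  ('c' follows 'b' more often than 'd')
```

The deviation at the end of the stream ("abd") is outvoted by the two earlier "abc" repetitions, which is the crudest possible version of prediction from learned temporal statistics.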
This discussion has been archived. No new comments can be posted.

  • Interesting, but... (Score:5, Informative)

    by Bob Hearn ( 61879 ) on Friday April 13, 2007 @12:59PM (#18720379) Homepage
    Hawkins' book On Intelligence is interesting reading. There are a lot of good ideas in there. From my perspective as an AI / neuroscience researcher, the main weakness in his approach is that he only thinks about the cortex, whereas many other brain structures, notably the basal ganglia, are increasingly becoming implicated as having a fundamental role in intelligence.

    This quote from the article is telling:

    HTM is not a model of a full brain or even the entire neo-cortex. Our system doesn't have desires, motives, or intentions of any kind. Indeed, we do not even want to make machines that are humanlike. Rather, we want to exploit a mechanism that we believe to underlie much of human thought and perception. This operating principle can be applied to many problems of pattern recognition, pattern discovery, prediction and, ultimately, robotics. But striving to build machines that pass the Turing Test is not our mission.
    Well, my goal is to build machines that pass the Turing Test, so I have to think about more than cortex. But more generally, one might wonder how much of intelligence it is possible to capture with a system that "doesn't have desires, motives, or intentions of any kind".
  • Re:this is stupid (Score:5, Informative)

    by DragonWriter ( 970822 ) on Friday April 13, 2007 @01:11PM (#18720583)

    I'm just saying that the human brain is a thing made by god, and we can't copy it.

    How does that follow? Granting, for the sake of discussion, that everything in the natural universe, including brains, was created by God, that hardly implies that we can't copy brains. We can reproduce many naturally occurring things, after all, through understanding their structure and composition.

    Diamonds are things made by God, and we can copy them.
  • by simm1701 ( 835424 ) on Friday April 13, 2007 @01:17PM (#18720667)
    Still based on birds though.

    Early jumbo jets used the landings of pigeons as a basis, for example — those techniques are still used.
  • by Bob Hearn ( 61879 ) on Friday April 13, 2007 @01:28PM (#18720831) Homepage
    Yes, that's a big part of it. The basal ganglia form a giant reinforcement-learning system in the brain. Cortex on its own can perhaps learn to build hierarchical representations of sensory data, as Hawkins argues. But it can't learn how to perform actions that achieve goals without the basal ganglia. And in fact, there is a lot of evidence suggesting that sensory representations are refined and developed based on what is relevant to the brain's behavioral goals — that the cortico-basal-ganglia loop contributes to sensory representation as well as to motor control, planning, intention, etc.
  • by yali ( 209015 ) on Friday April 13, 2007 @02:17PM (#18721795)

    Much experimental evidence supports the idea that the neocortex is such a general-purpose learning machine.

    I don't think that is anywhere close to representing the scientific consensus. A lot of scientists believe that the brain is specially adapted to solving specific problems [ucsb.edu] that were important for our ancestors' survival. For example, humans seem to solve logic problems involving social exchange [ucsb.edu] in very different ways, and using different neural circuitry, than problems that have the same formal-logical structure but that don't involve detecting social cheaters.

  • by kripkenstein ( 913150 ) on Saturday April 14, 2007 @02:30PM (#18733367) Homepage
    Hmm, I see that I might have been easily misunderstood. I meant to say that SVMs dominated the field of classification. Obviously ML journals are full of other topics (unsupervised learning, etc.). But the great majority of publications in classification are about SVMs and related tools (boosting, etc.). At least in the journals I read (JML, JMLR, for example).
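The comment is about SVMs dominating classification research at the time. As a self-contained toy sketch of that classifier family (not any particular published system; serious SVM work of that era used kernels and dedicated solvers such as SMO/libsvm), here is a tiny linear SVM trained by stochastic sub-gradient descent on the hinge loss, Pegasos-style. The function name `train_svm`, the toy data, and the constants are all invented for illustration, and the bias term is omitted since the toy data is centered on the origin.

```python
import random

random.seed(1)

def train_svm(data, lam=0.01, epochs=200):
    """Pegasos-style stochastic sub-gradient descent on the
    L2-regularized hinge loss, for 2-D inputs with labels +/-1."""
    w = [0.0, 0.0]
    t = 0
    for _ in range(epochs):
        random.shuffle(data)
        for x, y in data:
            t += 1
            eta = 1.0 / (lam * t)
            margin = y * (w[0] * x[0] + w[1] * x[1])
            w = [wi * (1 - eta * lam) for wi in w]        # regularizer step
            if margin < 1:                                # hinge-loss step
                w = [wi + eta * y * xi for wi, xi in zip(w, x)]
    return w

# Linearly separable toy data, centered on the origin.
data = [((2.0, 2.0), 1), ((3.0, 2.5), 1), ((2.5, 3.0), 1),
        ((-2.0, -2.0), -1), ((-3.0, -1.5), -1), ((-2.5, -3.0), -1)]
w = train_svm(list(data))

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] >= 0 else -1

print(all(predict(x) == y for x, y in data))  # → True
```

The hinge loss only penalizes points inside the margin, which is what gives SVM solutions their sparse, margin-maximizing character; boosting and the other tools the comment mentions attack classification from a different angle but competed in the same journals.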
