
Building Brainlike Computers

newtronic clues us to an article in IEEE Spectrum by Jeff Hawkins (founder of Palm Computing), titled "Why can't a computer be more like a brain?" Hawkins brings us up to date with his latest endeavor, Numenta. He covers progress since his book On Intelligence and gives details on Hierarchical Temporal Memory (HTM), which is a platform for simulating neocortical activity. Programming HTMs is different — you essentially feed them sensory data. Numenta has created a framework and tools, free in a "research release," that allow anyone to build and program HTMs.
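As a rough illustration of that programming model -- training by exposure to data streams rather than by writing rules -- here is a generic toy sequence learner (purely illustrative; this is not Numenta's API or HTM itself):

    # Toy "program by feeding data" sketch: memorize frequent short
    # sequences from a sensory stream and predict continuations.
    from collections import Counter

    class SequenceMemory:
        def __init__(self, order=2):
            self.order = order
            self.counts = Counter()

        def feed(self, stream):
            # slide a window over the stream and count observed patterns
            for i in range(len(stream) - self.order):
                self.counts[tuple(stream[i:i + self.order + 1])] += 1

        def predict(self, context):
            # most frequently seen continuation of this context, if any
            matches = {seq[-1]: c for seq, c in self.counts.items()
                       if seq[:-1] == tuple(context)}
            return max(matches, key=matches.get) if matches else None

    mem = SequenceMemory(order=2)
    mem.feed("abcabcabd")
    print(mem.predict("ab"))  # 'c' (seen twice) beats 'd' (seen once)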
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward on Friday April 13, 2007 @11:52AM (#18720251)
    Because it would signal the end of civilization...if computers can look like women (porn), feel like women (Realdolls), and think like women (have a brain, at least in some cases), then all procreation would cease and humans would suffer the same fate as the dinosaurs.
    • by morari ( 1080535 ) on Friday April 13, 2007 @12:05PM (#18720475) Journal
      Mystery solved! Dinosaurs went extinct because they developed super-sexy "DinoBots" and thus became disinterested in actual sex...
      • by trongey ( 21550 )

        Mystery solved! Dinosaurs went extinct because they developed super-sexy "DinoBots" and thus became disinterested in actual sex...

        I don't know how to break this to you gently, so I'll just be blunt: if you're someone who is capable of developing super-sexy bots, then it doesn't really matter whether you're interested in actual sex or not.
    • by Bluesman ( 104513 ) on Friday April 13, 2007 @12:08PM (#18720525) Homepage
      Careful... I don't think you want your super-sexy RealDoll to think like an actual woman.

      That is, unless you want your old doll to get jealous of the new one and steal half your money and burn your house down.
    • by Xtravar ( 725372 )
      1. Design a computer that thinks "like a woman"
      2. Have computer lock itself in the bathroom, crying.
      3. ???
      4. End of civilization?
    • by 0racle ( 667029 )
      You got metal fever boy, metal fever!

      DON'T DATE ROBOTS!
    • Eh... Actually, we (guys) would end up having more sex with real chicks, not less, because they would become afraid of the competition. When the demand becomes saturated by computerized dolls, women will go to extraordinary lengths to get laid by a real dude. --a real dude
      • by fyngyrz ( 762201 ) *

        When the demand becomes saturated by computerized dolls, women will go to extraordinary lengths to get laid by a real dude.

        Oh, I don't know. When she can have a doll that has size-as-you-please parts, is 100% attentive, never gets tired, soft, rude, or disinterested, never puts her aside for beer, football, or poker, never pines for or asks for sex acts she doesn't prefer, doesn't put her at risk for disease, pregnancy, heartbreak, or political argument, and never looks at other women... she may not be all that interested.

    • by 3waygeek ( 58990 )
      Well, we've already got computers that look like beavers [yourpsychogirlfriend.com], so it won't be long now...
  • ...but it's all in my head!
  • by alberion ( 1086629 ) on Friday April 13, 2007 @11:57AM (#18720345)
    ...comps get lazy and start reading /. instead of working?
  • Interesting, but... (Score:5, Informative)

    by Bob Hearn ( 61879 ) on Friday April 13, 2007 @11:59AM (#18720379) Homepage
    Hawkins' book On Intelligence is interesting reading. There are a lot of good ideas in there. From my perspective as an AI / neuroscience researcher, the main weakness in his approach is that he only thinks about the cortex, whereas many other brain structures, notably the basal ganglia, are increasingly becoming implicated as having a fundamental role in intelligence.

    This quote from the article is telling:

    HTM is not a model of a full brain or even the entire neo-cortex. Our system doesn't have desires, motives, or intentions of any kind. Indeed, we do not even want to make machines that are humanlike. Rather, we want to exploit a mechanism that we believe to underlie much of human thought and perception. This operating principle can be applied to many problems of pattern recognition, pattern discovery, prediction and, ultimately, robotics. But striving to build machines that pass the Turing Test is not our mission.
    Well, my goal is to build machines that pass the Turing Test, so I have to think about more than cortex. But more generally, one might wonder how much of intelligence it is possible to capture with a system that "doesn't have desires, motives, or intentions of any kind".
    • by CogDissident ( 951207 ) on Friday April 13, 2007 @12:10PM (#18720561)
      He means it doesn't have desires and motives in a conventional sense. The way it works mathematically means that it seeks the lowest value (or highest, depending on the AI) for the next nodal jump, and finds a path that leads to the most likely solution.

      This could be "converted" to traditional desires, meaning that if you taught it to find the most attractive woman, and gave it ranked values based on body features and what features are considered attractive in conjunction, it would "have" the "desire" to find the most beautiful woman in any given group.

      I'd say that researchers need to learn to put things into layman's terms, but all we need are good editors to put it into simpler terms, really.
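      The "ranked values" bit is just a weighted feature score under the hood. A toy sketch (feature names and weights invented for illustration, nothing from Numenta):

          # Illustrative only: score candidates by hand-picked weighted
          # features and "desire" the one with the highest score.
          def most_desired(candidates, weights):
              # each candidate is a dict of feature -> value in [0, 1]
              return max(candidates,
                         key=lambda c: sum(w * c.get(f, 0.0)
                                           for f, w in weights.items()))

          weights = {"symmetry": 2.0, "proportion": 1.5}   # made-up weights
          people = [{"symmetry": 0.9, "proportion": 0.7},
                    {"symmetry": 0.6, "proportion": 0.95}]
          print(most_desired(people, weights))  # picks the first candidate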
    • What do you think of his claim that the neocortex is built out of completely identical modules, which just end up getting programmed differently for their different functions? I'm not a neuroscientist, and reading the book, I found it difficult to judge how reliable a lot of his claims were, because he often failed to say what the evidence was. I also wasn't always sure which statements were controversial, which were his idiosyncratic ideas, etc.
      • Like a lot of neuroscience, that can be argued either way at present. Clearly there are some differences between the cortical regions, some of which are genetically determined, and others of which might arise through experience. Primary visual cortex, for example, is highly specialized. Anterior (frontal) cortex integrally involves the basal ganglia for its function; posterior cortex does so only indirectly. But does all of cortex do essentially the same thing? We'd all love to know the answer to that one…
    • I guess what you're asking is how much learning a system can achieve if it has no motivation. That's what you're left with if you don't put in "desires, motives or intentions...".

      It makes me think of fetuses. Isn't there learning before birth? I remember videos of fetuses sucking their thumbs and reacting to the light from the fiber-optic camera. Clearly they have sensation (sucking their own thumbs) and curiosity (reacting to outside stimuli). Is that what is missing?

      I don't know. I'm not in the field…

      • by Bob Hearn ( 61879 ) on Friday April 13, 2007 @12:28PM (#18720831) Homepage
        Yes, that's a big part of it. The basal ganglia form a giant reinforcement-learning system in the brain. Cortex on its own can perhaps learn to build hierarchical representations of sensory data, as Hawkins argues. But it can't learn how to perform actions that achieve goals without the basal ganglia. And in fact, there is a lot of evidence suggesting that sensory representations are refined and developed based on what is relevant to the brain's behavioral goals -- that the cortico-basal-ganglia loop contributes to sensory representation as well as to motor control, planning, intention, etc.
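        To make "reinforcement learning" concrete, here is a minimal tabular Q-learning step (purely illustrative; the names and parameters are mine, not a model of the basal ganglia):

            import random

            def q_learning_step(Q, state, actions, reward_fn, next_state_fn,
                                alpha=0.1, gamma=0.9, epsilon=0.1):
                # epsilon-greedy: mostly exploit the best-known action,
                # occasionally explore a random one
                if random.random() < epsilon:
                    action = random.choice(actions)
                else:
                    action = max(actions, key=lambda a: Q.get((state, a), 0.0))
                reward = reward_fn(state, action)
                next_state = next_state_fn(state, action)
                best_next = max(Q.get((next_state, a), 0.0) for a in actions)
                # temporal-difference update: nudge the estimate toward
                # reward plus discounted future value
                Q[(state, action)] = Q.get((state, action), 0.0) + alpha * (
                    reward + gamma * best_next - Q.get((state, action), 0.0))
                return next_state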
    • Off-Topic (Score:4, Funny)

      by oringo ( 848629 ) on Friday April 13, 2007 @12:13PM (#18720619)
      Please take your professional/scientific reviews to real scientific journals. Only bitter/ignorant jokes are acceptable on /.
    • From my perspective as an AI / neuroscience researcher, the main weakness in his approach is that he only thinks about the cortex...

      No disrespect, but don't you see the fact that none of you can agree on anything (within this domain) as a bad sign? You're all bright individuals. But it's very similar to the fact that philosophers don't uniformly agree on any philosophical viewpoint. That's because there's nothing concrete that can be said in that domain, as has already been understood for centuries by…

      • Eh? Modern neuroscience is still a fairly young field, and there are plenty of things that are pretty well accepted by the majority of researchers in the area. I can't list hundreds of topics that everyone accepts as 100%-for-sure answered, no debate, no takebacks, because science doesn't work that way.
        "This is the best model we have at present for this particular problem. Here is a list of problems with the theory (a, b, c...). Feel free to suggest refinements and contribute new analyses of it."
      • I agree with much of what you say. Yes, the brain was not manufactured from any model or draft. Yes, the fact that we can't all agree on anything is, at least, a discouraging sign.

        However, neuroscience is different from philosophy, because there is a real, physical object there to study. New experimental techniques appear every year. Eventually, we will know how brains work.

        Neuroscience is different from physics precisely because, as you have pointed out, brains are very messy things. Physicists are lucky…
      • The brain is an incredibly complex system that has been evolved and refined over millions of years. Just because progress is slow is no reason to think that progress is not being made.
    • Well, my goal is to build machines that pass the Turing Test, so I have to think about more than cortex. But more generally, one might wonder how much of intelligence it is possible to capture with a system that "doesn't have desires, motives, or intentions of any kind".

      Not to mention motor coordination, attention, short- and long-term memory, cerebellar processes, etc... It's a little more complicated than Hawkins would have us believe, especially if you don't already know the answers. Fortunately…
    • Re: (Score:2, Funny)

      by cain ( 14472 )

      Hawkins' book On Intelligence is interesting reading.

      Please go on.

      There are a lot of good ideas in there.

      Would you like it if they were not a lot of good ideas in there?

      From my perspective as an AI / neuroscience researcher, the main weakness in his approach is that he only thinks about the cortex, whereas many other brain structures, notably the basal ganglia, are increasingly becoming implicated as having a fundamental role in intelligence.

      Why do you say your perspective as an AI / neuroscience researcher?

    • by joto ( 134244 )

      the main weakness in his approach is that he only thinks about the cortex

      I guess what many of us want is a breakthrough.

      While it's possible that the brain really is irreducibly complex, and that to get anything useful you really have to emulate the full brain, that would be a pretty unique phenomenon in the history of science. Most inventions are not made by researchers sitting on their asses making up elaborate theories until they suddenly start to construct a Pentium processor or an F-15 fighter…

  • I see quite a few comments on how the development of such technology is a threat to human civilization, but on the contrary, it can mean that "humanness", or what makes us human (the way we think, etc.), can be propagated in the form of machines through the universe even after the end of our planet. I don't think I would be less human if my mind/thought process were moved from my natural system to an artificial one (say, a robot). Maybe this is the next step in evolution: evolving away from flesh and bones…
  • by sycodon ( 149926 ) on Friday April 13, 2007 @12:05PM (#18720471)
    Since they (scientists) don't really have a full understanding of how the brain works, it seems to me that building a computer to work like one is a little far-fetched.

    • Alchemy (Score:5, Insightful)

      by Weaselmancer ( 533834 ) on Friday April 13, 2007 @12:29PM (#18720853)

      Medievals didn't understand the atom or crystalline structures, but they still made carbonized steel for armour. They had the wrong ideas about exactly how metal became properly carbonized and tempered, but they still came up with correctly tempered spring-like steels (IIRC similar to tempered 1050) without getting any of the "why" of it right.

      I think someday we will be viewed as the medievals of AI. We occasionally make progress even though we really don't know what we're doing. Yet.

    • Re: (Score:2, Insightful)

      by noidentity ( 188756 )
      Richard Feynman's term "cargo cult science" [wikipedia.org] comes to mind.

      I think the educational and psychological studies I mentioned are examples of what I would like to call cargo cult science. In the South Seas there is a cargo cult of people. During the war they saw airplanes land with lots of good materials, and they want the same thing to happen now. So they've arranged to make things like runways, to put fires along the sides of the runways, to make a wooden hut for a man to sit in, with two wooden pieces on his head like headphones and bars of bamboo sticking out like antennas (he's the controller), and they wait for the airplanes to land.
    • by Bearpaw ( 13080 ) on Friday April 13, 2007 @12:45PM (#18721163)
      Hawkins isn't trying to build a computer that works like a brain, any more than the Wright brothers tried to build a plane that flew like a bird. They didn't need to "fully understand" how birds fly to get off the ground. All they needed was enough understanding to take what they could use -- wings, for instance -- and adapt it to an approach that didn't require feathers, hollow bones, and so on.

      Hawkins and the people he's working with have come up with an approach that lets people explore possible uses of allowing a machine to learn in a way that's inspired by a process that may be part of how humans learn. They don't need a "full understanding" of how the human brain works to do that.

    • Think of it as black-box reverse engineering: there's this thing, with inputs and an output, and you're trying to make something that produces the same output given the same inputs. You don't need to precisely re-implement the black box. Birds, bees, butterflies, and bats all have different ways of implementing flying: convergent evolution. "Intelligence" is a far, far harder thing to implement. I'd argue that taking a black-box reverse-engineering path is the wrong way to do it, but I don't think it's impossible.
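      As a tiny example of the black-box idea: sample the box's input/output pairs and fit a stand-in model that reproduces them (the "box" below is a stand-in function, invented for the example):

          # Fit a line to observations of an opaque function by least squares.
          def blackbox(x):
              return 2.0 * x + 1.0   # unknown to the imitator

          xs = [float(i) for i in range(10)]
          ys = [blackbox(x) for x in xs]
          n = len(xs)
          mean_x, mean_y = sum(xs) / n, sum(ys) / n
          slope = sum((x - mean_x) * (y - mean_y)
                      for x, y in zip(xs, ys)) / sum((x - mean_x) ** 2
                                                     for x in xs)
          intercept = mean_y - slope * mean_x
          print(slope, intercept)  # recovers 2.0 and 1.0 without opening the box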
  • by MOBE2001 ( 263700 ) on Friday April 13, 2007 @12:08PM (#18720537) Homepage Journal
    Because of the neocortex's uniform structure, neuroscientists have long suspected that all its parts work on a common algorithm -- that is, that the brain hears, sees, understands language, and even plays chess with a single, flexible tool. Much experimental evidence supports the idea that the neocortex is such a general-purpose learning machine. What it learns and what it can do are determined by the size of the neocortical sheet, what senses the sheet is connected to, and what experiences it is trained on. HTM is a theory of the neocortical algorithm.

    While I believe that the HTM is indeed a giant leap in AI (although I disagree with Numenta's Bayesian approach), I cannot help thinking that Hawkins is only addressing a small subset of intelligence. The neocortex is essentially a recognition machine but there is a lot more to brain and behavior than recognition. What is Hawkins' take on things like behavior selection, short and long-term memory, motor sequencing, motor coordination, attention, motivation, etc...?
    • A giant leap in AI? I seriously doubt it, until given evidence to the contrary. But anyhow, let's consider TFA itself. Here is one example, from the quote you give from TFA:

      Much experimental evidence supports the idea that the neocortex is such a general-purpose learning machine.

      Actually, just as much evidence contradicts that hypothesis. We have very specific brain areas for generating and processing verbal data (Broca's and Wernicke's areas), and a very specific brain area for recognizing faces. There are…
      • by MOBE2001 ( 263700 )
        Actually, just as much evidence contradicts that hypothesis. We have very specific brain areas for generating and processing verbal data (Broca's and Wernicke's areas), and a very specific brain area for recognizing faces.

        In defence of Hawkins, note that he does not disagree (RTA) that there are specialized regions in the brain. However, this does not imply that the brain uses a different neural mechanism for different regions. It only means that a region that receives audio input will specialize in processing audio…
        • Re: (Score:3, Insightful)

          Perhaps, perhaps... but it just doesn't seem likely. Some brain tasks are linear/feedforward (V1, for example), while tasks such as language are inherently nonlinear. Postulating a single mechanism for both seems nonintuitive to me. But I readily admit that neuroscience doesn't have a way to decide between the two possibilities at present.
          • Re: (Score:3, Insightful)

            by MOBE2001 ( 263700 )
            Some brain tasks are linear/feedforward (V1, for example), while tasks such as language are inherently nonlinear. Postulating a single mechanism for both seems nonintuitive to me.

            I agree. I doubt that Hawkins can use his HTM to recognize (let alone understand the meaning of) full sentences. For that, you need a hippocampus, i.e., the ability to hold things in short-term memory (and play them back internally) and to parse events using a variable time-scale mechanism. You also need a mechanism of attention which…
      • by abes ( 82351 )
        I agree that it remains to be seen whether there's any leap... I haven't looked over the material very thoroughly, and I could only watch 15 minutes of a talk he gave at Google, but from what I saw it didn't look especially new. There were a lot of grandiose claims, which always sets off alarms for me. If you have a new idea, state it simply. Don't tell me it's a major breakthrough, how it will change the world, etc.

        Even more alarms go off whenever I see someone try to "model the neocortex". I'm in the camp that we know…
    • Re: (Score:3, Informative)

      by yali ( 209015 )

      Much experimental evidence supports the idea that the neocortex is such a general-purpose learning machine.

      I don't think that is anywhere close to representing the scientific consensus. A lot of scientists believe that the brain is specially adapted to solving specific problems [ucsb.edu] that were important for our ancestors' survival. For example, humans seem to solve logic problems involving social exchange [ucsb.edu] in very different ways, and using different neural circuitry, than problems that have the same formal-logical structure…

  • Imitation doesn't result in the best engineering, even though Nature has invented amazing things.
    • Re: (Score:3, Informative)

      by simm1701 ( 835424 )
      Still based on birds, though.

      Early jumbo jets used the landings of pigeons as a basis, for example; those techniques are still used.
  • We build brain-like computers. Is it then possible to make insane computers? Sociopathic computers? Homicidal computers?
    • by masdog ( 794316 )
      I'm sorry, Lumpy, but I cannot allow you to think like that.
    • It's certainly possible. We make child soldiers and all sorts of things. I'm sure if we're brutal enough, we can make anything pretty psychotic.

      The key thing is that it is a complete working system; we're not "making" it do anything in the sense that we make a desktop computer do something. If we were, it wouldn't really be brain-like. Instead, we're causing something brain-like to have proto-experiences. When the hardware (and low-level software) gets to be far more brain-like, to the point where, from a logical top…
    • by joto ( 134244 )

      Is it then possible to make insane computers? Sociopathic computers? Homicidal computers?

      Short answer: yes

      Long answer: Yes. But what do you mean by insane? It could be argued that someone who is, e.g., a sociopath or homicidal is simply someone who deviates from the norm. (In fact, many people, including prominent psychiatrists, argue this view.) One can then argue how far people must deviate before it's deemed they have a psychiatric disorder. The typical answer given by psychiatrists today is that it…

  • Good timing for this one :/
  • by totierne ( 56891 ) on Friday April 13, 2007 @01:48PM (#18722355) Homepage Journal
    I take drugs for a bipolar tendency and have had 5 nervous breakdowns, so I have some ideas about how the brain goes wrong. I am afraid that the search for a perfect machine-learning device may be a side track compared to explaining the mistakes the brain makes.

    I have an engineering degree and a master's specialising in machine learning -- but that was 13 years ago. I would be delighted by more pointers to the state of the art.

    http://www.cnbc.cmu.edu/Resources/disordermodels/ [cmu.edu], on bipolar disorder and neural networks, seemed promising at one stage, but I had neither the time, energy, nor rights to read the latest papers. [The web page is dated 1996.]
  • Hasn't anybody read Dreyfus? It seems to me his critique of AI is the nail in the coffin for humanlike behavior. The basis of his argument is that much of the human brain's computation is non-representational, while everything a computer does is representational. Since computers must bind representations to things in order to manipulate them, they must contextualize the situation in order to determine how to apply representations... but in order to contextualize the situation you need to bind representations…
  • There is no lack of hard AI problems that could benefit immensely from a decent AI solution. Examples abound:
    1. Self driving vehicles as in the DARPA driving challenge
    2. Computer vision for autonomous robots
    3. Image classification systems (even for desktop apps)
    4. Voice recognition systems

    Numenta claims HTMs are very powerful, and yet instead of attacking one of these problems, either as a demo (so they can be swept up by Google in the blink of an eye) or for profit, Numenta is giving away a dev platform with…
    • You may well be right. It would certainly be good to see a demo of it solving a complex real-world recognition task that defies other approaches. The stick figure demo is a proof of concept, but nothing more.

      On the other hand, it's also possible that Hawkins has reasons to believe that it really does work, and just wants to seed the robot revolution by releasing this... Maybe his only real self-interest is in wanting to see intelligent robots/applications in his own lifetime, and he realizes that the best way…
  • I have been thinking about this for a while: all our brain does is run down a path to make a decision on weighted randoms, and re-sort the indexes and weights based on the feedback. Kinda like PageRank, but more sophisticated.

    For example, to 'decide' whether an object is a ball:

    import random

    def decide(object_is_round, sphere_is_perfect, pattern_is_checkered):
        if object_is_round and sphere_is_perfect and pattern_is_checkered:
            # weighted random pick: "soccer ball" is 10x as likely as "art"
            return random.choices(["soccer ball", "art"], weights=[10, 1])[0]
        return None

    And deciding on the feedback we get: if positive, bump the chosen label's weight; see the sketch below.
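    A minimal sketch of that feedback step (function name and step size are my own, just to illustrate):

        def update_weights(weights, chosen, positive, step=1):
            # positive feedback reinforces the chosen label's weight;
            # negative feedback decays it, with a floor of 1
            if positive:
                weights[chosen] += step
            else:
                weights[chosen] = max(1, weights[chosen] - step)

        weights = {"soccer ball": 10, "art": 1}
        update_weights(weights, "soccer ball", positive=True)  # -> weight 11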
