AI Science

Cognitive Scientist David Rumelhart Dies At 68

dzou writes "David Rumelhart, a pioneer in building computer models of cognition and behavior, has died at the age of 68. Rumelhart conducted early research on artificial neural networks and helped develop the idea that cognition can be modeled through the interaction of many neuron-like units. In the 1980s, he was instrumental in developing neural networks that could learn to process information. At the time, although researchers understood how to train networks to solve linearly separable problems (like an AND gate), those networks could not solve linearly inseparable problems (like XOR), which would be crucial for modeling human cognitive processes. Rumelhart and his colleagues demonstrated that networks that solve these types of problems can be trained using the backpropagation learning algorithm. In turn, this has led to breakthroughs in areas like speech recognition and image processing, as well as models of human speech perception, language processing, vision, and higher-level cognition. Rumelhart suffered from Pick's disease in the last years of his life. An annual award in cognitive science, the David E. Rumelhart Prize, is given in his honor by the Glushko-Samuelson Foundation."
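The XOR point in the summary can be made concrete. Below is a minimal sketch (pure Python; the layer size, learning rate, and epoch count are arbitrary choices, not anything from Rumelhart's papers) of a one-hidden-layer network trained by backpropagation to compute XOR -- the kind of linearly inseparable problem a single-layer perceptron cannot learn.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# XOR truth table: (inputs, target)
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

H = 4  # hidden units (an assumption for reliable convergence; 2 suffice in principle)
w_h = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b_h = [random.uniform(-1, 1) for _ in range(H)]
w_o = [random.uniform(-1, 1) for _ in range(H)]
b_o = random.uniform(-1, 1)

lr = 0.5
for _ in range(20000):
    for x, t in data:
        # forward pass
        h = [sigmoid(sum(w_h[j][i] * x[i] for i in range(2)) + b_h[j]) for j in range(H)]
        y = sigmoid(sum(w_o[j] * h[j] for j in range(H)) + b_o)
        # backward pass: error deltas via the chain rule (squared-error loss)
        d_o = (y - t) * y * (1 - y)
        d_h = [d_o * w_o[j] * h[j] * (1 - h[j]) for j in range(H)]
        # gradient-descent weight updates
        for j in range(H):
            w_o[j] -= lr * d_o * h[j]
            b_h[j] -= lr * d_h[j]
            for i in range(2):
                w_h[j][i] -= lr * d_h[j] * x[i]
        b_o -= lr * d_o

def predict(x):
    h = [sigmoid(sum(w_h[j][i] * x[i] for i in range(2)) + b_h[j]) for j in range(H)]
    return sigmoid(sum(w_o[j] * h[j] for j in range(H)) + b_o)

for x, t in data:
    print(x, round(predict(x)), t)
```

The key step is computing the hidden-layer deltas from the output delta; without a hidden layer there is nothing to propagate back to, which is why the single-layer networks of the pre-backprop era were stuck on problems like XOR.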


Comments Filter:
  • by Anonymous Coward

    and yet paris hilton lives to be a blight on the human race.

    • My mother has the same disease and she'll be 71 this year. It's a form of dementia that strikes quite early, at 65 on average. It differs from Alzheimer's in that you don't lose memory, sense of time, or awareness of where you are. Instead it changes your behaviour and character: you can either become really withdrawn, or become strange and do inappropriate things in public.

  • http://gametheoryninja.com/2010/10/01/yes-singularity-on-the-big-bang-theory-woohoo/ [gametheoryninja.com]

    I am sure he is (or was) very disappointed that he could not upload his consciousness into a robot body... *sigh*

  • by Anonymous Coward

    Back when I was in university, I was taking Cognitive Psychology (a 2nd year course), and Artificial Intelligence (a 3rd year course) at the same time. It was never mentioned in my CS text, but my AI text described the importance placed on the actions of neurons by the Cognitive Psych. folk, and the view (among the Computer Science folk) that studying in high detail the actions of individual neurons and trying to understand how thoughts are created within an individual neuron is much like understanding fli

  • by zapatero ( 68511 ) on Saturday March 19, 2011 @06:00PM (#35545096) Journal

    In speech recognition and text-to-speech, neural networks were the de rigueur fashion, along with Lisp. But in the end it was Hidden Markov Models, and even Bayesian modelling, that ended up solving these AI-ish ventures.

    Neural networks influenced a great deal of collateral and tangential research, but the neural networks themselves really went nowhere. They just weren't usable, not even for machine learning -- at least not in the "pure form".

    • by tgv ( 254536 )

      Indeed, but we also have to recognize that HMMs and all more modern "number crunching" techniques still haven't reached human-level performance, at least not in general language processing, and do not offer anything to solve other problems, such as reasoning. Rumelhart can be credited with reviving a large field of research, and for that he deserves our respect.

      • Indeed, but we also have to recognize that HMMs and all more modern "number crunching" techniques still haven't reached human-level performance, at least not in general language processing, and do not offer anything to solve other problems, such as reasoning.

        I submit to you, sir, a computer that had better-than-the-best human performance in the arena of general language processing and reasoning.
        http://www.techrepublic.com/blog/geekend/ibm-watson-supercomputer-beats-jeopardy-champs-in-practice-round/6546 [techrepublic.com]

        • by tgv ( 254536 )

          While technologically impressive, Watson is mainly association. It doesn't do much in terms of language processing; its main strength lies in the speed with which it can search, evaluate, and prune alternative associations. For psychology it's as interesting as Deep Blue.

        • by dzou ( 896434 )
          Watson also wasn't doing speech recognition (the original topic of this thread) when it played Jeopardy. It received the clues as text. They're working on it though [itjungle.com].
  • Underappreciated (Score:4, Informative)

    by RandCraw ( 1047302 ) on Saturday March 19, 2011 @09:14PM (#35546836)

    I think Dr Rumelhart should be remembered as a true founder of AI. While he wasn't there at the beginning (Dartmouth, 1956), his work with McClelland, Hinton and Williams resurrected not just neural nets but in many ways the entire field of AI.

    In 1969, Marvin Minsky and Seymour Papert published "Perceptrons", which emphasized the inadequacy of simple single-layer NNs and effectively discouraged further funding of NNs, thereby directing the mainstream of AI research money into symbols and logic for the next 20 years. But by the mid 1980s, it had become clear that AI was not living up to the promises of its leading lights to quickly produce a thinking machine. It was Dr Rumelhart (among others, like Grossberg) who further investigated and developed NNs, integrating novel training techniques (e.g. backpropagation) and thus grounding the field of AI more mathematically, or as it was also known then, subsymbolically.

    Since Dr Rumelhart's work in the 1980s, subsymbolic AI has risen in importance as symbolic AI has fallen. Today, virtually all AI research employs engineering techniques governed by increasingly sophisticated mathematical principles and integrates feedback and learning, just as his work did. While I wouldn't claim Dr Rumelhart to be the father of modern AI, I would point out that the mathematics and machine learning central to his work correctly anticipated the current grounding of AI in both learning and numerical computation, which reshaped and resurrected the field just as symbolic AI degenerated into the "AI Winter" of the 1980s. Today's numerical AI researcher resembles subsymbolicists like Rumelhart significantly more than the renowned founders of AI, symbolicists all.

    • But can I talk to Watson? Only if I submit my question in the form of an answer, because that's what he's been trained on. The problem with statistics is that it can miss the "against all odds" truths. I think we need both statistical and symbolic approaches, a hybrid. Or use agents that use different approaches and let feedback from the user determine the most appropriate response for the particular context (and user preference)...

    • by tgv ( 254536 )

      I wish I could say the same about cognitive psychology. Rumelhart was --of course-- foremost a cognitive psychologist; AI latched on to ideas from him and likeminded colleagues. But where AI was very quick in recognizing the importance of emergent properties and subsymbolic computing and learning etc., many cognitive psychologists still think in symbolic terms and hard rules. The field consequently has hardly moved ahead since the 80s. To give an example from my own expertise: try to find a computational mo

  • Hamlet dies. Real people die just once. In this case it happened in the past relative to the utterance, so we use the past tense not the infinitive. He died. He is dead. Using the wrong tense does not make him less dead.

    Sophomoric excuses in 3... 2... 1....

    • by void*p ( 899835 )
      If you're going to be a pedant, at least know what you are talking about. Specifically, look up "infinitive". Also, use of the present tense does not imply that the action happened more than once.
  • Actually, linearly inseparable functions such as XOR can be learned by biologically plausible networks through the introduction of relatively small, biologically inspired alterations of Hebb's rule that take the effects of locationally constrained resources into account (i.e. the availability of the chemical building blocks of neurons [i.e. trophic factors]).
