
Cognitive Scientist David Rumelhart Dies At 68

dzou writes "David Rumelhart, a pioneer in building computer models of cognition and behavior, has died at the age of 68. Rumelhart conducted early research on artificial neural networks and helped develop the idea that cognition can be modeled through the interaction of many neuron-like units. In the 1980s, he was instrumental in developing neural networks that could learn to process information. At the time, although researchers understood how to train networks to solve linearly separable problems (like an AND gate), those networks could not solve linearly inseparable problems (like XOR), which would be crucial for modeling human cognitive processes. Rumelhart and his colleagues demonstrated that networks that solve these types of problems can be trained using the backpropagation learning algorithm. In turn, this has led to breakthroughs in areas like speech recognition and image processing, as well as models of human speech perception, language processing, vision, and higher-level cognition. Rumelhart suffered from Pick's disease in the last years of his life. An annual award in cognitive science, the David E. Rumelhart Prize, is given in his honor by the Glushko-Samuelson Foundation."
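The linear-separability point in the summary can be made concrete with a small sketch (my own illustrative weights and hyperparameters, not Rumelhart's original formulation): a single threshold unit cannot compute XOR, but a two-layer network can, because XOR(a, b) = AND(OR(a, b), NAND(a, b)) and each of those three functions is linearly separable. The same shape of network, built from smooth sigmoid units, can then learn such weights from data via backpropagation.

```python
import numpy as np

# --- Part 1: a hand-wired two-layer network computes XOR exactly. ---
# Hidden unit 1 computes OR, hidden unit 2 computes NAND; the output
# unit computes AND of the two. All weights here are illustrative.

def step(z):
    return (z > 0).astype(int)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])

W1 = np.array([[1.0, -1.0],
               [1.0, -1.0]])
b1 = np.array([-0.5, 1.5])    # thresholds for OR and NAND
w2 = np.array([1.0, 1.0])
b2 = -1.5                     # threshold for AND

h = step(X @ W1 + b1)
xor_out = step(h @ w2 + b2)
print(xor_out)  # -> [0 1 1 0]

# --- Part 2: the same idea with sigmoid units, trained by
# backpropagation (plain full-batch gradient descent on squared error).

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
y = np.array([0.0, 1.0, 1.0, 0.0])

Wa = rng.normal(size=(2, 4)); ba = np.zeros(4)  # 4 hidden units
wb = rng.normal(size=4);      bb = 0.0
lr = 0.5
losses = []

for _ in range(2000):
    # forward pass
    hid = sigmoid(X @ Wa + ba)       # (4, 4)
    p = sigmoid(hid @ wb + bb)       # (4,)
    err = p - y
    losses.append(float(np.mean(err ** 2)))
    # backward pass: chain rule through output, then hidden layer
    dp = err * p * (1 - p)
    dWb = hid.T @ dp
    dhid = np.outer(dp, wb) * hid * (1 - hid)
    # gradient-descent updates
    wb -= lr * dWb
    bb -= lr * dp.sum()
    Wa -= lr * (X.T @ dhid)
    ba -= lr * dhid.sum(axis=0)
```

The hand-wired half shows the representational claim (a hidden layer makes XOR expressible); the training half shows the learning claim (backpropagation drives the error down from random initial weights).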
  • by Anonymous Coward on Saturday March 19, 2011 @01:09PM (#35542692)

    Back when I was in university, I was taking Cognitive Psychology (a 2nd-year course) and Artificial Intelligence (a 3rd-year course) at the same time. It was never mentioned in my CS text, but my AI text described the importance the Cognitive Psych. folk placed on the actions of neurons, and the view (among the Computer Science folk) that studying the actions of individual neurons in high detail, trying to understand how thoughts are created within an individual neuron, is much like trying to understand flight by doing an intense, thorough study of a bird's feathers. It's nice to know about feathers and neurons, but there are other things going on that might be a bit more important and that are being totally missed. The AI prof. mentioned this; the CP prof. didn't.

    Otherwise, there was a massive amount of overlap in the two courses. (One guy I knew from both classes mentioned this to me at exam time, and I agreed, with one bit of caution: some of the terms are different, even though they describe processes that are identical.) But it was good: study once, and take two different tests.

    I also remember being given a problem to solve in the CP class ("just think about the problem and try to work out a solution, not for homework or anything, just think about it"), and in the same week being given the same problem in AI, except instead of solving it yourself, you had to write three programs and get the computer to solve it in three different ways (depth-first search, breadth-first search, best-first search). That one was homework. I worked it out, but the solution the computer came up with was a tautology. I initially looked at the problem and went through the steps the computer took, trying to debug what went wrong; about 3/4 of the way through, I realised that it was a tautology. Oh, BTW, it was the Missionaries and Cannibals problem.
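    The breadth-first version of the homework described above might be sketched like this (my own minimal reconstruction, not the original assignment). A state is (missionaries on the left bank, cannibals on the left bank, boat side), with 0 = left and 1 = right; the boat carries one or two people, and no bank may ever have missionaries outnumbered by cannibals.

```python
from collections import deque

def valid(m, c):
    """No bank may have missionaries outnumbered by cannibals."""
    if not (0 <= m <= 3 and 0 <= c <= 3):
        return False
    if m and m < c:                    # left bank
        return False
    if (3 - m) and (3 - m) < (3 - c):  # right bank
        return False
    return True

def bfs(start=(3, 3, 0), goal=(0, 0, 1)):
    """Breadth-first search; returns the shortest list of states."""
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        m, c, b = path[-1]
        # The boat carries 1 or 2 people away from its current bank.
        sign = -1 if b == 0 else 1
        for dm, dc in [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]:
            state = (m + sign * dm, c + sign * dc, 1 - b)
            if valid(state[0], state[1]) and state not in seen:
                seen.add(state)
                frontier.append(path + [state])
    return None

solution = bfs()
print(len(solution) - 1)  # number of river crossings
```

    Because BFS explores states level by level, the first path it returns is a shortest one; the classic 3-missionary, 3-cannibal puzzle needs 11 crossings. A depth-first variant would swap the deque's popleft for pop, and a best-first variant would use a priority queue keyed on a heuristic such as the number of people still on the left bank.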

  • Underappreciated (Score:4, Informative)

    by RandCraw ( 1047302 ) on Saturday March 19, 2011 @10:14PM (#35546836)

    I think Dr Rumelhart should be remembered as a true founder of AI. While he wasn't there at the beginning (Dartmouth, 1956), his work with McClelland, Hinton and Williams resurrected not just neural nets but in many ways the entire field of AI.

    In 1969, Marvin Minsky and Seymour Papert published "Perceptrons," which emphasized the inadequacy of simple single-layer NNs and effectively discouraged further funding of NNs, thereby directing the mainstream of AI research monies into symbols and logic for the next 20 years. But by the mid-1980s, it had become clear that AI was not living up to the promises of its leading lights to quickly produce a thinking machine. It was Dr Rumelhart (among others, like Grossberg) who further investigated and developed NNs, integrating novel learning techniques (e.g. backpropagation) and thus grounding the field of AI more mathematically, or as it was also known then, subsymbolically.

    Since Dr Rumelhart's work in the 1980s, subsymbolic AI has risen in importance as symbolic AI has fallen. Today, virtually all AI research employs engineering techniques governed by increasingly sophisticated mathematical principles and integrates feedback and learning, just as his work did. While I wouldn't claim Dr Rumelhart to be the father of modern AI, I would point out that the mathematics and machine learning central to his work correctly anticipated the current grounding of AI in both learning and numerical computation, which reshaped and resurrected the field just as symbolic AI degenerated into the "AI Winter" of the 1980s. Today's numerical AI researcher resembles subsymbolicists like Rumelhart significantly more than the renowned founders of AI, symbolicists all.
