AI Going Nowhere?
jhigh writes "Marvin Minsky, co-founder of the MIT Artificial Intelligence Laboratory, is displeased with the progress in the development of autonomous intelligent machines. I found this quote more than a little amusing: '"The worst fad has been these stupid little robots," said Minsky. "Graduate students are wasting 3 years of their lives soldering and repairing robots, instead of making them smart. It's really shocking."'"
Text of Article (Score:3, Informative)
By Mark Baard
Story location: http://www.wired.com/news/technology/0,1282,58714,00.html
02:00 AM May. 13, 2003 PT
Will we ever make machines that are as smart as ourselves?
"AI has been brain-dead since the 1970s," said AI guru Marvin Minsky in a recent speech at Boston University. Minsky co-founded the MIT Artificial Intelligence Laboratory in 1959 with John McCarthy.
Such notions as "water is wet" and "fire is hot" have proved elusive quarry for AI researchers. Minsky accused researchers of giving up on the immense challenge of building a fully autonomous, thinking machine.
"The last 15 years have been a very exciting time for AI," said Stuart Russell, director of the Center for Intelligent Systems at the University of California at Berkeley, and co-author of an AI textbook, Artificial Intelligence: A Modern Approach.
Russell, who described Minsky's comments as "surprising and disappointing," said researchers who study learning, vision, robotics and reasoning have made tremendous progress.
AI systems today detect credit-card fraud by learning from earlier transactions. And computer engineers continue to refine speech recognition systems for PCs and face recognition systems for security applications.
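The fraud-detection idea in the paragraph above can be sketched in a few lines: learn the statistics of past transactions, then flag new ones that fall far outside them. The transaction amounts and the z-score cutoff below are invented for illustration; real systems learn from many features, not a single amount.

```python
# Toy sketch of learning a fraud signal from earlier transactions.
# The history, feature (amount), and cutoff are illustrative only.

def fit_baseline(amounts):
    """Learn mean and standard deviation of past legitimate amounts."""
    n = len(amounts)
    mean = sum(amounts) / n
    var = sum((a - mean) ** 2 for a in amounts) / n
    return mean, var ** 0.5

def is_suspicious(amount, mean, std, z_cutoff=3.0):
    """Flag a transaction that falls far outside the learned distribution."""
    return abs(amount - mean) > z_cutoff * std

history = [12.5, 40.0, 27.3, 8.9, 33.1, 19.4, 25.0, 30.2]
mean, std = fit_baseline(history)
print(is_suspicious(24.0, mean, std))   # typical amount: False
print(is_suspicious(950.0, mean, std))  # extreme outlier: True
```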
"We're building systems that detect very subtle patterns in huge amounts of data," said Tom Mitchell, director of the Center for Automated Learning and Discovery at Carnegie Mellon University, and president of the American Association for Artificial Intelligence. "The question is, what is the best research strategy to get (us) from where we are today to an integrated, autonomous intelligent agent?"
Unfortunately, the strategies most popular among AI researchers in the 1980s have come to a dead end, Minsky said. So-called "expert systems," which emulated human expertise within tightly defined subject areas like law and medicine, could match users' queries to relevant diagnoses, papers and abstracts, yet they could not learn concepts that most children know by the time they are 3 years old.
"For each different kind of problem," said Minsky, "the construction of expert systems had to start all over again, because they didn't accumulate common-sense knowledge."
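Minsky's point can be made concrete with a minimal sketch of the 1980s expert-system pattern: all the "knowledge" lives in hand-written if-then rules for one narrow domain (the symptoms and diagnoses below are invented), so none of it carries over to the next problem.

```python
# Minimal expert-system sketch: hand-entered rules for one narrow
# domain. Every rule here is an invented example, not real medicine.

RULES = [
    ({"fever", "cough"}, "possible flu"),
    ({"fever", "stiff neck"}, "possible meningitis -- see a doctor"),
    ({"sneezing", "itchy eyes"}, "possible allergy"),
]

def diagnose(symptoms):
    """Return every diagnosis whose required symptoms are all present."""
    return [d for required, d in RULES if required <= symptoms]

print(diagnose({"fever", "cough", "headache"}))  # ['possible flu']
```

Nothing in these rules would help with a legal query; as Minsky says, a new domain means starting over.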
Only one researcher has committed himself to the colossal task of building a comprehensive common-sense reasoning system, according to Minsky. Douglas Lenat, through his Cyc project, has directed the line-by-line entry of more than 1 million rules into a commonsense knowledge base.
"Cyc knows that trees are usually outdoors, that once people die they stop buying things, and that glasses of liquid should be carried right-side up," reads a blurb on the Cyc website. Cyc can use its vast knowledge base to match natural language queries. A request for "pictures of strong, adventurous people" can connect with a relevant image such as a man climbing a cliff.
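One of the Cyc examples quoted above ("once people die they stop buying things") can be sketched as a single hand-entered axiom over a fact base. The triple format and rule below are hypothetical stand-ins for CycL, Cyc's actual representation language.

```python
# Toy common-sense axiom in the spirit of the Cyc blurb above.
# The fact encoding is invented; real Cyc holds over a million such rules.

facts = {("isa", "fred", "Person"), ("status", "fred", "deceased")}

def can_buy_things(person, facts):
    """Once people die they stop buying things: one common-sense rule."""
    return ("isa", person, "Person") in facts and \
           ("status", person, "deceased") not in facts

print(can_buy_things("fred", facts))  # False: Fred is deceased
```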
Even as he acknowledged some progress in AI research, Minsky lamented the state of the lab he founded more than 40 years ago.
"The worst fad has been these stupid little robots," said Minsky. "Graduate students are wasting 3 years of their lives soldering and repairing robots, instead of making them smart. It's really shocking."
"Marvin may have been leveling his criticism at me," said Rodney Brooks, director of the MIT Artificial Intelligence Lab, who acknowledged that much of the facility's research is robot-centered.
But Brooks, who invented the automatic vacuum cleaner Roomba, says some advancements in computer vision and other promising forms of machine intelligence are being driven by robotics. The MIT AI Lab, for example, is developing Cog.
Engineers hope the robot system can become self-aware as they teach it to sense its own physical actions and see a causal relationship. Cog may be able to "learn" how to do things.
Brooks pointed out that sensor technology has reached a point where it's more sophisticated and
Re:Will we ever have *real* AI? (Score:5, Informative)
One could argue that our brains are just synapses firing. Each one on its own knows absolutely nothing. However, it's the SYNERGISTIC effects of all the synapses working together that create our brain, which allows us to reason, etc. (Note: This is without religion getting in the way, I'd personally not go there...)
Steve
Re:Will we ever have *real* AI? (Score:2, Informative)
The problem with having "true" intelligence in machines is that we don't know how humans have "true" intelligence. We can't very well replicate something we don't know.
Re:What use is AI without an operating platform (Score:4, Informative)
The Cyc project (Score:5, Informative)
Re:Why do you say AI is going nowhere? (Score:1, Informative)
AIBO focuses on research (Score:2, Informative)
Universities are doing just that in the various RoboCup [robocup.org] events.
Re:Minsky only has himself to blame. (Score:3, Informative)
His take on perceptrons was valid and well-founded. They indeed suffered from the linear separability problem.
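The linear separability problem is easy to demonstrate in code: a single-layer perceptron learns AND (whose classes one line can separate) but can never learn XOR, no matter how long it trains. A minimal sketch:

```python
# Why single-layer perceptrons fail on XOR (the Minsky & Papert
# linear-separability critique): no single line separates XOR's classes.

def train_perceptron(samples, epochs=100, lr=0.1):
    """Classic perceptron learning rule on 2-input boolean samples."""
    w0 = w1 = b = 0.0
    for _ in range(epochs):
        for (x0, x1), target in samples:
            out = 1 if w0 * x0 + w1 * x1 + b > 0 else 0
            err = target - out
            w0 += lr * err * x0
            w1 += lr * err * x1
            b += lr * err
    return w0, w1, b

def accuracy(samples, w0, w1, b):
    hits = sum((1 if w0 * x0 + w1 * x1 + b > 0 else 0) == t
               for (x0, x1), t in samples)
    return hits / len(samples)

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
print(accuracy(AND, *train_perceptron(AND)))  # 1.0: AND is separable
print(accuracy(XOR, *train_perceptron(XOR)))  # never reaches 1.0
```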
Re:Missing Minskey Quote (Score:3, Informative)
By the by, here are a couple of articles that address and expound upon (with bigger 'public' names like Bill Joy) the progress of A.I.
May artificial intelligence remain artificial [asia1.com.sg]
A.I. Can't Yet Follow Film Script [wired.com]
Re:Will we ever have *real* AI? (Score:1, Informative)
I MUST SAY THAT'S A VERY USEFUL COMMENT !
If you had done even a *little* research you would have found LOTS of information:
- areas involved on different tasks
- interactions between axons and dendrites ( http://www.sturgeon.ab.ca/rw/nervious_system/Axon
- flows of information ( from vision areas to identification areas )
- etc etc etc
It is even possible today to simulate ( roughly ) the auditory organ and to stimulate the neurons directly to give hearing capability to the deaf.
George Heilmeier, Texas Instruments, and AI (Score:3, Informative)
From 1983 through 1991, George Heilmeier was the Chief Technical Officer at Texas Instruments. He pushed TI into massive investments in AI R&D. Some of the best technical people I knew at TI thought the AI stuff was a waste of time, but it was being pushed by Heilmeier and the executives. Marvin Minsky was one of the experts brought in as an AI consultant, and he appeared in various TI propaganda. At the time the Japanese were pushing "fifth generation computing," which included AI, so there was a push to compete with the Japanese. TI developed AI hardware and software and tried to force-fit it into various applications. They claimed various successes applying AI to industry problems, but eventually it all collapsed into a big waste of time and money. Heilmeier left at the end of the collapse.
Today you can find Heilmeier all over the place on various corporate boards and winning various awards for technical excellence. It is interesting that most of the bios you can find on the web about Heilmeier make no reference to how he led TI down the AI path to a dead end.
Brilliant idea (Score:3, Informative)
Now that makes a great deal of sense. When I was at university, I did all of my VAX work through either a terminal session or, more commonly, an emulator. It would seem to be a very worthwhile grad project to devise a robotic simulator to be used for future research. Naturally, any half-way decent implementation would allow for plug-in modules to simulate different types of robots.
It should also be able to cope with a variety of different scenarios, to focus on what the AI/robotic research in question is aimed at. Are you trying to cope with terrain, such as spider or walking robots? You should be able to simulate grass, soft/wet grass, rocks up to a certain size, hills with specific angles, etc. Pattern recognition? You must be able to simulate the article to be recognized in many complex scenarios -- rotated, in a crowd, light/dark confusion, etc. (I imagine a good gaming terrain engine could provide a good start here.)
There would be lots of possibilities for future students to extend such a simulator by adding new modules, etc., and the AI researchers/students wouldn't waste nearly so much time playing with cogs, but instead could get down to do their real work.
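The plug-in architecture proposed above might look something like this; the robot classes, terrain parameters, and distance metric are all invented purely to show the shape of such a simulator.

```python
# Sketch of a plug-in robot simulator: robot models subclass a common
# interface, and the harness runs any of them over a scenario.
# All names and numbers here are illustrative, not from a real system.

class RobotModel:
    """Base plug-in: each subclass simulates one robot type."""
    def step(self, terrain):
        raise NotImplementedError

class WalkerBot(RobotModel):
    def step(self, terrain):
        # A walker slows on steep slopes but keeps moving over rocks.
        return 1.0 if terrain["slope_deg"] < 30 else 0.3

class WheeledBot(RobotModel):
    def step(self, terrain):
        # Wheels stall on rocks larger than the wheel radius.
        return 0.0 if terrain["rock_size_cm"] > 10 else 2.0

def run_scenario(robot, terrain, steps=5):
    """Total distance covered: the kind of metric a researcher would log."""
    return sum(robot.step(terrain) for _ in range(steps))

rocky_hill = {"slope_deg": 20, "rock_size_cm": 15}
print(run_scenario(WalkerBot(), rocky_hill))   # 5.0: walker copes
print(run_scenario(WheeledBot(), rocky_hill))  # 0.0: wheels stall
```

A new robot type or terrain module drops in without touching the harness, which is exactly the extensibility argued for above.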
After all, that's the point, right? AI researchers want to work on AI -- even if it isn't as glamorous as, say, walking talking dancing robots. Right? I mean, I know that would be my dream job, to just be able to knuckle down and work on pure AI.
Re:Biology First (Score:1, Informative)
Re:Society of Mind strides (Score:2, Informative)
The thing about "Society of Mind" is that it's very difficult to take literally. Each page is its own concept - there's not a lot of high-level organization to the book. The concepts interrelate, of course, but formalizing and implementing them is tricky.
The book has certainly served as high-level inspiration for quite a lot of people. A couple of examples would be Michael Travers's LiveWorld [mit.edu] and Mark Humphrys's "World-Wide-Mind" project [w2mind.org].
But as far as I know nobody prior to me has really tried to make K-lines, polynemes, pronomes, frames, etc., and hook them all together, as described in "Society of Mind".
HAL the 15 month only computer brain (Score:2, Informative)
Re:Do not be naive (Score:3, Informative)
But what kinds of theories are those you mention? Since you are comparing the computation with the human brain, I assume you are talking about some sort of neural simulation. However all neural simulations I've heard of are some sort of glorified statistical optimizers (MLP, Recurrent networks, etc.). They approximate some sort of function, nothing more. Then there are of course the unsupervised methods, but they pretty much just sort large amounts of data according to some previously known criteria, so they are more a case of intelligence of the author than the computer system.
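The "glorified statistical optimizer" point can be made concrete: gradient descent fitting a line to samples of y = 2x + 1 "learns" in exactly the sense these systems learn, by minimizing an error function over data. The data and learning rate below are invented for the demonstration.

```python
# A learner as statistical optimizer: plain gradient descent on mean
# squared error recovers the parameters of y = 2x + 1 from samples.

data = [(x, 2 * x + 1) for x in range(-5, 6)]

w = b = 0.0
lr = 0.01
for _ in range(2000):
    # Gradients of mean squared error with respect to w and b.
    gw = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    gb = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * gw
    b -= lr * gb

print(round(w, 2), round(b, 2))  # converges to roughly 2.0 and 1.0
```

It approximates the function perfectly, yet nothing here resembles understanding; that is the commenter's point.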
If I've understood correctly we still know very little about the internal functioning of the human brain on the theoretical level. Not by far enough to reverse engineer any significant parts of it.
forum for this (Score:2, Informative)
That Dell ad at the top of this page IS GOING TO SEND ME INTO EPILEPTIC FITS. I hope they don't mind when I sue.
Re:Do not be naive (Score:3, Informative)
Consider the raw data complexity of the human brain
That's a lame excuse. It would be an accomplishment to create a machine as intelligent as a mosquito, but that hasn't been done yet. Saying that we need to wait 'til computers are as fast as human brains is just hiding from the real problem.