AI Going Nowhere?
jhigh writes "Marvin Minsky, co-founder of the MIT Artificial Intelligence Laboratory, is displeased with the progress in the development of autonomous intelligent machines. I found this quote more than a little amusing: '"The worst fad has been these stupid little robots," said Minsky. "Graduate students are wasting 3 years of their lives soldering and repairing robots, instead of making them smart. It's really shocking."'"
What about my AIBO? (Score:3, Interesting)
Re:What about my AIBO? (Score:4, Insightful)
To an extent, yes, it has decent pattern recognition. Can it pick you out from the rear? No. From the side? No.
Can it simulate and fool you into thinking it is showing emotions? Yes. Is it anything but an expensive toy? No.
The AIBO is amazing, but it hardly does what people think it does. And that is the key with the AIBO: it does a lot of things that fool humans quite well.
It will get better, but it is hardly near AI material.
Re:What's the difference? (Score:4, Insightful)
The same difference between cheating on a test in a subject you've never taken and actually knowing the material: as soon as you're asked a question that isn't on your crib sheet, the charade is over.
An AIBO has a pre-programmed set of behaviors, and any stimulus it isn't programmed to respond to will not have a realistic effect. The same is true of Loebner Prize winner Alice -- the only difference is the size of the pre-programmed response database. Ask it something it's never heard, and it will choke.
When these AIs are able to produce convincing responses to new stimuli, then I'll say that the difference between "fooling" and actually thinking has become irrelevant.
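To make that concrete, here is a minimal sketch (a toy of my own, not ALICE's actual engine, which uses AIML) of how a canned-response bot works -- and why it chokes the moment a question misses its database:

```python
import re

# Toy canned-response chatbot (illustration only; not ALICE's engine).
# The "intelligence" is a lookup table: any input that misses the table
# falls through to a stock evasion.
RESPONSES = [
    (re.compile(r"\bhello\b", re.I), "Hi there! How are you today?"),
    (re.compile(r"\bhow are you\b", re.I), "I'm doing fine, thanks for asking."),
    (re.compile(r"\byour name\b", re.I), "My name is not important."),
]

def reply(user_input: str) -> str:
    for pattern, canned in RESPONSES:
        if pattern.search(user_input):
            return canned
    # Anything outside the pre-programmed database: the charade is over.
    return "That is interesting. Tell me more."

print(reply("Hello!"))                             # looks smart
print(reply("Why is ice less dense than water?"))  # chokes
```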
Re:What about my AIBO? (Score:3, Insightful)
Re:What about my AIBO? (Score:5, Insightful)
Re:What about my AIBO? (Score:5, Insightful)
It doesn't work this way, and yes, there is a difference. Having an outward appearance of intelligence is not enough to show intelligence. Read Searle and Block's discussions of the Chinese Room argument -- it's a fascinating and eye-opening read (I think it was Block who -- quite convincingly, IMHO -- makes the case that most of our intelligence is innately biological, and that "strong AI" is not even possible with what we know today).
IMHO, one of the problems with AI is that we don't even know what human intelligence is, and until there is a fundamental advance (not technological, but in our understanding of our human/biological mind), it seems to me the most we can hope for are machines that mindlessly ape intelligent behavior but are not intelligent in any but very superficial ways or by very loose definitions.
Something that mimics the outward appearance of intelligence is a far cry from what, hopefully, we'll be capable of in the (probably still distant?) future.
Re:What about my AIBO? (Score:4, Interesting)
What about pattern recognition? How long do parents spend holding up pictures of various animals or various shapes for their children to identify?
When it gets right down to it, every one of us has been significantly programmed by our parents, teachers, and government. I am not arguing against the system, just saying that's how it is. I don't believe AI as anticipated will ever truly exist, because the degree of creativity and imagination desired exists only in humans, whether thanks to an all-knowing, all-powerful creator or to millions of years of mutations.
Re:What about my AIBO? (Score:5, Insightful)
My dog can do the same thing.
Text of Article (Score:3, Informative)
By Mark Baard
Story location: http://www.wired.com/news/technology/0,1282,58714,00.html
02:00 AM May. 13, 2003 PT
Will we ever make machines that are as smart as ourselves?
"AI has been brain-dead since the 1970s," said AI guru Marvin Minsky in a recent speech at Boston University. Minsky co-founded the MIT Artificial Intelligence Laboratory in 1959 with John McCarthy.
Such notions as "water is wet" and "fire is hot" have proved elusive quarry for AI researchers. Minsky accused researchers of giving up on the immense challenge of building a fully autonomous, thinking machine.
"The last 15 years have been a very exciting time for AI," said Stuart Russell, director of the Center for Intelligent Systems at the University of California at Berkeley, and co-author of an AI textbook, Artificial Intelligence: A Modern Approach.
Russell, who described Minsky's comments as "surprising and disappointing," said researchers who study learning, vision, robotics and reasoning have made tremendous progress.
AI systems today detect credit-card fraud by learning from earlier transactions. And computer engineers continue to refine speech recognition systems for PCs and face recognition systems for security applications.
"We're building systems that detect very subtle patterns in huge amounts of data," said Tom Mitchell, director of the Center for Automated Learning and Discovery at Carnegie Mellon University, and president of the American Association for Artificial Intelligence. "The question is, what is the best research strategy to get (us) from where we are today to an integrated, autonomous intelligent agent?"
Unfortunately, the strategies most popular among AI researchers in the 1980s have come to a dead end, Minsky said. So-called "expert systems," which emulated human expertise within tightly defined subject areas like law and medicine, could match users' queries to relevant diagnoses, papers and abstracts, yet they could not learn concepts that most children know by the time they are 3 years old.
"For each different kind of problem," said Minsky, "the construction of expert systems had to start all over again, because they didn't accumulate common-sense knowledge."
Only one researcher has committed himself to the colossal task of building a comprehensive common-sense reasoning system, according to Minsky. Douglas Lenat, through his Cyc project, has directed the line-by-line entry of more than 1 million rules into a commonsense knowledge base.
"Cyc knows that trees are usually outdoors, that once people die they stop buying things, and that glasses of liquid should be carried right-side up," reads a blurb on the Cyc website. Cyc can use its vast knowledge base to match natural language queries. A request for "pictures of strong, adventurous people" can connect with a relevant image such as a man climbing a cliff.
Even as he acknowledged some progress in AI research, Minsky lamented the state of the lab he founded more than 40 years ago.
"The worst fad has been these stupid little robots," said Minsky. "Graduate students are wasting 3 years of their lives soldering and repairing robots, instead of making them smart. It's really shocking."
"Marvin may have been leveling his criticism at me," said Rodney Brooks, director of the MIT Artificial Intelligence Lab, who acknowledged that much of the facility's research is robot-centered.
But Brooks, who invented the automatic vacuum cleaner Roomba, says some advancements in computer vision and other promising forms of machine intelligence are being driven by robotics. The MIT AI Lab, for example, is developing Cog.
Engineers hope the robot system can become self-aware as they teach it to sense its own physical actions and see a causal relationship. Cog may be able to "learn" how to do things.
Brooks pointed out that sensor technology has reached a point where it's more sophisticated and
Sour Grapes (Score:5, Insightful)
Re:Sour Grapes (Score:5, Insightful)
He's right. Theoretical work has ground to a halt in the U.S. Universities have succumbed to business-oriented research to make money -- although, given all that cash, it's amazing that tuition keeps skyrocketing.
Government is now pretty much owned by corporate people, and they aren't meting out grant money for long-haired theorizing about AI. Grant-driven university research is now pretty much a free source of corporate R&D.
Minsky's right: AI as science has died, not because it was impossible to improve the theories, but because it wasn't making any money.
Semi-artificial intelligence (Score:3, Interesting)
The real money will begin to flow once humankind stops being scared of directly integrating humans into computer networks.
I am not sure when, but ultimately all keyboards, mice and screens will take their places in museums. People will communicate with computers and each other by connecting computers directly to their brains. Thus, the solid knowledge of natural i
What use is AI without an operating platform (Score:5, Insightful)
I understand his frustration with the general pace of progress. But those grad students are building a strong foundation for their later work, which may very well meet the goals he is espousing. No need to have design flaws in implementation down the road because the engineer wasn't properly educated in physical design as well as logical design.
-Rusty
That's a load of rubbish (Score:5, Insightful)
Building any form of AI system is not easy, but copping out of it by building toys is not the answer. We already have platforms for AI; they're called line terminals. Things like pattern matching do not require a fully autonomous robot, after all.
Minsky is right; what's new to come out of actual AI research in the last 30 years?
Re:That's a load of rubbish (Score:5, Insightful)
If a human (or any animal) were left to grow with no senses and no method of communication (or the most very basic input/output, analogous to your line terminal), what sort of intelligence would develop? Probably nothing very coherent.
BTW, AI is most certainly not pure math.
MOD PARENT UP (Score:5, Insightful)
I am an AI researcher and the parent poster is speaking truthfully.
The main challenges in AI at the moment do not concern building the physical robots -- e.g. a piece of kit on wheels with IR sensors or such things.
The main challenges in AI concern applying some very complicated math to solve problems like pattern recognition, density estimation and other forms of machine learning.
It seems to me that a large number of AI PhD students spend their lives tinkering with the mechanics and electronics of the robots that will ultimately be used to test their algorithms. This is wasted time; a good electronics graduate should be able to do the tinkering, it shouldn't require a prospective AI PhD student to do it.
I can see the point in the PhD student learning a little about the hardware that they want to run their algorithms on (so that they know the limitations and common problems with real hardware), but they should not spend all their time doing that and wasting the opportunity to spend their time contributing to their field (i.e. AI, not mechanics or electronics).
That said, many AI labs do not have the funding to be able to pay full-time hardware technicians, so in many cases the PhD student *has* to do the tinkering.
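To make one of those terms concrete: density estimation just means recovering a probability distribution from raw samples. A minimal sketch (my own illustration) using Gaussian kernel density estimation -- no soldering iron required:

```python
import numpy as np

# Gaussian kernel density estimation: recover a probability density from
# raw samples -- one of the "complicated math" problems the parent means.
def kde(samples, query_points, bandwidth=0.5):
    """Estimate p(x) at query_points by averaging Gaussian kernels."""
    diffs = (query_points[:, None] - samples[None, :]) / bandwidth
    kernels = np.exp(-0.5 * diffs**2) / np.sqrt(2 * np.pi)
    return kernels.mean(axis=1) / bandwidth

rng = np.random.default_rng(0)
# Two clusters of observations -> the estimate should come out bimodal.
data = np.concatenate([rng.normal(-2, 0.5, 200), rng.normal(3, 1.0, 200)])
xs = np.linspace(-5, 7, 5)
for x, p in zip(xs, kde(data, xs)):
    print(f"p({x:+.1f}) ~ {p:.3f}")
```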
Re:MOD PARENT UP (Score:4, Insightful)
In most cases, the hardware and its limitations can be simulated. The only reason that most robotic AI projects are embedded in hardware is because it makes good eye candy for the science press, funders, etc. If you have a good simulation of the environment and the platform, you no longer need to build the hardware for AI research to proceed.
Also, why does one need to build new platforms each time a project ensues? Many robotic components could be reused so that only processor boards, motive actuators, or sensors would need to be updated. The reuse of firmware would cut down on the amount of programming time, as well. I think a good MS thesis would be to develop a kind of common robotic architecture along with a simulation testbed. This would allow the AI researchers to get back to work and only do last minute tinkering.
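As a sketch of what such a common architecture might look like (the interface names here are invented for illustration; I'm not pointing at an existing framework), the key is an abstract platform so the same control code runs against a simulation or the real hardware:

```python
from abc import ABC, abstractmethod

# Hypothetical common robot interface: the research code talks to an
# abstract platform, so the same controller drives a simulation or
# real hardware. All names here are invented for illustration.
class RobotPlatform(ABC):
    @abstractmethod
    def read_ir_sensors(self) -> tuple[float, float, float]: ...
    @abstractmethod
    def set_wheel_speeds(self, left: float, right: float) -> None: ...

class SimulatedPlatform(RobotPlatform):
    """Stand-in world model; good enough for most algorithm work."""
    def read_ir_sensors(self):
        return (0.2, 0.9, 0.1)                    # fake proximity readings
    def set_wheel_speeds(self, left, right):
        print(f"sim: wheels set to {left:.1f}, {right:.1f}")

def avoid_obstacles(robot: RobotPlatform):
    """The actual research code: identical on sim or hardware."""
    left_ir, front_ir, right_ir = robot.read_ir_sensors()
    if front_ir > 0.5:                            # something close ahead
        robot.set_wheel_speeds(0.5, -0.5)         # spin away from it
    else:
        robot.set_wheel_speeds(1.0, 1.0)          # cruise forward

avoid_obstacles(SimulatedPlatform())
```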
Brilliant idea (Score:3, Informative)
Now that makes a great deal of sense. When I was at university, I did all of my VAX work through either a terminal session or, more commonly, an emulator. It would s
Re:MOD PARENT UP (Score:4, Interesting)
I get the impression though that many AI researchers see Hofstadter as a heretic. That's too bad, because I think the ideas he and his team have developed hold more promise than any other approach to AI currently extant.
Re:What use is AI without an operating platform (Score:4, Informative)
Re:What use is AI without an operating platform (Score:4, Interesting)
About Minsky... (Score:5, Insightful)
Re:About Minsky... (Score:5, Interesting)
Oh, say, Rod Brooks, Tomas Lozano-Perez, Hal Abelson, Gerry Sussman, Eric Grimson, Pat Winston, Tom Knight
The difference between Minsky and the rest is precisely as the first poster asserted. Having read Minsky's books, known him professionally and personally, and having taken his course, I must agree that the weight placed on his words is not equal to their value. As others have observed (I forget whom and where), Minsky's original contributions were interesting ramblings at the edge of a new field, which happened to pinpoint rich veins of research in some cases and kill off valuable paths in others (think perceptrons, which are, yes, in fact, very useful things, and yes, in fact, do model real neurons reasonably well, and no, are not computationally impoverished unless you abide by Minsky and Papert's artifice of only single layers). In other words, in some cases he got lucky; in others he fell flat. This initial success led him to continue pontificating (think "Society of Mind", a book of little real contribution) while doing marginally small amounts of actual research. Rod Brooks, in contrast, has made far more, and far deeper, contributions working on his subsumption architecture.
Minsky's course (at the advanced graduate level) consists of students listening to his musings and ramblings, which he often repeats through the term, since he has no syllabus, no agenda, and no apparent desire to teach. When he gives talks, they are all extemporaneous; someone like Churchill could pull that off, but Minsky's stream-of-consciousness style keeps his acolytes happy while leaving those with a real thirst for knowledge quite parched. Does this not fit the accusation?
So what if Minsky thinks graduate students shouldn't be soldering robots? Does that matter? So what if the current AI field isn't following his pet projects -- is he making any contributions himself? We've made tremendous strides in AI over the past decade; they just haven't been where Minsky thinks they should be, despite his questionable overall track record. Exactly why should anyone care that much?
Why do you say AI is going nowhere? (Score:5, Funny)
of course it's going nowhere... (Score:5, Funny)
That's OK... (Score:5, Funny)
I don't see NON-ARTIFICIAL intelligence progressing a whole hell of a lot either...
Maybe the problem is Minsky himself? (Score:5, Interesting)
Re:Maybe the problem is Minsky himself? (Score:5, Insightful)
Now Minsky, never wanting to admit his life's work has been a dead end, comes out saying that it's all these other researchers working in other directions who are at fault for there being no progress. I imagine he believes that if only they'd all climb the tree with him, the trip to the moon could really start.
Re:Maybe the problem is Minsky himself? (Score:4, Insightful)
But the real AI comes when you create a 'stupid' system that is able to become smart through learning and training. He feels disappointed because nobody (or almost nobody) is focusing on this direction.
You have smart toys a la Aibo and smart systems a la Eliza, and a lot of people are working towards creating smarter toys and smarter systems, but the real breakthrough will come when somebody manages to create the dumbest system possible.
Re:Maybe the problem is Minsky himself? (Score:4, Interesting)
Robots are not a bad thing to work on if other kinds of AI are going to have a chance, because a more holistic kind of AI would recognize that intelligence and cognition first emerged as a function of having a physical body. On the other hand, it's just robotics, it's not AI itself.
Also, AI was good for the hackers who supported its development on computer workstations. Systems like the Lisp Machine still compare very well to current languages and tools.
Re:Maybe the problem is Minsky himself? (Score:4, Interesting)
Two unknowns dont make stuff work (Score:4, Insightful)
I very much enjoy the works of Markram and Tsodyks. What they mainly analyze is how two nerve cells can interact with each other. They showed how they change their connection weights and how the timing of spikes, nerve impulses, affect how neurons connect to each other and how they transmit information.
While these studies tell us a lot about the underlying biology they do not tell us what these modes of information transmission are used for. For years it had been known that synapses have complex nonlinear properties. Biology pretty much does not constrain what functions neurons compute.
That's why I do not believe that such studies will bring us nearer to real AI anytime soon. The algorithms coming from these systems are severely underconstrained. A lot of modelling has followed the pioneering works of Markram and Tsodyks, one line of it being Maass's. All these algorithms are very fascinating and might yield insight into the functioning of the nervous system.
The algorithms, however, are light-years from being applicable to real-world problems. The field of AI is old and in some sense quite mature. None of the "biologically inspired" algorithms today can compete with state-of-the-art machine learning techniques.
Grad students having fun (Score:3, Insightful)
So hire and pay them money so they do real research instead of having fun. Otherwise quit your bitchin'. I personally think building stupid robots is cool.
Well... (Score:3, Interesting)
I'd consider that pretty much intelligent, compared to some people I know. Then again, some people I know can hardly be described as sentient, let alone intelligent.
Intelligence isn't the problem (Score:5, Insightful)
reply to MYCROFTXXX@lunaauthority.com
Re:Intelligence isn't the problem (Score:5, Funny)
Biology First (Score:5, Insightful)
Re:Biology First (Score:5, Insightful)
And artificial intelligence doesn't necessarily have to reflect human intelligence.
disappointing (Score:5, Insightful)
This is the most interesting comment to me. Because we understand the nature of the process that produces supposedly 'intelligent' results (and we don't understand the same process in ourselves), we perhaps rightly view the resulting system as just an application.
Seems like Minsky is throwing all his toys out of the pram because he doesn't want to admit what everyone else has been saying for a while: that whether a computer can think is at best an astonishingly difficult question to answer and at worst meaningless. I'm a grad student who's just spent a year looking at computational linguistics and semantics (amongst other things), and the most debilitating restriction on the semantic side of things is the problem of so-called 'AI-completeness', which essentially says that if you solve this problem you have a computer that, externally at least, thinks. Really simple things like anaphora resolution are AI-complete in the general case. If we could have solved this problem by now, I think it's fair to say we would have done, given its massive importance. However, we know that the brute-force approach is ridiculously intractable, and we can't figure out how to do it any more cleverly. Roger Penrose argues that this is due to the fundamental Turing-style restrictions that we place on our notion of computing. Until we get a paradigm shift at that level, we're likely never to solve the general case.
And I'm sure that Minsky is aware that attempts to solve constrained domain inference and understanding have been taking place for a good long time now. I just don't see why he's so upset that the field of 'AI' (which is a nebulous catch-all term at best) has shifted its focus to things that we stand a cat in hell's chance of solving, and that have important theoretical and practical applications (viz. machine learning). Replicating human thought is not the be-all and end-all, and you can argue that it's not even that useful a problem.
Robots, though, I agree with. Can't stand the critters
Henry
The Cyc project (Score:5, Informative)
Re:The Cyc project (Score:4, Interesting)
This reminds me of a quote from French mathematician Henri Poincaré: "Just as houses are made of stones, so is science made of facts; and just as a pile of stones is not a house, a collection of facts is not necessarily science."
Applied to the Cyc project: a collection of facts is not necessarily intelligence.
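To see the point in miniature, here is a toy sketch (my own illustration; real Cyc stores its assertions in the CycL language) of a Cyc-style fact base with a single hand-written inference rule. The "intelligence" extends exactly as far as the rules someone typed in:

```python
# Toy Cyc-style fact base (illustration only; real Cyc uses its own
# representation language, CycL). Facts are subject-predicate-object
# triples; one hand-written rule derives new facts from old ones.
facts = {
    ("Tree", "isUsuallyLocated", "Outdoors"),
    ("Fred", "isA", "Person"),
    ("Fred", "hasStatus", "Dead"),
}

def forward_chain(facts):
    """Apply the hand-coded rules until no new facts appear."""
    derived, changed = set(facts), True
    while changed:
        changed = False
        for (s, p, o) in list(derived):
            # Rule: once people die, they stop buying things.
            if p == "hasStatus" and o == "Dead":
                new = (s, "canBuyThings", "False")
                if new not in derived:
                    derived.add(new)
                    changed = True
    return derived

for fact in sorted(forward_chain(facts)):
    print(fact)
```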
Shocking, indeed! (Score:5, Funny)
What? Grad students are doing tedious, repetitive, mindless labor instead of making glamorous, thrilling, world-changing breakthroughs? This is an outrage!
Thank heaven I have a Ph.D. and got to spend this morning in the thrilling activity of drawing blood from 30 angry mice. I'm hoping to have the urine smell washed off before lunch.
Re:Shocking, indeed! (Score:3, Funny)
Now *that* is a great name for a band!
But Robotics Must Precede AI (Score:4, Insightful)
Trying to program intelligence purely with software puts the researcher at a disadvantage, since even the most fundamental rules and attributes of things (fire is hot, water is wet) have to be explicitly entered as hard-coded constants.
Once robotics advances to the point where mobility, vision, and speech recognition can be taken for granted, then AI can be programmed as an add-on.
Body first, mind second - That's how animals evolved on this planet, and it's how, I believe, Rodney Brooks approaches this field.
Re:But Robotics Must Precede AI (Score:3, Insightful)
"Body first, mind second" sounds nice, but without reproduction and a mechanism for evolution you're not doing anything but creating an environment for your AI to interact with - you're not creating the pressures that caused evolution of intelligence in nature. So why go through the trouble of mechanics when you c
What is the purpose of AI? (Score:3, Insightful)
AI today has nothing to do with intelligence. Its all basically rule-based procedural programming. While this allows us to make some really neat applications like automatic vacuum cleaners and pool scrubbers, it has nothing to do with "intelligence".
The human mind is not rule-based -- we impose a framework of rules to allow everyone to live together in relative harmony. The core of our being -- how our mind actually works -- remains an absolute mystery.
Re:What is the purpose of AI? (Score:3, Insightful)
So, when you see a cliff, you blindly walk off it? After all, there are no rules disallowing it, and it's not clear that anyone else's "relative harmony" would be disturbed by such a thing.
Humans have many rules that they create and live by themselves - many having to do with self-preservation, self-actualization, and motivation - that have little to do with your simplistic explanation.
Pot, meet Kettle (Score:5, Insightful)
Yeah, much more shocking than the -- decades -- he (and others in the 'hard AI' camp) have been spending? They've made oh-so-much more progress, haven't they?
Rodney Brooks made more progress with his robots in the early nineties than the whole hard AI camp did in 3 decades. I remember seeing a documentary once featuring a huge robot that used a traditional procedural program to navigate through a boulder-strewn field. It took about 3 HOURS to decide where to put its foot next. Meanwhile, little subsumption-architecture-based robots were crawling around like ants, in real time. (Oh, and some of them had to learn to walk from first principles every time they were turned on -- only took about half an hour!) That's the most damning evidence of the failure of hard AI I can think of.
As others here have said, what good is a brain until we get a useful BODY working? Maneuverability and acute senses are a must before an artificial intelligence can do anything useful, or learn from its environment effectively.
Great Minsky Quote (Score:3, Insightful)
Minsky only has himself to blame. (Score:4, Insightful)
Promising subfields like perceptrons were intentionally quashed by him... he went out of his way to strangle investment and research in areas he considered to be a dead end. We're not literature majors: we can't just all say the same thing in a party over wine and cheese and call it progress.
Even bad ideas, when well explored, can give new meaning and better approaches to a field. And since this is research, no one knows the correct answer: even a dumb-seeming idea may turn out to be the right one -- or give us clues about features the right answer needs to have.
Of course we've had major advances in AI. One of the challenges of AI, as the article points out, is that once something is well understood, it is defined as being outside the AI field. Computer vision, face recognition, voice and speech recognition. Conversation engines like SmarterChild. No, this isn't HAL, but they are good, positive steps in the right direction.
Re:Minsky only has himself to blame. (Score:3, Informative)
His take on perceptrons was valid and well-founded. They did indeed suffer from the linear separability problem.
Re:Minsky only has himself to blame. (Score:3, Insightful)
The original poster probably meant neural networks more than perceptrons. By the time his paper was published (1969), neural networks were far more advanced than a simple perceptron and had easily overcome the linear separation problem. Some people claim he and Papert engineered this paper to get a juicy DARPA grant that was just about to be assigned. His paper effectively killed research in this area for almost 20 years.
Minsky has always been a bit of a weasel and knows very well how to pull the st
Re:Minsky only has himself to blame. (Score:3, Interesting)
The speech recognition community also investigated techniques using neural networks, although they did not produce a clear win o
Old guard moving out (Score:5, Interesting)
He comes across as affable but bitter. I found it strange that though he continually complains about the leadership of the AI lab, he and his protege Winston were in control of it for some ~30 years without making any groundbreaking progress. In fact, Minsky's latest work, "The Emotion Machine," is simply a retread of his decades-old "Society of Mind." I suspect that now that Brooks and the new guard are moving in, the old guard is looking for someone to blame its lack of results on.
Minsky + Brooks (Score:4, Interesting)
"AI has been brain-dead since the 1970s."
I agree, unfortunately. At least, what was traditionally meant by "AI" has been brain-dead. There is very little focus in the field today on human-like intelligence per se. There is a lot of great work being done that has immediate, practical uses. But whether much of it is helping us toward the original long-term goal is more questionable. Most researchers long ago simply decided that "real AI" was too hard, and started doing work they could get funded. I would say that "AI" has been effectively redefined over the past 20 years.
"The worst fad has been these stupid little robots."
Minsky's attitude towards the direction the MIT AI lab has taken (Rod Brooks's robots) is well-known. And I agree that spending years soldering robots together can certainly take time away from AI research. But personally, I find a lot of great ideas in Rod's work, and I've used these ideas as well as Marvin's in my own work. Most importantly, unlike most of the rest of the AI world, Rod *is*, in the long run, shooting toward human-level AI.
Curiously, just last month I gave a talk at MIT, titled "Putting Minsky and Brooks Together". (Rod attended, but unfortunately Marvin couldn't make it.) The talk slides are at
http://www.swiss.ai.mit.edu/~bob/dangerous.pdf [mit.edu].
In particular, I shoot down some common misperceptions about Minsky, including that he is focused solely on logical, symbolic AI. Anyone who has read "The Society of Mind" will realize what great strides Minsky-style AI has made since the early days. I also show what seem like some surprising connections to Brooks's work.
- Bob Hearn
Minsky is dangerous (Score:5, Insightful)
Before you yell flamebait or troll, let me explain.
I have been following the progress of various AI technologies, including neural nets and adaptive logic networks, for many, many years now. Years ago perceptrons were first developed, and it was shown that they could learn simple patterns. Perceptrons were basically two layers of software-simulated neurons. They worked, and researchers were fascinated and worked on them regularly.
Minsky, being the "highly regarded" "leader" in AI, wrote a paper that proved that these perceptrons could never learn more complicated patterns, and threw a bunch of math at the reader. So people stopped. After all, there was a mathematical proof that perceptrons weren't going anywhere. Research skidded to a halt for decades because of Minsky.
Of course, then someone developed the (gasp!) THREE-layer perceptron/neural net, and sure enough, with the right formula, it could learn much more complicated tasks.
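That fix is easy to demonstrate today. Below is a minimal sketch (ordinary backpropagation on a one-hidden-layer network; my own toy code) learning XOR, the canonical function a single-layer perceptron provably cannot represent because it is not linearly separable:

```python
import numpy as np

# Toy demo: one hidden layer learns XOR, which a single-layer perceptron
# provably cannot represent (XOR is not linearly separable).
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)     # hidden layer
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)     # output layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):                             # plain backpropagation
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)           # output-layer error
    d_h = (d_out @ W2.T) * h * (1 - h)            # hidden-layer error
    W2 -= h.T @ d_out; b2 -= d_out.sum(0)
    W1 -= X.T @ d_h;   b1 -= d_h.sum(0)

print(out.round(2).ravel())   # converges toward [0, 1, 1, 0]
```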
Minsky, in my opinion, does this regularly. The problem is that he has a reputation in the industry as being a leader (I'm not sure why).
He's already lost us two or three decades of research because of his "leadership" -- I am terrified that he might cost us more development in the future.
Where could we have been if Minsky wasn't always going around half-cocked, screaming that he is right? "Robots are useless!" is history repeating itself and him trying to get more press. Keep developing, guys; just ignore the peanut gallery. There's always someone who says it can't work (ahem, Minsky) -- it can and it will.
Critical? (Score:5, Insightful)
People just accept it, and progress is delayed.
Why is it his fault that there are so many followers? If anything is to blame, it is that these researchers just blindly follow whatever he's saying rather than take a good critical look at what is going on.
If his math and theory "proved" that an area of AI was a dead end, and it wasn't, his math/theory was wrong. It is a sad state when nobody dares challenge the status quo.
I wonder what... (Score:3, Insightful)
About 20 years ago AI was going down the craperroo until folks like Brooks decided that the AI field would be better served by moving it from the more theoretical GOFAI method to a more applicable style. Revitalized everything.
Re:I wonder what... (Score:3, Interesting)
Rodney Brooks (who's The Man) said something like "a [working] robot is worth a thousand papers." Instead of a top-down view, subsumption architecture robots have a tight connection to sensing and action, but often no memory. One such robot was able to search out, find and grab empty Coke cans, then take them to the trash!
(semiquote from Steven Levy's "Artificial Li
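For anyone who hasn't seen it, the flavor of subsumption architecture fits in a few lines. This sketch (mine, not Brooks's code, and simplified to fixed-priority arbitration) shows purely reactive layers with no world model, where a higher-priority behavior suppresses the ones below:

```python
# Toy subsumption-style controller (my sketch, not Brooks's code, and
# simplified to fixed-priority arbitration): purely reactive layers,
# no world model, higher layers suppress the ones below.
def avoid_obstacles(sensors):
    return "turn_away" if sensors["obstacle_ahead"] else None

def collect_cans(sensors):
    if sensors["can_in_gripper"]:
        return "drive_to_trash"
    if sensors["can_visible"]:
        return "approach_can"
    return None                        # this layer has nothing to say

def wander(sensors):
    return "drive_forward"             # lowest layer always has an opinion

LAYERS = [avoid_obstacles, collect_cans, wander]   # priority order

def step(sensors):
    for layer in LAYERS:
        action = layer(sensors)
        if action is not None:         # first opinionated layer wins
            return action

print(step({"obstacle_ahead": False, "can_visible": True,
            "can_in_gripper": False}))  # -> approach_can
```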
Artificial Intelligence Is Magic (Score:3, Interesting)
Now, with that in mind, let's look at artificial intelligence. AI has always been about trying to convince an audience that a machine is thinking. This is demonstrated by the very existence of the Turing test and many products (such as the Aibo, Furby, etc.) that try to mimic emotions. If the audience is entertained, amused, or convinced, the AI is considered good. Bad AI is when the audience can see right through it.
Artificial intelligence is magic. It's a trick. It's an illusion.
It is no surprise, then, that AI hasn't really advanced. The showman's trade has been practically unchanged for hundreds of years: razzle-dazzling an audience may involve new technology, but the act itself stays the same. Even the cases where "artificial intelligence" is used to aid in medical diagnosis ("expert systems") or manufacturing really only follow man-made logical structures. The computers aren't thinking; they're only doing what they're told to do, even if indirectly. The end result is impressed people who think the machine is smart.
Of course, you don't have to take my word for it. If you want to see how badly AI is going nowhere, I highly recommend reading The Cult of Information by Theodore Roszak [lutterworth.com]. While his focus is not on the fallacy of AI, it covers it in context with the much broader disillusionment with computers in society.
Now, what does AI need in order to progress? Probably AI creating other AI. Something with a deeper embodiment of evolution. As long as it's man-made, it will never be intelligent, just following a routine. Of course, I am going to stop right here... I am not qualified to offer a solution to these obstacles.
Missing Minsky Quote (Score:4, Funny)
"How the hell am I ever going to be able to download my brain into these damn little robots if they don't hurry up and make them smarter? I'm running out of time, dammit!!!"
Re:Missing Minsky Quote (Score:3, Informative)
By the by, here are a couple of articles that address and expound upon (with bigger 'public' names like Bill Joy) the progress of A.I.
May artificial intelligence remain artificial [asia1.com.sg]
A.I. Can't Yet Follow Film Script [wired.com]
The reason AI-ism (Score:3, Funny)
People justify their robophobia by pointing to these fictional examples, but if recent murder statistics are to be believed, the score is a bit lopsided.
This kind of prejudicial attitude must end.
Smarter than Junk Mail Senders (Score:3, Funny)
Hey, that thing is already smarter than the companies that continue to send junk mail to my grandfather who has been dead for 22 years, now. Maybe they should get that software to manage their mailing list?
Don't build robots, simulate them (Score:5, Interesting)
John E. Laird and Michael van Lent, "Human-Level AI's Killer Application: Interactive Computer Games," AI Magazine (American Association for Artificial Intelligence), Summer 2001, pp. 15-25.
My summary of the above - the AI in games might not be too hot (some would dispute with the academics about that but let it go), but game environments themselves are complex enough to pose a challenge for state-of-the-art AI researchers.
I went to a "BOOM" conference at Cornell... (Score:5, Interesting)
So I found myself standing in front of a computer screen. It was a worm swimming through water! In 3D! In real time! After I pushed my jaw shut, I began to ask the genius student some questions...
"Is that real-time?" "Well, actually, no, that is a 10 second looping clip that took a week to calculate."
"Well, I see a neural map there. Is that complete?" "Well, actually, no, that is a simplified version of the real nematode nervous system, on the order of about 1 simulated neuron to 10 actual neurons."
"So you simulate neurons! That's awesome. Let's see the code." (He proceeds to flip through 4-5 pages of very sophisticated-looking mathematical equations to describe the behavior of ONE neuron.)
What a let-down! No wonder Minsky is pissed, real AI is HARD!
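The "pages of equations per neuron" complaint rings true even at the shallow end. For scale, here is about the simplest spiking-neuron model in the textbooks, the leaky integrate-and-fire neuron (a sketch of mine; the biophysical models the poster saw are vastly more detailed):

```python
# Leaky integrate-and-fire neuron, about the simplest spiking model there
# is. Membrane equation: tau_m * dV/dt = -(V - V_rest) + R * I(t).
tau_m, v_rest, v_thresh, v_reset, R = 10.0, -65.0, -50.0, -70.0, 1.0
dt, current = 0.1, 20.0               # 0.1 ms time step, constant input

v, spike_times = v_rest, []
for step in range(1000):              # simulate 100 ms
    dv = (-(v - v_rest) + R * current) / tau_m
    v += dv * dt                      # forward-Euler integration
    if v >= v_thresh:                 # threshold crossed: emit a spike
        spike_times.append(step * dt)
        v = v_reset                   # and reset the membrane

print(f"{len(spike_times)} spikes in 100 ms, "
      f"first at t = {spike_times[0]:.1f} ms")
```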
Software is behind, not hardware (Score:5, Interesting)
Re:Software is behind, not hardware (Score:3, Insightful)
Minsky is in his own little world (Score:3, Insightful)
I suspect part of Minsky's problem is frustration that his ideas about AI aren't bearing fruit, so he's going to take it out on other people's different approaches. It's not much different from the Perceptrons paper he co-wrote in the late 60's that nearly killed Neural Network research for most of the next decade. Never mind that there was plenty of useful neural network research to be done that avoided the failings of the perceptron model.
In my opinion, if we had the holistic understanding of intelligence that would let us use Minsky's type of approach to AI, there wouldn't be anything left to do but implementation! One cannot just a priori assume all principles of intelligence by self-examination, and that's where he fails. There are interesting things to learn in that approach, but a large number of them have already been learned, so people are turning to other means (bottom-up approaches focusing on self-organization are doing well and leading to new discoveries) to get a broader understanding of what is involved in intelligence.
Just because Minsky has sour grapes doesn't mean that the robot people aren't doing useful research.
AI winter II ? (Score:5, Insightful)
"AI winter" [216.239.51.104] is the name given to the collapse of strong AI as a business model in the mid 80's - expert systems and symbolic AI in general didn't deliver on their promises, and so the money went away. As a guy who got his doctorate in AI in 1985, I can tell you all about it.
One of the major causes of AI winter was researcher hubris - lots of people hacked up systems that appeared to solve 80 percent of certain complex problems and then said "all that stands between us and a complete solution is money and time". For many of those systems, solving the last 20 percent would have taken 2000 percent of the time, if it could have been done at all. The tragedy of AI winter, though, is that basically all of symbolic AI was abandoned, though some of it is creeping back out into the light with obfuscated syntax (see my
What Minsky sees here is a lot of people heading down the same path, but with neural nets and small robots instead of expert systems. The new systems are doing some interesting things relative to the old symbolic AI systems (though they do have the advantage of 20 years of Moore's law to help them). But, will they scale up? Right now, nobody knows. If they don't, the last thing the field needs is another cycle of overpromise/underdeliver/abandon.
Maybe AI is just plain hard, and cracking it will take longer than one or two computer industry business cycles.
i thought it was about al gore (Score:5, Funny)
well duh!
what idiot made the lowercase L and uppercase I look the same?
but, since we're on the subject, did you know Al Gore invented the field of AI?
Marvin Minsky is an idiot (Score:4, Insightful)
Minsky belongs to the old school of AI thinking. These guys believe that it is possible to make statements about intelligence itself, without considering the interactions of the organism/agent with its environment or the underlying architecture of the brain/CPU. I think that the total failure of this style of thinking to produce anything interesting in 50 years proves that this approach is sterile. Minsky laments the fact that graduate students build robots, but this activity exposes students to the challenges of constructing a device that must actually interact with the environment. It is ridiculous to assume that you could design a system capable of intelligent behavior without ever confronting the problems of sensors and actuators. Almost every part of the brain is devoted to processing raw sensory input or generating motor output. One cannot simply design an intelligent system without worrying about sensory input and behavioral output. Lenat's Cyc project has the laudable goal of teaching a machine "common sense" by hard-coding a vast database of simple statements like "Trees cannot walk". This is a totally wrongheaded approach to learning and reasoning, and is typical of old-school, hard AI.
We will only make progress in engineering intelligent, adaptive systems by studying actual examples of intelligent, adaptive systems, namely animals. Neuroscientists and psychologists are beginning to embrace the tools of mathematical modeling and simulation to help explain nervous system structure and function. Computer scientists would do well to similarly embrace the work of experimental neuroscience.
Minsky is a dinosaur.
outside the box and inside the box (Score:3, Interesting)
Imagine what kind of thing you would be without vision, touch, smell, hearing or the ability to move and change your environment. Without these forms of interaction where would human intelligence be?
Seems that a Buddhist philosophical approach is most helpful here, i.e., we are our parts, not more and not less. We are what we are. If you wish to create something that is like a human, you should take an inventory of our parts, figure out how they fit together, and try to find analogous electronics, software and hardware.
Which is precisely what a lot of the robot folks have started doing -- except that most have started a bit smaller and have modeled insects instead, finding that they can model seemingly complex insect behavior with simple algorithms and machines.
Although perhaps the next best step isn't building real robots at all, which can be expensive, error-prone and time-consuming, but building virtual robots that can be placed in virtual environments of our invention, somewhat like a "Matrix" virtual reality with intelligent agents that can learn. This approach is more computer-intensive, since the environment as well as the agent would require large amounts of computing resources; also, the agent would have to perceive the "environment"
Seems that many more forms of human nature could be investigated in this way.
George Heilmeier, Texas Instruments, and AI (Score:3, Informative)
From 1983 through 1991, George Heilmeier was the Chief Technical Officer at Texas Instruments. He pushed TI into massive investments in AI R&D. Some of the best technical people I knew at TI thought the AI stuff was a waste of time, but it was being pushed by Heilmeier and the executives. Marvin Minsky was one of the experts brought in as an AI consultant, and he appeared in various TI propaganda. At the time the Japanese were pushing "fifth generation computing," which included AI, so there was a push to compete with the Japanese. TI developed AI hardware and software and tried to force-fit it into various applications. They claimed various successes applying AI to industry problems, but eventually it all collapsed into a big waste of time and money. Heilmeier left at the end of the collapse.
Today you can find Heilmeier all over the place on various corporate boards and winning various awards for technical excellence. It is interesting that in most of the bios that you can find on the web about Heilmeier, you don't find references to how he led TI down the AI path to a dead end.
Marvin Minsky has no clue (Score:3, Insightful)
Would a computer think that you are intelligent? (Score:3, Interesting)
If you were put inside a little white box where you had to flip millions of switches on and off according to certain simple rules, you would look like an idiot next to a computer. A computer can't walk around and recognize things, and doesn't know what an apple is -- so what? In my opinion, machine intelligence should be focused on making computers able to make themselves better at what they do best. I'm not sure what a super-intelligent computer system would be used for, and I don't think that I would even be able to imagine what would be possible. I would be interested to know what other people think about this idea. Most of the things that I can think of tie back into the "real" world somehow. What would a self-organizing, non-three-dimensionally-oriented intelligence be able to do?
Saying that AI is impossible because computers can't come into "our world" of three dimensions, or understand our literature is kind of intelligence chauvinism.
Maybe they'd have made more progress if... (Score:4, Funny)
Don't listen to 'ol Minsky (Score:3, Interesting)
As an AI researcher and someone who's read Minsky's books and listened to him talk, I can say that he doesn't know what he's talking about. He was big in his time, but things have moved on and he hasn't. He is an old, pessimistic, armchair AI 'researcher' who still thinks AI is easy. He doesn't understand why AI needs to be embodied and situated.
Having said that, I do agree that AI is almost going nowhere (anyone can see that). But I don't believe Minsky understands why.
Those 'stupid little robots' are the best thing to happen to AI -- unfortunately, most AI 'researchers' don't really understand what they're doing. Consequently, 97% of the time and effort purportedly being spent on AI research isn't.
With a few exceptions, the main reason for the 'advances' we're seeing in AI/robotics now, is that algorithms are riding the wave of advances in computing power.
My guess is that you'll see most of the advances in AI coming as more and more 'real scientists' from other disciplines - such as ethology, biology and neurology - get involved in it.
Keep in mind that this is my opinion - shared by an increasing number of people in the field, but still a small minority.
Classic Minsky vs. Brooks (Score:3, Insightful)
This is a classic battle between Minsky and Brooks. Heck, we had the same battle in our labs (not MIT). I believe that the Brooks response is along the lines of "sure you'll take an extra year to graduate with me, but you'll have one hell of a demo tape." I agree with Brooks. I still show people videos of one of my robots years later. I've never shown anyone any of my simulated robot work afterwards.
Re:Will we ever have *real* AI? (Score:5, Informative)
One could argue that our brains are just synapses firing. Each one on its own knows absolutely nothing. However, it's the SYNERGISTIC effects of all the synapses working together that creates our brain, which allows us to reason, etc. (Note: this is without religion getting in the way; I'd personally not go there...)
Steve
Re:Will we ever have *real* AI? (Score:5, Interesting)
Re:Will we ever have *real* AI? (Score:5, Funny)
My parents had no problem producing a brain; four, in fact. Maybe I'll create some myself some day. I could tell you how, but I'd need a signed note from your mom.
Creating brains isn't hard; creating artificial brains is.
Re:Will we ever have *real* AI? (Score:4, Funny)
Re:Will we ever have *real* AI? (Score:3, Funny)
Nah... You read Slashdot!
Re:Will we ever have *real* AI? (Score:3, Insightful)
Re:Will we ever have *real* AI? (Score:4, Insightful)
That's what Minsky is getting at. Few people are working on that problem.
Research talent in universities seems to be striving for business solutions. But IMO, such research should be primarily done by businesses, not AI labs. Universities should create new science.
Check out computational neurobiology (Score:4, Interesting)
Re:Will we ever have *real* AI? (Score:5, Interesting)
Define "real, true intelligence"
> You can try to simulate that, but so far
> simulation consists of what amounts to a
> gazillion 'if' tests
That's what the traditional AI school is doing. Yes, you are correct: it won't go anywhere. On the other hand, spiking neural networks are very promising. Search Google for "liquid state machine". These researchers are making progress nowadays, not Minsky.
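For the curious: liquid state machines are a form of reservoir computing. Here is a toy sketch of the idea (my own, using a rate-based "echo state" reservoir rather than Maass's spiking formulation): a fixed random recurrent network provides rich dynamics, and only a linear readout is ever trained:

```python
import numpy as np

# Toy reservoir computer -- the idea behind liquid state machines, here
# with a rate-based "echo state" reservoir instead of spiking neurons.
# The recurrent network is fixed and random; only the readout is trained.
rng = np.random.default_rng(1)
n = 100
W_in = rng.normal(scale=0.5, size=n)
W = rng.normal(size=(n, n))
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()   # keep spectral radius < 1

def run_reservoir(u):
    """Feed the input sequence u through the fixed reservoir; collect states."""
    x, states = np.zeros(n), []
    for u_t in u:
        x = np.tanh(W @ x + W_in * u_t)          # untrained dynamics
        states.append(x.copy())
    return np.array(states)

# Task: reproduce the input from 5 steps ago (a short-term memory task).
u = rng.uniform(-1, 1, 500)
S, target = run_reservoir(u)[5:], u[:-5]
readout, *_ = np.linalg.lstsq(S, target, rcond=None)  # train readout only
print("mean squared error:", np.mean((S @ readout - target) ** 2))
```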
Re:Will we ever have *real* AI? (Score:3, Interesting)
The progress of AI is uncertain, but
Not so sure (Score:4, Interesting)
Dreyfus's argument is old, and its rebuttals are well-known. Consider that symbolic systems are not limited to context-free predicate logic.
The progress of AI is uncertain, but it is certain that there's no future for symbolic logic AI.
It is not certain for me.
Both connectionist and symbolic approaches may succeed if given enough time. However, I think that the obsession many people here have with neural nets is of the same nature as early aviation enthusiasts' obsession with wing-flapping devices. Certainly you can mimic the mechanics of nature with some effort, but there are usually better ways to do the job.
Re:Will we ever have *real* AI? (Score:5, Insightful)
* One might claim LTP or LTD as some sort of neuronal knowledge. Ok, that's fair, but my point stands if you apply it to the building blocks of neurons. Do ion channels "know"? Do amino acids? It's turtles all the way down.
Re:Will we ever have *real* AI? (Score:3, Interesting)
We act based on external stimuli and based on what we have learned as far as I know.
Unfortunately, we will never fully understand how we are "made" and how we "work".
And without being able to fully introspect ourselves, we will never be able to build a computer which works exactly like a human.
How could you possibly create something to be a replica of something you don't understand?
Cognitive science has made immense progress, but it is still all models and theory.
And as huma
Re:Hrmm (Score:3, Interesting)
It drives me crazy that people are so concerned about possible technologies that they want to "slow down and think about the consequences of xxx".
This is really just unfounded fear. As long as we still don't know whether something is possible, it is not the time to worry about what problems we can conceive that it will bring. Knowledge is more important than worrying about issues that may or may not arise if we are able to do something. It is good to ask "If we cause this atom to split, will it kill us?", but
Don't turn slaves into humans! (Score:3, Interesting)
Re:AI is going wherever it wants (Score:3, Funny)
The 'A' stands for artificial! (Score:4, Funny)
You can't get that much out of most humans!
Re:AL Going Nowhere? (Score:3, Funny)
Re:AI...heh (Score:3, Insightful)
1. consciousness is just an abstraction (like that actually means anything)
2. prove you have a soul (irrelevant)
3. you're just a machine, you dolt! (doesn't explain why he's aware of himself, irrelevant)
4. more along the same vein
Now, I'm not discounting those arguments, just pointing out that they are completely uninformed. What I mean by that is, no one knows anything about souls, consciousness, etc. We understand that our brains are extremely complex information processi
When AI... (Score:3, Interesting)
Such a model is years off, though, AFAIK.
Re:When AI... (Score:3, Insightful)
This is a very difficult and interesting problem. I do not mean to diminish the hard work of any researcher in this area.
However, when (not if) this is understood, I think we will be treating it as a brute force solution. We will not likely be replacing pocket calculators with emulated brains to help us do long division. A computer chess program will probably still beat an emulated brain. The human brain is well adapted to its environmen