Science

AI Going Nowhere? 742

jhigh writes "Marvin Minsky, co-founder of the MIT Artificial Intelligence Laboratory, is displeased with the progress in the development of autonomous intelligent machines. I found this quote more than a little amusing: '"The worst fad has been these stupid little robots," said Minsky. "Graduate students are wasting 3 years of their lives soldering and repairing robots, instead of making them smart. It's really shocking."'"
This discussion has been archived. No new comments can be posted.

  • What about my AIBO? (Score:3, Interesting)

    by khalua ( 468456 ) on Tuesday May 13, 2003 @10:56AM (#5944786) Homepage
    It can pick me out in a crowd, and it can show a number of emotions, such as surprise, anger, and boredom.... yawn.
    • by Lumpy ( 12016 ) on Tuesday May 13, 2003 @11:17AM (#5945001) Homepage
      It can pick me out in a crowd, and it can show a number of emotions, such as surprise, anger, and boredom.... yawn.


      To an extent, yes, it has decent pattern recognition. Can it pick you out from the rear? No. Side? No.

      Can it simulate and fool you into thinking it is showing emotions? Yes. Is it anything but an expensive toy? No.

      The AIBO is amazing, but it hardly does what people think it does. And that is the key with the AIBO: it does a lot of things that fool humans quite well.

      It will get better, but it is hardly near AI material.
    • by MCZapf ( 218870 ) *
      It was still pretty much explicitly programmed to do those things. It's not really bored, it was just programmed to act bored, etc. Even the image recognition is a testament to the intelligence of the programmers, but not really to the AIBO itself.
      • by Tim Macinta ( 1052 ) <twm@alum.mit.edu> on Tuesday May 13, 2003 @12:05PM (#5945538) Homepage
        It was still pretty much explicitly programmed to do those things. It's not really bored, it was just programmed to act bored, etc. Even the image recognition is a testament to the intelligence of the programmers, but not really to the AIBO itself.
        How do you distinguish between something that just acts bored and something that really is bored? An argument could be made that when a real dog seems bored, it is merely acting that way because it is programmed (via its instincts) to act bored. How is that different from the AIBO, apart from the fact that some people know exactly how an AIBO works? A real dog is merely a chemical computer anyway, so in essence the emotions that you perceive in it are programmed as well; we just don't have the documented source code to work with.
        • by Quino ( 613400 ) on Tuesday May 13, 2003 @02:42PM (#5947435)
          You're making the same leap of faith that most people make (and I now think is incorrect): the human (or dog) mind is a biological computer, therefore an IBM with the right software is also a mind in the same sense.

          It doesn't work this way, and yes, there is a difference. Having an outward appearance of intelligence is not enough to show intelligence. Read Searle and Block's discussions on the Chinese Room argument -- it's a fascinating and eye-opening read (I think it was Block that -- quite convincingly, IMHO -- makes the case that most of our intelligence is innately biological, and "strong AI" not even possible with what we know today).

          IMHO one of the problems with AI is that we don't even know what human intelligence is, and until there is a fundamental advance (not technological, but in our understanding of our human/biological mind), it seems to me the most we can hope for are machines that mindlessly ape intelligent behavior, but are not intelligent in any but very superficial ways or by very loose definitions.

          Something that mimics the outward appearance of intelligence is a far cry from what we'll hopefully be capable of in the (probably still distant?) future.
      • by trg83 ( 555416 ) on Tuesday May 13, 2003 @12:05PM (#5945540)
        This may get a little too philosophical, but I'm going to give it a try. What is the key difference between programming and parenting? You explicitly tell your child what they are and are not allowed to do (and sometimes they malfunction/misbehave and do it anyway). You tell them what emotions are appropriate at certain times. At grandma's funeral it is not appropriate to giggle and laugh. It is also not appropriate to look bored, as we are showing respect for the dead. After going through all the disallowed emotions, that leaves a solemn look and maybe some tears.

        What about pattern recognition? How long do parents spend holding up pictures of various animals or various shapes for their children to identify?

        When it gets right down to it, every one of us has been significantly programmed by our parents, teachers, and government. I am not arguing against the system, just saying that's how it is. I don't believe AI as anticipated will ever truly exist, because the degree of creativity and imagination desired exists only in humans, either because of an all-knowing, all-powerful creator or millions of years of mutations.
    • by Mad Man ( 166674 ) on Tuesday May 13, 2003 @12:10PM (#5945613)
      It can pick me out in a crowd, and it can show a number of emotions, such as surprise, anger, and boredom.... yawn.

      My dog can do the same thing.

  • Text of Article (Score:3, Informative)

    by Anonymous Coward on Tuesday May 13, 2003 @10:57AM (#5944794)
    AI Founder Blasts Modern Research
    By Mark Baard

    Story location: http://www.wired.com/news/technology/0,1282,58714,00.html

    02:00 AM May. 13, 2003 PT

    Will we ever make machines that are as smart as ourselves?

    "AI has been brain-dead since the 1970s," said AI guru Marvin Minsky in a recent speech at Boston University. Minsky co-founded the MIT Artificial Intelligence Laboratory in 1959 with John McCarthy.

    Such notions as "water is wet" and "fire is hot" have proved elusive quarry for AI researchers. Minsky accused researchers of giving up on the immense challenge of building a fully autonomous, thinking machine.

    "The last 15 years have been a very exciting time for AI," said Stuart Russell, director of the Center for Intelligent Systems at the University of California at Berkeley, and co-author of an AI textbook, Artificial Intelligence: A Modern Approach.

    Russell, who described Minsky's comments as "surprising and disappointing," said researchers who study learning, vision, robotics and reasoning have made tremendous progress.

    AI systems today detect credit-card fraud by learning from earlier transactions. And computer engineers continue to refine speech recognition systems for PCs and face recognition systems for security applications.

    "We're building systems that detect very subtle patterns in huge amounts of data," said Tom Mitchell, director of the Center for Automated Learning and Discovery at Carnegie Mellon University, and president of the American Association for Artificial Intelligence. "The question is, what is the best research strategy to get (us) from where we are today to an integrated, autonomous intelligent agent?"

    Unfortunately, the strategies most popular among AI researchers in the 1980s have come to a dead end, Minsky said. So-called "expert systems," which emulated human expertise within tightly defined subject areas like law and medicine, could match users' queries to relevant diagnoses, papers and abstracts, yet they could not learn concepts that most children know by the time they are 3 years old.

    "For each different kind of problem," said Minsky, "the construction of expert systems had to start all over again, because they didn't accumulate common-sense knowledge."

    Only one researcher has committed himself to the colossal task of building a comprehensive common-sense reasoning system, according to Minsky. Douglas Lenat, through his Cyc project, has directed the line-by-line entry of more than 1 million rules into a commonsense knowledge base.

    "Cyc knows that trees are usually outdoors, that once people die they stop buying things, and that glasses of liquid should be carried right-side up," reads a blurb on the Cyc website. Cyc can use its vast knowledge base to match natural language queries. A request for "pictures of strong, adventurous people" can connect with a relevant image such as a man climbing a cliff.

    Even as he acknowledged some progress in AI research, Minsky lamented the state of the lab he founded more than 40 years ago.

    "The worst fad has been these stupid little robots," said Minsky. "Graduate students are wasting 3 years of their lives soldering and repairing robots, instead of making them smart. It's really shocking."

    "Marvin may have been leveling his criticism at me," said Rodney Brooks, director of the MIT Artificial Intelligence Lab, who acknowledged that much of the facility's research is robot-centered.

    But Brooks, who invented the automatic vacuum cleaner Roomba, says some advancements in computer vision and other promising forms of machine intelligence are being driven by robotics. The MIT AI Lab, for example, is developing Cog.

    Engineers hope the robot system can become self-aware as they teach it to sense its own physical actions and see a causal relationship. Cog may be able to "learn" how to do things.

    Brooks pointed out that sensor technology has reached a point where it's more sophisticated and
    • Sour Grapes (Score:5, Insightful)

      by JonTurner ( 178845 ) on Tuesday May 13, 2003 @11:17AM (#5944999) Journal
      That's all there is to it. Minsky is just sore that his theories from 30 years ago aren't proving themselves, and the decentralized models being implemented by his rivals at MIT (e.g. Rod Brooks and his graduate students) are demonstrating remarkably sophisticated behaviors and advancing the state of the art.
      • Re:Sour Grapes (Score:5, Insightful)

        by Catbeller ( 118204 ) on Tuesday May 13, 2003 @12:11PM (#5945629) Homepage
        Minsky isn't angry that his AI "theories" aren't panning out. He's angry that AI researchers aren't making an attempt to think up new AI theories that aren't glorified vacuum cleaners.

        He's right. Theoretical work has ground to a halt in the U.S. Universities have succumbed to business-oriented research to make money -- although, given all that cash, it's amazing that tuition keeps skyrocketing.

        Government is now pretty much owned by corporate people, and they aren't meting out grant money to long-haired theory about AI. Grant-driven university research is now pretty much a free source of corporate R&D.

        Minsky's right: AI as science has died, not because it was impossible to improve the theories, but because it wasn't making any money.
        • Minsky's right: AI as science has died, not because it was impossible to improve the theories, but because it wasn't making any money.

          The real money will begin to flow once humankind stops being scared of directly integrating humans into computer networks.

          I am not sure when, but ultimately all keyboards, mice and screens will take their places in museums. People will communicate with computers and each other by connecting computers directly to their brain. Thus, the solid knowledge of natural i

  • by Ruzty ( 46204 ) <rustyNO@SPAMmraz.org> on Tuesday May 13, 2003 @10:58AM (#5944815) Journal
    He has a complete disregard for the question of where the AI engine will run. If an AI is to be of any more use than a curiosity, then those "little autonomous robots" must function in a viable manner so that the AI has something to do when it comes to "life".

    I understand his frustration with the general pace of progress. But those grad students are building a strong foundation for their later work, which may very well meet the goals he is espousing. No need to have design flaws in implementation down the road because the engineer wasn't properly educated in physical design as well as logical design.

    -Rusty
    • by Anonymous Coward on Tuesday May 13, 2003 @11:10AM (#5944932)
      Come off it, stop making excuses. Robotics and AI are two entirely different branches. AI is almost pure math and computing, and robotics is an engineering discipline. Why are AI researchers building little robots?

      Building any form of AI system is not easy, but copping out of it by building toys is not the answer. We already have platforms for AI; they're called line terminals. Things like pattern matching do not require a fully autonomous robot, after all.

      Minsky is right; what's new to come out of actual AI research in the last 30 years?
      • by Des Herriott ( 6508 ) on Tuesday May 13, 2003 @11:19AM (#5945026)
        "True AI" might require the sort of interaction with the environment that we're used to through our own senses, in which case building a robotic shell for an artificially intelligent entity could be a necessity.

        If a human (or any animal) were left to grow with no senses and no method of communication (or only the most basic input/output, analogous to your line terminal), what sort of intelligence would develop? Probably nothing very coherent.

        BTW, AI is most certainly not pure math.
      • MOD PARENT UP (Score:5, Insightful)

        by CompVisGuy ( 587118 ) on Tuesday May 13, 2003 @11:29AM (#5945138)
        Please mod the parent up.

        I am an AI researcher and the parent poster is speaking truthfully.

        The main challenges in AI at the moment do not concern building the physical robots -- e.g. a piece of kit on wheels with IR sensors or such things.

        The main challenges in AI concern applying some very complicated math to solve problems like pattern recognition, density estimation and other forms of machine learning.

        It seems to me that a large number of AI PhD students spend their lives tinkering with the mechanics and electronics of the robots that will ultimately be used to test their algorithms. This is wasted time; a good electronics graduate should be able to do the tinkering, and it shouldn't require a prospective AI PhD student to do it.

        I can see the point in the PhD student learning a little about the hardware that they want to run their algorithms on (so that they know the limitations and common problems with real hardware), but they should not spend all their time doing that and wasting the opportunity to spend their time contributing to their field (i.e. AI, not mechanics or electronics).

        That said, many AI labs do not have the funding to be able to pay full time hardware technicians, so in many cases the PhD student *has* to do the tinkering :-(
        • Re:MOD PARENT UP (Score:4, Insightful)

          by frank_adrian314159 ( 469671 ) on Tuesday May 13, 2003 @12:24PM (#5945786) Homepage
          so in many cases the PhD student *has* to do the tinkering

          In most cases, the hardware and its limitations can be simulated. The only reason that most robotic AI projects are embedded in hardware is because it makes good eye candy for the science press, funders, etc. If you have a good simulation of the environment and the platform, you no longer need to build the hardware for AI research to proceed.

          Also, why does one need to build new platforms each time a project ensues? Many robotic components could be reused so that only processor boards, motive actuators, or sensors would need to be updated. The reuse of firmware would cut down on the amount of programming time, as well. I think a good MS thesis would be to develop a kind of common robotic architecture along with a simulation testbed. This would allow the AI researchers to get back to work and only do last minute tinkering.
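
          A sketch of what the core of such a common architecture plus simulation testbed might look like (hypothetical names and interfaces, assuming Python; a real design would model sensor noise, latency, and failures):

            # Hypothetical common robot interface: AI code sees only this API,
            # so it can run unchanged on real hardware or in a simulator.
            from abc import ABC, abstractmethod

            class RobotPlatform(ABC):
                @abstractmethod
                def read_sensors(self) -> dict: ...
                @abstractmethod
                def drive(self, left: float, right: float) -> None: ...

            class SimulatedPlatform(RobotPlatform):
                def __init__(self):
                    self.x = 0.0
                def read_sensors(self):
                    return {"x": self.x, "bump": False}  # a real simulator adds noise
                def drive(self, left, right):
                    self.x += (left + right) / 2.0  # crude kinematics, for illustration

            def controller(robot: RobotPlatform):
                # The AI under test never touches hardware directly.
                for _ in range(10):
                    robot.drive(1.0, 1.0)
                return robot.read_sensors()

            print(controller(SimulatedPlatform()))  # {'x': 10.0, 'bump': False}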

          • Brilliant idea (Score:3, Informative)

            by kiwimate ( 458274 )
            In most cases, the hardware and its limitations can be simulated. The only reason that most robotic AI projects are embedded in hardware is because it makes good eye candy for the science press, funders, etc. If you have a good simulation of the environment and the platform, you no longer need to build the hardware for AI research to proceed.

            Now that makes a great deal of sense. When I was at university, I did all of my VAX work through either a terminal session or, more commonly, an emulator. It would s
    • I think the point is that the engineering problems have all been solved by someone already - or, at least, that there has been some progress towards solving them, while the AI science has (allegedly) been in a stall for some time. So the students are working their asses off solving problems that have already been solved, time which would better be used in solving problems to which no one has an answer yet.
      • by PhilHibbs ( 4537 ) <snarks@gmail.com> on Tuesday May 13, 2003 @11:22AM (#5945060) Journal
        What they need, then, is for an engineering student to do their master's dissertation on creating a generic physical framework for AI systems, or a computing student to do theirs on a generic simulation environment for virtual AI 'bots. Then this can be re-used by AI students in subsequent years. Alternatively, each year they could team up engineering students working on the physical robots with computing students working on the AI systems; that way both departments are working on their core speciality.
  • About Minsky... (Score:5, Insightful)

    by Anonymous Coward on Tuesday May 13, 2003 @10:59AM (#5944817)
    Yes, the man is quite brilliant, and possibly the most important voice in the field. That being said, he's also a self-important jerk. Intelligent Systems (what people in the field call AI) aren't where *he* thinks they should be, and he regularly complains about it.
  • by SpanishInquisition ( 127269 ) on Tuesday May 13, 2003 @10:59AM (#5944822) Homepage Journal
    Does it bother you that AI is going nowhere?
  • by gricholson75 ( 563000 ) * on Tuesday May 13, 2003 @10:59AM (#5944823) Homepage
    .. it was a really bad movie.
  • by Anonymous Coward on Tuesday May 13, 2003 @11:00AM (#5944828)

    I don't see NON-ARTIFICIAL intelligence progressing a whole hell of a lot either...
  • by jdoeii ( 468503 ) on Tuesday May 13, 2003 @11:00AM (#5944833)
    It's not the AI which is going nowhere. It's the traditional approaches to AI, such as Minsky's symbolic logic, which are not going anywhere. Search Google for Henry Markram, Maass, Tsodyks. Their research seems very promising.
    • by sohp ( 22984 ) <.moc.oi. .ta. .notwens.> on Tuesday May 13, 2003 @11:11AM (#5944937) Homepage
      So true. Minsky's Good Old-Fashioned AI (GOFAI) has been a dead field since the 70s, after they figured out that getting a computer to move blocks around in an idealized simple world was not a small first step, and Eliza showed them how easy it was to conflate "intelligent" with "clever". The "successes" they had towards AI were, as one author has written, like climbing to the top of a tall tree and claiming you've made progress getting to the moon.

      Now Minsky, never wanting to admit his life's work has been a dead end, comes out saying that it's all these other researchers working in other directions who are at fault for there being no progress. I imagine he believes that if only they'd all climb the tree with him, the trip to the moon could really start.
      • by bigjocker ( 113512 ) * on Tuesday May 13, 2003 @12:12PM (#5945649) Homepage
        It's not a dead end. He is complaining that there seems to be no interest in studying the big picture. We are focusing on solving specific problems and creating "smart" systems that operate on a set of rules, maybe "learning" on the way to enhance the predefined rules originally installed by the creators.

        But real AI comes when you create a 'stupid' system that is able to become smart through learning and training. He feels disappointed because nobody (or almost nobody) is focusing in this direction.

        You have smart toys à la AIBO, and smart systems à la Eliza, and a lot of people are working towards creating smarter toys and smarter systems, but the real breakthrough will come when somebody manages to create the dumbest system possible.
      • by rpk ( 9273 ) on Tuesday May 13, 2003 @12:39PM (#5945953)
        Yep, logic-based AI is definitely at a dead end. Minsky won't admit it. A lot of AI researchers at MIT went to the Media Lab when they saw the writing on the wall.

        Robots are not a bad thing to work on if other kinds of AI are going to have a chance, because a more holistic kind of AI would recognize that intelligence and cognition first emerged as a function of having a physical body. On the other hand, it's just robotics; it's not AI itself.

        Also, AI was good for the hackers who supported its development on computer workstations. Systems like the Lisp Machine still compare very well to current languages and tools.
    • by Lemmy Caution ( 8378 ) on Tuesday May 13, 2003 @11:11AM (#5944942) Homepage
      I see Minsky's lament as a sideways admission of the correctness of the west-coast, connectionist paradigm. It's a shame that he is still sabotaging useful lines of research at MIT: investigating robotics is built around the insight that our own "ontological engines" are themselves derived from our sensorimotor systems.
    • I very much enjoy the works of Markram and Tsodyks. What they mainly analyze is how two nerve cells can interact with each other. They showed how they change their connection weights, and how the timing of spikes (nerve impulses) affects how neurons connect to each other and how they transmit information.

      While these studies tell us a lot about the underlying biology, they do not tell us what these modes of information transmission are used for. For years it has been known that synapses have complex nonlinear properties. Biology pretty much does not constrain what functions neurons compute.

      That's why I do not believe that such studies will bring us nearer to real AI anytime soon. The algorithms coming from these systems are severely underconstrained. A lot of modelling has followed the pioneering works of Markram and Tsodyks, one of them being Maass. All these algorithms are very fascinating and might yield insight into the functioning of the nervous system.

      The algorithms, however, are light-years from being applicable to real-world problems. The field of AI is old and in some sense quite mature. None of the "biologically inspired" algorithms today can compete with state-of-the-art machine learning techniques.
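
      For the curious, the best-known rule from that spike-timing literature fits in a few lines. Below is the standard textbook exponential STDP rule, not Markram and Tsodyks's full synapse model (a Python sketch):

        import math

        # Spike-timing-dependent plasticity (STDP), textbook exponential form:
        # if the presynaptic spike precedes the postsynaptic spike, strengthen
        # the synapse; if it follows, weaken it. Times are in milliseconds.
        def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
            dt = t_post - t_pre
            if dt > 0:                            # pre before post: potentiation
                return a_plus * math.exp(-dt / tau)
            return -a_minus * math.exp(dt / tau)  # post before pre: depression

        print(stdp_dw(10.0, 15.0))  # pre leads by 5 ms -> weight increases
        print(stdp_dw(15.0, 10.0))  # pre lags by 5 ms  -> weight decreases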

  • by dfn5 ( 524972 ) on Tuesday May 13, 2003 @11:03AM (#5944857) Journal
    Graduate students are wasting 3 years of their lives soldering and repairing robots, instead of making them smart.

    So hire and pay them money so they do real research instead of having fun. Otherwise quit your bitchin'. I personally think building stupid robots is cool.

  • Well... (Score:3, Interesting)

    by Dark Lord Seth ( 584963 ) on Tuesday May 13, 2003 @11:04AM (#5944867) Journal
    "Cyc knows that trees are usually outdoors, that once people die they stop buying things, and that glasses of liquid should be carried right-side up," reads a blurb on the Cyc website. Cyc can use its vast knowledge base to match natural language queries. A request for "pictures of strong, adventurous people" can connect with a relevant image such as a man climbing a cliff.

    I'd consider that pretty much intelligent, compared to some people I know. Then again, some people I know can hardly be described as sentient, let alone intelligent.

  • by Ogrez ( 546269 ) on Tuesday May 13, 2003 @11:04AM (#5944874)
    Humans measure intelligence by gauging replies to questions that have quantified answers. Giving an advanced computer an IQ test is simply a matter of recalling the appropriate information from memory to answer. The true form of AI isn't about intelligence, it's about reason. But how do you teach a computer to respond with an answer to a question that the computer has never encountered before? When we build a machine that can answer a question based on incomplete input, we will have made the first step in creating a machine that can "think".

    reply to MYCROFTXXX@lunaauthority.com
  • Biology First (Score:5, Insightful)

    by SuperMario666 ( 588666 ) on Tuesday May 13, 2003 @11:05AM (#5944879)
    How about we more thoroughly study and understand how human intelligence operates before we even presume to design something that imitates or rivals it in depth and complexity.
  • disappointing (Score:5, Insightful)

    by Gingko ( 195226 ) on Tuesday May 13, 2003 @11:14AM (#5944969)
    "As soon as we solve a problem," said Pollack, "instead of looking at the solution as AI, we come to view it as just another computer system."

    This is the most interesting comment to me. Because we understand the nature of the process that produces supposedly 'intelligent' results (and we don't understand the same process in ourselves), we perhaps rightly view the resulting system as just another application.

    Seems like Minsky is throwing all his toys out of the pram because he doesn't want to admit what everyone else has been saying for a while: that whether a computer can think is at best an astonishingly difficult question to answer and at worst meaningless. I'm a grad student who's just spent a year looking at computational linguistics and semantics (amongst other things), and the most debilitating restriction on the semantic side of things is the problem of so-called 'AI-completeness', which essentially says that if you solve this problem you have an, externally at least, thinking computer. Really simple things like anaphora resolution are AI-complete in the general case. If we could have solved this problem by now, I think it's fair to say we would have done so, given its massive importance. However, we know that the brute-force case is ridiculously intractable, and we can't figure out how to do it any more cleverly. Roger Penrose argues that this is due to the fundamental Turing-style restrictions that we place on our notion of computing. Until we get a paradigm shift at that level, we're likely never to solve the general case.

    And I'm sure that Minsky is aware that attempts to solve constrained domain inference and understanding have been taking place for a good long time now. I just don't see why he's so upset that the field of 'AI' (which is a nebulous catch-all term at best) has shifted its focus to things that we stand a cat in hell's chance of solving, and that have important theoretical and practical applications (viz. machine learning). Replicating human thought is not the be-all and end-all, and you can argue that it's not even that useful a problem.

    Robots, though, I agree with. Can't stand the critters ;)

    Henry
  • The Cyc project (Score:5, Informative)

    by pauljlucas ( 529435 ) on Tuesday May 13, 2003 @11:16AM (#5944994) Homepage Journal
    Although mentioned in a (lone) paragraph in the article, I don't know why we haven't heard more about the Cyc [cyc.com] project. Lenat's premise that you can't have intelligence without knowing the millions of "obvious" things about the world, a.k.a. "common sense", seems intuitively right.
    • Re:The Cyc project (Score:4, Interesting)

      by Alomex ( 148003 ) on Tuesday May 13, 2003 @01:15PM (#5946347) Homepage

      This reminds me of a quote from French mathematician Henri Poincaré: "just as houses are made out of stones, so is science made out of facts; and just as a pile of stones is not a house, a collection of facts is not necessarily science."

      Applied to the Cyc project: a collection of facts is not necessarily intelligence.

  • by Otter ( 3800 ) on Tuesday May 13, 2003 @11:17AM (#5945004) Journal
    Graduate students are wasting 3 years of their lives soldering and repairing robots, instead of making them smart. It's really shocking.

    What? Grad students are doing tedious, repetitive, mindless labor instead of making glamorous, thrilling, world-changing breakthroughs? This is an outrage!

    Thank heaven I have a Ph.D. and got to spend this morning in the thrilling activity of drawing blood from 30 angry mice. I'm hoping to have the urine smell washed off before lunch.

  • by CowboyRobot ( 671517 ) on Tuesday May 13, 2003 @11:19AM (#5945021) Homepage
    Ask some 3-year-old kids which way is up, and they will all know, but they won't be able to define it. Yet, since computers don't have bodies, they don't have anything like the semicircular canals that we have, which act as gyroscopes and give us an intuitive, non-intellectual sense of which way is up.
    Trying to program intelligence purely in software puts the researcher at a disadvantage, since even the most fundamental rules and attributes of things (fire is hot, water is wet) have to be explicitly entered as constants.
    Once robotics advances to the point where mobility, vision, and speech recognition can be taken for granted, then AI can be programmed as an add-on.
    Body first, mind second - That's how animals evolved on this planet, and it's how, I believe, Rodney Brooks approaches this field.
    • The thing is, robots can "easily" be simulated. There are a lot of mechanical complexities that can be ignored if you simply let your AIs interact with a simulated environment instead of a real one.

      "Body first, mind second" sounds nice, but without reproduction and a mechanism for evolution you're not doing anything but creating an environment for your AI to interact with - you're not creating the pressures that caused evolution of intelligence in nature. So why go through the trouble of mechanics when you c

  • by duffbeer703 ( 177751 ) on Tuesday May 13, 2003 @11:19AM (#5945033)
    We do not understand how to control (or whether it is even ethical to control) the billions of autonomous intelligent agents roaming this planet... so why should creating a whole new class of intelligent automata be a priority?

    AI today has nothing to do with intelligence. It's all basically rule-based procedural programming. While this allows us to make some really neat applications like automatic vacuum cleaners and pool scrubbers, it has nothing to do with "intelligence".

    The human mind is not rule-based -- we impose a framework of rules to allow everyone to live together in relative harmony. The core of our being -- how our mind actually works -- remains an absolute mystery.
    • The human mind is not rule-based -- we impose a framework of rules to allow everyone to live together in relative harmony.

      So, when you see a cliff, you blindly walk off it? After all, there are no rules disallowing it, and it's not clear that anyone else's "relative harmony" would be disturbed by such a thing.

      Humans have many rules that they create and live by themselves -- many having to do with self-preservation, self-actualization, and motivation -- that have little to do with your simplistic explanation.

  • Pot, meet Kettle (Score:5, Insightful)

    by ArghBlarg ( 79067 ) on Tuesday May 13, 2003 @11:21AM (#5945049) Homepage
    "Graduate students are wasting 3 years of their lives soldering and repairing robots, instead of making them smart. It's really shocking."

    Yeah, much more shocking than the -- decades -- he (and others in the 'hard AI' camp) have been spending? They've made oh-so-much more progress, haven't they?

    Rodney Brooks made more progress with his robots in the early nineties than the whole hard AI camp did in 3 decades. I remember seeing a documentary once comparing this huge robot which used a traditional procedural program to navigate through a boulder-strewn field. It took about 3 HOURS to decide where to put its foot next. Meanwhile, little subsumption architecture-based robots were crawling around like ants, in real time. (Oh, and some of them had to learn to walk from first principles every time they were turned on -- it only took about half an hour!) That's the most damning evidence of the failure of hard AI I can think of.

    As others here have said, what good is a brain until we get a useful BODY working? Maneuverability and acute senses are a must before an artificial intelligence can do anything useful, or learn from its environment effectively.
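
    For what it's worth, the core control idea behind those subsumption robots is tiny -- layered reflexes, no world model. A toy Python sketch (not Brooks's actual code; the behavior names are made up):

      # Toy subsumption-style controller: higher-priority reflexes suppress
      # lower ones. No planning, no memory -- sensing tightly coupled to action.
      def avoid(sensors):
          if sensors["bump"]:
              return "reverse-and-turn"
          return None  # defer to lower layers

      def seek_light(sensors):
          if sensors["light_left"] > sensors["light_right"]:
              return "turn-left"
          return "turn-right"

      LAYERS = [avoid, seek_light]  # highest priority first

      def act(sensors):
          for behavior in LAYERS:
              action = behavior(sensors)
              if action is not None:
                  return action

      print(act({"bump": False, "light_left": 0.9, "light_right": 0.2}))  # turn-left
      print(act({"bump": True,  "light_left": 0.9, "light_right": 0.2}))  # reverse-and-turn
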
  • Great Minsky Quote (Score:3, Insightful)

    by dmorin ( 25609 ) <dmorin@@@gmail...com> on Tuesday May 13, 2003 @11:22AM (#5945056) Homepage Journal
    Paraphrased, but the spirit is there: "And when a robot actually does succeed and walk down the hall and through the door, or whatever it's supposed to do, you've learned absolutely nothing because it may not do it again the next time. This is why mechanical engineers love their videos so much. With video they can say 'See, it really worked once!'"
  • by Wellspring ( 111524 ) on Tuesday May 13, 2003 @11:23AM (#5945069)
    If Marvin wants to know why AI hasn't made major strides in the past 30 years (and, by the way, I would say that it has) he should look no further than his own bullying, arrogant approach.

    Promising subfields like perceptrons were intentionally quashed by him... he went out of his way to strangle investment and research in areas he considered to be a dead end. We're not literature majors: we can't all just say the same thing at a party over wine and cheese and call it progress.

    Even bad ideas, when well explored, can give new meaning and better approaches to a field. And since this is research, no one knows the correct answer: even a dumb-seeming idea may turn out to be the right one -- or give us clues about features the right answer needs to have.

    Of course we've had major advances in AI. One of the challenges of AI, as the article points out, is that once something is well understood, it is defined as being outside the AI field. Computer vision, face recognition, voice and speech recognition. Conversation engines like SmarterChild. No, this isn't HAL, but they are good, positive steps in the right direction.
    • Promising subfields like perceptrons were intentionally quashed by him...

      His take on perceptrons was valid and well-founded. They indeed suffered from the linear separability problem.
        The original poster probably meant neural networks more than perceptrons. By the time his paper was published (1969), neural networks were far more advanced than a simple perceptron, and had easily overcome the linear separation problem. Some people claim he, along with Papert, engineered this paper to get a juicy DARPA grant that was just about to be assigned. His paper effectively killed research in this area for almost 20 years.

        Minsky has always been a bit of a weasel and knows very well how to pull the st

    • Speech recognition did not come out of AI initiatives. After expensive and fruitless attempts to apply AI techniques to speech recognition, researchers with statistical and signal processing backgrounds made substantial progress on speech recognition during the 1990's. The core search algorithm came from ones that modems use to extract digital signals from noisy signals.

      The speech recognition community also investigated techniques using neural networks, although they did not produce a clear vin o

  • Old guard moving out (Score:5, Interesting)

    by CmdrSanity ( 531251 ) on Tuesday May 13, 2003 @11:25AM (#5945096) Homepage
    I took Minsky's class last year, and let me tell you, the article couldn't print 75% of the irate stuff he has to say about AI, MIT, and life in general. We once spent an hour-long class session listening to Minsky rant about modern science fiction and random things he didn't like about his PowerBook. In fact, most of his classes were extended rants about something or other (you zealots will be happy to know that he, too, hates Microsoft).

    He comes across as affable but bitter. I found it strange that though he continually complains about the leadership of the AI lab, he and his protege Winston were in control of it for some ~30 years without making any groundbreaking progress. In fact, Minsky's latest work, "The Emotion Machine", is simply a retread of his decades-old "Society of Mind." I suspect that now that Brooks and the new guard are moving in, the old guard is looking for someone to blame its lack of results on.
  • Minsky + Brooks (Score:4, Interesting)

    by Bob Hearn ( 61879 ) on Tuesday May 13, 2003 @11:35AM (#5945199) Homepage
    Here's some perspective from an MIT AI lab grad student who's been inspired by both Minsky and Brooks. (Minsky is on my Ph.D. committee.)

    "AI has been brain-dead since the 1970s."

    I agree, unfortunately. At least, what was traditionally meant by "AI" has been brain-dead. There is very little focus in the field today on human-like intelligence per se. There is a lot of great work being done that has immediate, practical uses. But whether much of it is helping us toward the original long-term goal is more questionable. Most researchers long ago simply decided that "real AI" was too hard, and started doing work they could get funded. I would say that "AI" has been effectively redefined over the past 20 years.

    "The worst fad has been these stupid little robots."

    Minsky's attitude towards the direction the MIT AI lab has taken (Rod Brooks's robots) is well-known. And I agree that spending years soldering robots together can certainly take time away from AI research. But personally, I find a lot of great ideas in Rod's work, and I've used these ideas as well as Marvin's in my own work. Most importantly, unlike most of the rest of the AI world, Rod *is*, in the long run, shooting toward human-level AI.

    Curiously, just last month I gave a talk at MIT, titled "Putting Minsky and Brooks Together". (Rod attended, but unfortunately Marvin couldn't make it.) The talk slides are at

    http://www.swiss.ai.mit.edu/~bob/dangerous.pdf [mit.edu].

    In particular, I shoot down some common misperceptions about Minsky, including that he is focused solely on logical, symbolic AI. Anyone who has read "The Society of Mind" will realize what great strides Minsky-style AI has made since the early days. I also show what seem like some surprising connections to Brooks's work.

    - Bob Hearn
  • by Helpadingoatemybaby ( 629248 ) on Tuesday May 13, 2003 @11:40AM (#5945240)
    Marvin Minsky has done more damage to progress in AI over the last few decades than any other person.

    Before you yell flamebait or troll, let me explain.

    I have been following the progress of various AI technologies, including neural nets and adaptive logic networks, for many many years now. Years ago, perceptrons were first developed, and it was shown that they could learn simple patterns. Perceptrons were basically two layers of software-simulated neurons. They worked, and researchers were fascinated and worked on them regularly.

    Minsky, being the "highly regarded" "leader" in AI, wrote a paper that proved that these perceptrons could never learn more complicated patterns, and threw a bunch of math at the reader. So people stopped. After all, there was a mathematical proof that perceptrons weren't going anywhere. Research skidded to a halt for decades because of Minsky.

    Of course, then someone developed the (gasp!) THREE-layer perceptron/neural net, and sure enough, with the right formula it could learn much more complicated tasks.
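
    The classic demonstration is XOR: it is not linearly separable, so a single-layer perceptron can never learn it, but one hidden layer trained with backpropagation learns it easily. A minimal sketch (assuming numpy is available):

      import numpy as np

      # XOR is not linearly separable: a single-layer perceptron must fail,
      # but a net with one hidden layer trained by backprop learns it.
      rng = np.random.default_rng(0)
      X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
      y = np.array([[0], [1], [1], [0]], dtype=float)

      W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)  # hidden layer
      W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)  # output layer
      sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

      for _ in range(10000):
          h = sigmoid(X @ W1 + b1)             # forward pass
          out = sigmoid(h @ W2 + b2)
          d_out = (out - y) * out * (1 - out)  # backprop through the sigmoids
          d_h = (d_out @ W2.T) * h * (1 - h)
          W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(0)
          W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(0)

      print(out.round(2).ravel())  # approaches [0, 1, 1, 0]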

    Minsky, in my opinion, does this regularly. The problem is that he has a reputation in the industry as being a leader (I'm not sure why).

    He's already lost us two or three decades of research because of his "leadership" -- I am terrified that he might cost us more development into the future.

    Where could we have been if Minsky wasn't always going around half-cocked, screaming that he is right? "Robots are useless!" is history repeating itself and him trying to get more press. Keep developing, guys, just ignore the peanut gallery. There's always someone who says it can't work (ahem, Minsky) -- it can and it will.

    • Critical? (Score:5, Insightful)

      by nuggz ( 69912 ) on Tuesday May 13, 2003 @12:32PM (#5945876) Homepage
      So Minsky has an opinion, he expresses it, he provides data, calculations and evidence to support it.

      People just accept it, and progress is delayed.

      Why is it his fault that there are so many followers? If anything is to be blamed, it is that these researchers just blindly follow whatever he's saying rather than take a good critical look at what is going on.

      If his math and theory "proved" that an area of AI was a dead end, and it wasn't, then his math/theory was wrong. It is a sad state when nobody dares challenge the status quo.
  • I wonder what... (Score:3, Insightful)

    by sielwolf ( 246764 ) on Tuesday May 13, 2003 @11:41AM (#5945255) Homepage Journal
    Rodney Brooks (head of MIT's AI lab) has to say about this since he is the genesis of the "cheap, fast, and out of control" school of AI that Minsky is deriding here. But since rantage like this from Minsky isn't new, I bet Brooks takes it in stride.

    About 20 years ago AI was going down the craperroo until folks like Brooks decided that the AI field would be better served by moving it from the more theoretical GOFAI method to a more applicable style. Revitalized everything.
    • Re:I wonder what... (Score:3, Interesting)

      by studboy ( 64792 )
      "Graduate students are wasting 3 years of their lives soldering and repairing robots, instead of making them smart. It's really shocking."'

      Rodney Brooks (who's The Man) said something like "a [working] robot is worth a thousand papers." Instead of a top-down view, subsumption architecture robots have a tight connection to sensing and action, but often no memory. One such robot was able to search out, find and grab empty Coke cans, then take them to the trash!

      (semiquote from Steven Levy's "Artificial Li
  • by Jerk City Troll ( 661616 ) on Tuesday May 13, 2003 @11:44AM (#5945279) Homepage
    People who perform illusions and escape tricks have been doing things mostly the same way for decades. Magic tricks may change slightly, but all the basic principles and tricks are the same. There's no real evolution, just adaptation to please the crowd.

    Now, with that in mind, let's look at artificial intelligence. AI has always been about trying to convince an audience that a machine is thinking. This is demonstrated by the very existence of the Turing test and many products (such as the AIBO, Furby, etc.) that try to mimic emotions. If the audience is entertained, amused, or convinced, the AI is considered good. Bad AI is when the audience can see right through it.

    Artificial intelligence is magic. It's a trick. It's an illusion.

    It is no surprise, then, that AI hasn't really advanced. The trades of showmen have been practically unchanged for hundreds of years. Razzle-dazzling an audience may involve technological advances, but the craft remains unchanged. Even the cases where "artificial intelligence" is used to aid in medical diagnosis ("expert systems") or manufacturing are really only following man-made logical structures. The computers aren't thinking, they're only doing what they're told to do, even if indirectly. The end result is impressed people who think the machine is smart.

    Of course, you don't have to take my word for it. If you want to see how badly AI is going nowhere, I highly recommend reading The Cult of Information by Theodore Roszak [lutterworth.com]. While his focus is not on the fallacy of AI, it covers it in context with the much broader disillusionment with computers in society.

    Now, what does AI need in order to progress? Probably AI creating other AI. Something with a deeper embodiment of evolution. As long as it's man-made, it will never be intelligent, just following a routine. Of course, I am going to stop right here... I am not qualified to offer a solution to these obstacles.
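
    The original Eliza makes the point well: a handful of match-and-reflect string rules was enough to convince some users they were understood. A stripped-down sketch of the trick (not Weizenbaum's actual rule set):

      import re

      # Eliza-style pattern matching: no understanding anywhere, just string
      # surgery that reflects the user's words back as a question.
      RULES = [
          (r"I am (.*)",   "Why do you say you are {0}?"),
          (r"I feel (.*)", "How long have you felt {0}?"),
          (r"(.*)",        "Please tell me more."),
      ]

      def respond(utterance):
          for pattern, template in RULES:
              m = re.fullmatch(pattern, utterance.strip(), re.IGNORECASE)
              if m:
                  return template.format(*m.groups())

      print(respond("I am worried about AI"))  # Why do you say you are worried about AI?
      print(respond("I feel fine"))            # How long have you felt fine?
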
  • by Snot Locker ( 252477 ) * on Tuesday May 13, 2003 @11:45AM (#5945299)
    They forgot to add this quote from Minsky:

    "How the hell am I ever going to be able to download my brain into these damn little robots if they don't hurry up and make them smarter? I running out of time, dammit!!!"

  • by tenzig_112 ( 213387 ) on Tuesday May 13, 2003 @11:57AM (#5945451) Homepage
    It's simple hateful bigotry. Look at the way sentient machines are treated in the anti-AI media.
    • Autonomous robots may appear benign, but it won't last past the second reel. Even Robbie went bad.
    • Even when AIs are good for a lengthy period, this is only an attempt to make the audience sleepy in order to somehow forward their evil agenda.
    • When the chips are down and a basic task is required, the AI breaks down and gets all huffy about it [see: "I can't do that, Dave."].
    • When the Terminator came back and was helpful, he was noted as "one of the good ones" -- a hallmark of a bigoted mindset.



    People justify their robophobia by pointing to these fictional examples, but if recent murder statistics are to be believed, the score is a bit lopsided.


    Humans: 1,453,242,122 Robots: 0


    This kind of prejudicial attitude must end.

  • by panda ( 10044 ) on Tuesday May 13, 2003 @11:59AM (#5945476) Homepage Journal
    > Cyc knows that trees are usually outdoors, that once people die they stop buying things...

    Hey, that thing is already smarter than the companies that continue to send junk mail to my grandfather, who has been dead for 22 years now. Maybe they should get that software to manage their mailing lists?
  • by Ella the Cat ( 133841 ) on Tuesday May 13, 2003 @12:09PM (#5945602) Homepage Journal

    "Human-Level AI's Killer Application: Interactive Computer Games", John E. Laird and Michael van Lent, AI Magazine (American Association for Artificial Intelligence), Summer 2001, pp. 15-25.

    My summary of the above: the AI in games might not be too hot (some would dispute that with the academics, but let it go), but game environments themselves are complex enough to pose a challenge for state-of-the-art AI researchers.

  • by 5n3ak3rp1mp ( 305814 ) on Tuesday May 13, 2003 @12:18PM (#5945713) Homepage
    (Bits On Our Mind, an exhibition of some undergrad and graduate computer science work) ...and I headed STRAIGHT for the nematode booth. You see, I had heard that some clever Cornellian had created a simulation of the entire neural network of a nematode. The way I saw it, there was nothing else there that could possibly be more interesting than that.

    So I found myself standing in front of a computer screen. It was a worm swimming through water! In 3D! In real time! After I pushed my jaw shut, I began to ask the genius student some questions...

    "Is that real-time?" "Well, actually, no, that is a 10 second looping clip that took a week to calculate."

    "Well, I see a neural map there. Is that complete?" "Well, actually, no, that is a simplified version of the real nematode nervous system, on the order of about 1 simulated neuron to 10 actual neurons."

    "So you simulate neurons! That's awesome. Let's see the code." (He proceeds to flip through 4-5 pages of very sophisticated-looking mathematical equations to describe the behavior of ONE neuron.)

    What a let-down! No wonder Minsky is pissed, real AI is HARD! :P
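
    For a sense of scale: even the simplest standard spiking-neuron model, leaky integrate-and-fire, is a differential equation per cell, and it is far cruder than the multi-page biophysical models that student was flipping through. A textbook sketch in Python:

      # Leaky integrate-and-fire neuron, Euler integration. Detailed biophysical
      # models (e.g. Hodgkin-Huxley) add several coupled equations on top of this.
      def simulate(i_input=1.8, dt=0.1, steps=1000,
                   tau=10.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
          v, spike_times = v_rest, []
          for step in range(steps):
              v += dt * (-(v - v_rest) + i_input) / tau  # dv/dt = (-(v - v_rest) + I) / tau
              if v >= v_thresh:                          # threshold crossed: emit a spike
                  spike_times.append(step * dt)
                  v = v_reset
          return spike_times

      print(len(simulate()), "spikes in 100 ms")
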
  • by YllabianBitPipe ( 647462 ) on Tuesday May 13, 2003 @12:21PM (#5945748)
    We will soon have hardware that has the number of connections or processing power of a human brain. The problem is nobody's come up with the software to run on it. In humans, this is what makes the brain more than a big organ ... the "soul" if you are religiously inclined. Maybe a human soul can be reduced to nothing more than a program with an enormous propensity to learn and adapt over years of training / habituation ... say from the years 0 to 18.
    • No machine, algorithm or other man-made item has anything close to the processing power of a human brain, not now or in the foreseeable future. The idea of AI has never been about software, as the concept is quite simple even in practice. The issue has always been about hardware; the hardware gets faster but it's still nowhere near fast enough to compete with anything other than a small toddler; feed it too much information and it becomes slower, in fact it starts to retard after a certain stage, which mak
  • by elmegil ( 12001 ) on Tuesday May 13, 2003 @12:33PM (#5945881) Homepage Journal
    If there were actually interesting work in AI that didn't involve soldering little robots together, there'd be people doing it. But there isn't, really, much that's generally appealing in the field at Minsky's level or using his approaches.

    I suspect part of Minsky's problem is frustration that his ideas about AI aren't bearing fruit, so he's going to take it out on other people's different approaches. It's not much different from the Perceptrons paper he co-wrote in the late 60's that nearly killed Neural Network research for most of the next decade. Never mind that there was plenty of useful neural network research to be done that avoided the failings of the perceptron model.

    In my opinion, if we had the holistic understanding of intelligence that would let us use Minsky's type of approach to AI, there wouldn't be anything left to do but implementation! One cannot just a priori assume all principles of intelligence by self-examination, and that's where he fails. There are interesting things to learn in that approach, but a large number of them have already been learned, so people are turning to other means (bottom-up approaches focusing on self-organization are doing well and leading to new discoveries) to get a broader understanding of what is involved in intelligence.

    Just because Minsky has sour grapes doesn't mean that the robot people aren't doing useful research.

  • AI winter II ? (Score:5, Insightful)

    by alispguru ( 72689 ) <bob,bane&me,com> on Tuesday May 13, 2003 @12:37PM (#5945943) Journal
    What Minsky is actually flaming about here is the damage done to the field by the original "AI winter" and the real possibility of a new AI winter starting in a few years.

    "AI winter" [216.239.51.104] is the name given to the collapse of strong AI as a business model in the mid 80's - expert systems and symbolic AI in general didn't deliver on their promises, and so the money went away. As a guy who got his doctorate in AI in 1985, I can tell you all about it. ;-)

    One of the major causes of AI winter was researcher hubris - lots of people hacked up systems that appeared to solve 80 percent of certain complex problems and then said "all that stands between us and a complete solution is money and time". For many of those systems, solving the last 20 percent would have taken 2000 percent of the time, if it could have been done at all. The tragedy of AI winter, though, is that basically all of symbolic AI was abandoned, though some of it is creeping back out into the light with obfuscated syntax (see my .sig below).

    What Minsky sees here is a lot of people heading down the same path, but with neural nets and small robots instead of expert systems. The new systems are doing some interesting things relative to the old symbolic AI systems (though they do have the advantage of 20 years of Moore's law to help them). But, will they scale up? Right now, nobody knows. If they don't, the last thing the field needs is another cycle of overpromise/underdeliver/abandon.

    Maybe AI is just plain hard, and cracking it will take longer than one or two computer industry business cycles.
  • "Al Going Nowhere?"

    well duh!

    what idiot made the lowercase L and uppercase I look the same?

    but, since we're on the subject, did you know Al Gore invented the field of AI?

  • by SlayerDave ( 555409 ) <elddm1@g m a i l .com> on Tuesday May 13, 2003 @12:56PM (#5946137) Homepage
    Let's not forget that Minsky (along with Papert) declared that single-layer perceptrons were basically useless and there was no reason to suspect that multilayer varieties would be any better. That little, wrong statement basically shut down all neural networks research for ~20 years. Research didn't recover until the mid 1980's with the discovery of the backpropagation algorithm, along with other neural architectures such as the Hopfield network, ART, and the self-organizing map. Advances in neural modeling have helped theoretical neuroscientists begin to understand the nature of neural processing in the brain, no thanks to Minsky.

    Minsky belongs to the old school of AI thinking. These guys believe that it is possible to make statements about intelligence itself, without considering the interactions of the organism/agent with its environment or the underlying architecture of the brain/CPU. I think that the total failure of this style of thinking to produce anything interesting in 50 years proves that this approach is sterile. Minsky laments the fact that graduate students build robots, but this activity exposes students to the challenges of constructing a device that must actually interact with the environment. It is ridiculous to assume that you could design a system capable of intelligent behavior without ever confronting the problems of sensors and actuators. Almost every part of the brain is devoted to processing raw sensory input or generating motor output. One cannot simply design an intelligent system without worrying about sensory input and behavioral output. The CYC project of Lenat has the laudable goal of teaching a machine "common sense" by hard-coding a vast database of simple statements like "Trees cannot walk". This is a totally wrongheaded approach to learning and reasoning, and is typical of old-school, hard AI.

    We will only make progress in engineering intelligent, adaptive systems by studying actual examples of intelligent, adaptive systems, namely animals. Neuroscientists and psychologists are beginning to embrace the tools of mathematical modeling and simulation to help explain nervous system structure and function. Computer scientists would do well to similarly embrace the work of experimental neuroscience.

    Minsky is a dinosaur.

  • by bigpat ( 158134 ) on Tuesday May 13, 2003 @12:59PM (#5946175)
    Seems like AI has progressed about as far as it can go inside the box; only machines that can proactively interact with an environment as people do will learn to think like people.

    Imagine what kind of thing you would be without vision, touch, smell, hearing or the ability to move and change your environment. Without these forms of interaction where would human intelligence be?

    Seems that a Buddhist philosophical approach is most helpful here, i.e., we are our parts, not more and not less. We are what we are. If you wish to create something that is like a human, you should take an inventory of our parts, figure out how they fit together, and try to find analogous electronics, software and hardware.

    Which is precisely what a lot of the robot folks have started doing. Except that most have started a bit smaller and have modeled insects instead, finding that they can model seemingly complex insect behavior with simple algorithms and machines.

    Although, perhaps the next best step isn't building real robots at all, which can be expensive, error-prone and time-consuming, but building virtual robots that can be placed in virtual environments of our invention, somewhat like a "Matrix" virtual reality with intelligent agents that can learn. This approach is more computer-intensive, since the environment as well as the agent would require large amounts of computing resources; also, the agent would have to perceive the "environment"

    Seems that many more forms of human nature could be investigated in this way.

  • by dprice ( 74762 ) <daprice.pobox@com> on Tuesday May 13, 2003 @01:01PM (#5946194) Homepage

    From 1983 through 1991, George Heilmeier was the Chief Technical Officer at Texas Instruments. He pushed TI into massive investments in AI R&D. Some of the best technical people I knew at TI thought the AI stuff was a waste of time, but it was being pushed by Heilmeier and the executives. Marvin Minsky was one of the experts brought in as an AI consultant, and he appeared in various TI propaganda. At the time, the Japanese were pushing "fifth generation computing", which included AI, so there was a push to compete with the Japanese. TI developed AI hardware and software and tried to force-fit it into various applications. They claimed various successes applying AI to industry problems, but eventually it all collapsed into a big waste of time and money. Heilmeier left at the end of the collapse.

    Today you can find Heilmeier all over the place on various corporate boards and winning various awards for technical excellence. It is interesting that in most of the bios that you can find on the web about Heilmeier, you don't find references to how he led TI down the AI path to a dead end.

  • by porky_pig_jr ( 129948 ) on Tuesday May 13, 2003 @01:09PM (#5946292)
    Marvin Minsky considers intelligence to be something like a set of absolute rules, which can be separated from both the subject possessing this feature ('intelligence') and the real world. The reality, however, is that intelligence is simply a tool for survival in a hostile environment, just another stage in the evolutionary process. It has evolved in specific circumstances, and will continue evolving. It *may* appear as an absolute set of rules (different religious beliefs are a good example), but this is appearance only. The bottom line is that you have to build the creature (robot or whatever) which interacts with other like creatures and the environment and tries to adapt to both. Intelligence may appear as a by-product of this adaptation process. So -- building a robot with the intelligence of a worm is more productive than trying to simulate Einstein in some computer program. AI as defined by Minsky and Co. is an emperor's new clothes. It is entirely *his* fault that AI didn't go anywhere. The whole concept of separating intelligence from the background in which it evolved is wrong. Only someone who believes that 'mind', 'intelligence', etc. is something given to us by God ('mind' vs 'body') can continue insisting on this approach, but then it is no longer science, if you wish, but religion or metaphysics.
  • by jefeweiss ( 628594 ) on Tuesday May 13, 2003 @01:27PM (#5946487)
    It seems to me that the focus on robotics, and insisting computers become good at human thinking tasks, is a limited view of what artificial intelligence could be.

    If you were put inside a little white box where you had to flip millions of switches on and off according to certain simple rules, you would look like an idiot next to a computer. A computer can't walk around and recognize things, and doesn't know what an apple is, so what? In my opinion, machine intelligence should be focused on making computers able to make themselves better at what they do best. I'm not sure what a super intelligent computer system would be used for, and I don't think that I would even be able to imagine what would be possible. I would be interested to know what other people think about this idea. Most of the things that I can think of tie back into the "real" world somehow. What would a self-organizing non 3-dimension oriented intelligence be able to do?

    Saying that AI is impossible because computers can't come into "our world" of three dimensions, or understand our literature, is a kind of intelligence chauvinism.

  • by geekee ( 591277 ) on Tuesday May 13, 2003 @01:29PM (#5946519)
    Stallman and co. hadn't spent so much of their time rewriting Unix (GNU), instead of working on AI, like they were supposed to be doing. :-)
  • by jungd ( 223367 ) * on Tuesday May 13, 2003 @02:28PM (#5947263)

    As an AI researcher and someone who's read Minsky's books and listened to him talk, I can say that he doesn't know what he's talking about. He was big in his time, but things have moved on and he hasn't. He is an old, pessimistic, armchair AI 'researcher' who still thinks AI is easy. He doesn't understand why AI needs to be embodied and situated.

    Having said that, I do agree that AI is almost going nowhere (anyone can see that). But I don't believe Minsky understands why.

    Those 'stupid little robots' are the best thing to happen to AI -- unfortunately most AI 'researchers' don't really understand what they're doing. Consequently, 97% of the time and effort purportedly being spent on AI research isn't.

    With a few exceptions, the main reason for the 'advances' we're seeing in AI/robotics now is that algorithms are riding the wave of advances in computing power.

    My guess is that you'll see most of the advances in AI coming as more and more 'real scientists' from other disciplines - such as ethology, biology and neurology - get involved in it.

    Keep in mind that this is my opinion - shared by an increasing number of people in the field, but still a small minority.

  • by Ratbert42 ( 452340 ) on Tuesday May 13, 2003 @03:54PM (#5948245)
    Graduate students are wasting 3 years of their lives soldering and repairing robots, instead of making them smart.

    This is a classic battle between Minsky and Brooks. Heck, we had the same battle in our labs (not MIT). I believe that the Brooks response is along the lines of "sure you'll take an extra year to graduate with me, but you'll have one hell of a demo tape." I agree with Brooks. I still show people videos of one of my robots years later. I've never shown anyone any of my simulated robot work afterwards.
