Science

AI Going Nowhere? 742

jhigh writes "Marvin Minsky, co-founder of the MIT Artificial Intelligence Laboratory, is displeased with the progress in the development of autonomous intelligent machines. I found this quote more than a little amusing: '"The worst fad has been these stupid little robots," said Minsky. "Graduate students are wasting 3 years of their lives soldering and repairing robots, instead of making them smart. It's really shocking."'"
This discussion has been archived. No new comments can be posted.

  • What about my AIBO? (Score:3, Interesting)

    by khalua ( 468456 ) on Tuesday May 13, 2003 @10:56AM (#5944786) Homepage
    It can pick me out in a crowd, and it can show a number of emotions, such as surprise, anger, and boredom.... yawn.
  • by jdoeii ( 468503 ) on Tuesday May 13, 2003 @11:00AM (#5944833)
    It's not AI itself which is going nowhere. It's the traditional approaches to AI, such as Minsky's symbolic logic, which are not going anywhere. Search Google for Henry Markram, Maass, Tsodyks. Their research seems very promising.
  • google cache (Score:2, Interesting)

    by akaina ( 472254 ) on Tuesday May 13, 2003 @11:03AM (#5944855) Journal
    I don't know if anyone has a Google cache of aination.com, but I had a similar comment back in 2000 in the 'Works' section regarding the works of the MIT press, which have recently proved about as useful in developing true AI as these robots.

    For REALLY good insight check out Nick Bostrom's articles on Super Intelligence here: http://www.nickbostrom.com/
  • Well... (Score:3, Interesting)

    by Dark Lord Seth ( 584963 ) on Tuesday May 13, 2003 @11:04AM (#5944867) Journal
    "Cyc knows that trees are usually outdoors, that once people die they stop buying things, and that glasses of liquid should be carried right-side up," reads a blurb on the Cyc website. Cyc can use its vast knowledge base to match natural language queries. A request for "pictures of strong, adventurous people" can connect with a relevant image such as a man climbing a cliff.

    I'd consider that pretty much intelligent, compared to some people I know. Then again, some people I know can hardly be described as sentient, let alone intelligent.
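
    A toy illustration of the kind of matching the blurb describes: hand-coded assertions queried by tag coverage. Everything here (the facts, the tags, the matching rule) is invented for illustration; Cyc's real knowledge base and inference engine are vastly larger and richer.

        # Hypothetical miniature "knowledge base": images annotated with
        # common-sense assertions, queried by tag coverage.
        FACTS = {
            "man climbing a cliff":  {"person", "strong", "adventurous", "outdoors"},
            "tree in a park":        {"tree", "outdoors"},
            "woman reading indoors": {"person", "indoors", "calm"},
        }

        def match(query_tags):
            # Return every image whose asserted tags cover the query.
            return [img for img, tags in FACTS.items() if query_tags <= tags]

        print(match({"person", "strong", "adventurous"}))
        # -> ['man climbing a cliff']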

  • by jdoeii ( 468503 ) on Tuesday May 13, 2003 @11:04AM (#5944870)
    > You'll never have real, true intelligence

    Define "real, true intelligence" :-)

    > You can try to simulate that, but so far
    > simulation consists of what amounts to a
    > gazillion 'if' tests

    That's what the traditional AI school is doing. Yes, you are correct: it won't go anywhere. On the other hand, spiking neural networks are very promising. Search Google for "liquid state machine". These researchers are making progress nowadays, not Minsky.
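
    For the curious, a minimal sketch of the building block of such networks, a leaky integrate-and-fire neuron. This is a toy with made-up constants, not Maass's actual liquid state machine, which uses large recurrent pools of units like this one.

        # Leaky integrate-and-fire neuron: the membrane potential leaks toward
        # rest, integrates input current, and emits a spike on crossing threshold.
        def lif_run(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                    v_thresh=1.0, v_reset=0.0):
            v = v_rest
            spikes = []
            for step, i_in in enumerate(input_current):
                v += dt * ((v_rest - v) + i_in) / tau
                if v >= v_thresh:            # threshold crossing: spike...
                    spikes.append(step * dt)
                    v = v_reset              # ...then reset
            return spikes

        # Constant suprathreshold drive produces a regular spike train.
        print(lif_run([1.5] * 100))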
  • by Surak ( 18578 ) * <surak&mailblocks,com> on Tuesday May 13, 2003 @11:07AM (#5944894) Homepage Journal
    The problem with that is that no one really *knows* how the brain works beyond a very, very basic and limited understanding. No one has ever been able to satisfactorily create/reproduce one. There's more going on than just synapses in there, that much most scientists can agree on. What they don't agree on is *what* else is going on in there.

  • by chubso ( 524639 ) on Tuesday May 13, 2003 @11:10AM (#5944927)
    No kidding! Industry came to this conclusion 30 years ago. You can't make things "smart" if you don't know what smart is. Come up with a real, useful definition of "intelligence". Apply simple engineering concepts to the problems instead of rushing to the "Statistical Death Spiral", where we generate reports with bad statistics to get paid for some research.
  • by Lemmy Caution ( 8378 ) on Tuesday May 13, 2003 @11:11AM (#5944942) Homepage
    I see Minsky's lament as a sideways admission of the correctness of the west-coast, connectionist paradigm. It's a shame that he is still sabotaging useful lines of research at MIT: investigating robotics is built around the insight that our own "ontological engines" are themselves derived from our sensorimotor systems.
  • by Roelof ( 5340 ) on Tuesday May 13, 2003 @11:12AM (#5944950) Homepage
    Sure, but the whole point of AI was that we were supposed to be able to ask it where it thought it was going... and that not only would it know, but it would give a well-thought-out answer too!

    Roelof
  • by sohp ( 22984 ) <.moc.oi. .ta. .notwens.> on Tuesday May 13, 2003 @11:19AM (#5945027) Homepage
    Indeed, GOFAI and the computational model of human intelligence is where Minsky and his ilk have been stuck for decades. As many of the other replies in this topic show, the traditional idea that the brain is a sort of fleshy collection of logic gates is still the most common belief. There are many authors that have written and demonstrated that the brain probably doesn't function as a mass of context-free predicate logic rules -- including my favorite, Hubert Dreyfus. [berkeley.edu]

    The progress of AI is uncertain, but it is certain that there's no future for symbolic logic AI.
  • by PhilHibbs ( 4537 ) <snarks@gmail.com> on Tuesday May 13, 2003 @11:22AM (#5945060) Journal
    What they need, then, is for an engineering student to do their master's dissertation on creating a generic physical framework for AI systems, or a computing student to do theirs on a generic simulation environment for virtual AI 'bots. Then this can be re-used by AI students in subsequent years. Alternatively, each year they could team up engineering students working on the physical robots with computing students working on the AI systems; that way both departments are working on their core speciality.
  • Re:Hrmm (Score:3, Interesting)

    by kevin42 ( 161303 ) * on Tuesday May 13, 2003 @11:22AM (#5945061)
    No!

    It drives me crazy that people are so concerned about possible technologies, that they want to "slow down and think about the consequences of xxx".

    This is really just unfounded fear. The time when we still don't know whether something is possible is not the time to worry about whatever problems we can conceive it might bring. Knowledge is more important than worrying about some issues that may or may not arise if we are able to do something. It is good to ask "If we cause this atom to split, will it kill us?", but I do not think there is any value in saying "Maybe we shouldn't find out what happens if we split this atom, because if it causes an explosion, someone might use that knowledge to build a bomb..."

    One of my favorite quotes is from Isaac Asimov:


    Suppose that we are wise enough to learn and know and yet not wise enough to control our learning and knowledge, so that we use it to destroy ourselves? Even if that is so, knowledge remains better than ignorance. It is better to know even if the knowledge endures only for the moment that comes before destruction than to gain eternal life at the price of a dull and swinish lack of comprehension of a universe that swirls unseen before us in all its wonder. That was the choice of Achilles, and it is mine, too.
    -- Isaac Asimov


    I'm sure a lot of people will disagree, but to me, knowledge is most important.
  • Re:AI...heh (Score:1, Interesting)

    by Anonymous Coward on Tuesday May 13, 2003 @11:24AM (#5945080)
    As long as it appears to have the ability to think "consciously", then that is good enough. Whether we need to try and prove that something really is conscious is another matter, and we can leave that up to the more dedicated AI researchers and philosophers.

    If you think about it, how can you prove anyone other than yourself is conscious?

    Taking that further, how can you know that the reality you perceive is even real?

    Ahhhh, the sound of neurons frying..
  • All of the spiking networks I've seen were nothing more than state machines that depend on numeric comparisons.

    But I'm not an expert, and that's just my personal opinion.
  • Old guard moving out (Score:5, Interesting)

    by CmdrSanity ( 531251 ) on Tuesday May 13, 2003 @11:25AM (#5945096) Homepage
    I took Minsky's class last year, and let me tell you, the article couldn't print 75% of the irate stuff he has to say about AI, MIT, and life in general. We once spent an hour-long class session listening to Minsky rant about modern science fiction and random things he didn't like about his Powerbook. In fact, most of his classes were extended rants about something or other (you zealots will be happy to know that he, too, hates Microsoft).

    He comes across as affable but bitter. I found it strange that though he continually complains about the leadership of the AI lab, he and his protege Winston were in control of it for some 30 years without making any groundbreaking progress. In fact, Minsky's latest work, "The Emotion Machine", is simply a retread of his decades-old "Society of Mind." I suspect that now that Brooks and the new guard are moving in, the old guard is looking for someone to blame its lack of results on.
  • by Roger_Wilco ( 138600 ) on Tuesday May 13, 2003 @11:28AM (#5945125) Homepage

    I personally built and programmed one of these "stupid little robots"; it's a wheelchair programmed to navigate in an office environment, using vision to determine where in the office it is. Nobody asserts that it can "reason". It navigates using a collection of local effects, in much the same manner that simple creatures operate. Watch the film "Baraka" for some rather amusing examples. At one point the film shows a column of caterpillars, each following the scent trail of the next --- unfortunately someone flipped the first one around, so it follows the last, and the whole column just moves around and around until they die of starvation.

    I think you would be surprised how easily remarkably complex behaviours can be achieved by a collection of very simple responses. Try fiddling around with Rossum's Playhouse [sourceforge.net], and read Brooks' book Cambrian Intelligence.
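
    The point about simple responses is easy to demonstrate in a few lines. Below is a hypothetical Braitenberg-style vehicle: two light sensors cross-wired to the steering, no map, no memory, no reasoning, yet it reliably homes in on a light source. All names and constants are invented for illustration.

        import math

        def sense(x, y, heading, offset, lx, ly):
            # Light intensity at a sensor mounted at angle +/- offset
            # from the vehicle's heading (inverse-square falloff).
            sx = x + math.cos(heading + offset)
            sy = y + math.sin(heading + offset)
            return 1.0 / ((sx - lx) ** 2 + (sy - ly) ** 2 + 1.0)

        x = y = heading = 0.0
        light = (10.0, 5.0)
        for step in range(400):
            left = sense(x, y, heading, 0.5, *light)
            right = sense(x, y, heading, -0.5, *light)
            # The entire "brain": turn toward the brighter side.
            heading += 2.0 * (left - right)
            x += 0.1 * math.cos(heading)
            y += 0.1 * math.sin(heading)
        print("ended near the light:", round(x, 1), round(y, 1))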

  • by gacp ( 601462 ) on Tuesday May 13, 2003 @11:31AM (#5945151)

    The problem with AI lies in poor biology. As long as it is based on pre-cybernetic (i.e. traditional, neodarwinian) biology, AI will never go anywhere. The only known intelligent systems are biological systems. To create AI, you need to imitate biology; you need to reverse-engineer what it is exactly that makes biosystems special. But traditional biology has totally misled computer science. Pre-cybernetic biology, the biology you find in most books and the one taught in almost any classroom, cannot even define life. This pseudo-biology is the `biology' of the non-living, and as such, of the non-intelligent.

    To create AI, you need to understand natural intelligence (NI) and for this you need to understand life. What is life? Cybernetic biology defines life as molecular autopoiesis. Which is interesting, since this definition of life is based on computation. Autopoiesis is the key here. The self-re-computation of a system is the key to life, and the key to intelligence, because you need a self to be intelligent. With an artificial self, we could have AI, and probably self-awareness. But good biology is the key.

    Unfortunately, it's not going to happen anytime soon. Biology is totally stagnant, and the Neodarwinian Cabal precludes any progress and silences any dissent (sort of an M$ of the science market). `Official' biological sciences just won't deal with life. And that's not going to change for a while, I'm afraid, no matter how hard some of us try.

  • by NorthDude ( 560769 ) on Tuesday May 13, 2003 @11:32AM (#5945167)
    And what about you and me?
    We act based on external stimuli and based on what we have learned as far as I know.

    Unfortunately, we will never fully understand how we are "made" and how we "work".
    And without being able to fully introspect ourselves, we will never be able to build a computer which works exactly like a human.
    How could you possibly create something to be a replica of something you don't understand?
    Cognitive science has made immense progress, but it is still all models and theory.
    And as human, "logic" animals, we will always be modeling what we are learning to fit inside our own "understanding". We are locked in our own box...

    And if it is all "maths" or "logic", a computer can do it too. I am pretty sure that, not so far in the future, we will see robots who act very much like a human being.
    Will they be considered real human beings because of it? Will they really be "intelligent" machines?
    I don't know; that's not a technological debate, it is a philosophical one. How do you define real AI anyway?
    Does it have to be "alive"? If I ever create a unicellular bacterium, and it is alive, is that considered "AI"?
    In this case, it would be well alive and totally artificial, but not very smart by any measure!
    On the other hand, what if I create a robot which looks like a human, has flesh, eats food, cries, smiles, makes mistakes, learns, has fun, etc. etc.?
    Will this be true "AI"? It won't be alive, after all; it will only be made of steel, a CPU, plastics, millions of "if statements".
    But to anyone looking at both, I'm sure this one would look a lot more "intelligent" than the bacterium.
    If the robot body in itself is realistic enough, maybe you could even fall in love with it, could you not?
    And what if it falls in love with you also? What if everything is going fine for a couple of years
    before you realize it is in fact a "robot"? Would you turn around because it is not real "intelligence" or because it is not a biological body?
    In that case, could we conclude that we are ourselves programmed to "accept" that something is intelligent based on criteria that have nothing to do with intelligence per se?

    What is intelligence anyway? How do we measure it?
    I am not flaming you at all by the way, I just love those debates ;-)
  • Minsky + Brooks (Score:4, Interesting)

    by Bob Hearn ( 61879 ) on Tuesday May 13, 2003 @11:35AM (#5945199) Homepage
    Here's some perspective from an MIT AI lab grad student who's been inspired by both Minsky and Brooks. (Minsky is on my Ph.D. committee.)

    "AI has been brain-dead since the 1970s."

    I agree, unfortunately. At least, what was traditionally meant by "AI" has been brain-dead. There is very little focus in the field today on human-like intelligence per se. There is a lot of great work being done that has immediate, practical uses. But whether much of it is helping us toward the original long-term goal is more questionable. Most researchers long ago simply decided that "real AI" was too hard, and started doing work they could get funded. I would say that "AI" has been effectively redefined over the past 20 years.

    "The worst fad has been these stupid little robots."

    Minsky's attitude towards the direction the MIT AI lab has taken (Rod Brooks's robots) is well-known. And I agree that spending years soldering robots together can certainly take time away from AI research. But personally, I find a lot of great ideas in Rod's work, and I've used these ideas as well as Marvin's in my own work. Most importantly, unlike most of the rest of the AI world, Rod *is*, in the long run, shooting toward human-level AI.

    Curiously, just last month I gave a talk at MIT, titled "Putting Minsky and Brooks Together". (Rod attended, but unfortunately Marvin couldn't make it.) The talk slides are at

    http://www.swiss.ai.mit.edu/~bob/dangerous.pdf [mit.edu].

    In particular, I shoot down some common misperceptions about Minsky, including that he is focused solely on logical, symbolic AI. Anyone who has read "The Society of Mind" will realize what great strides Minsky-style AI has made since the early days. I also show what seem like some surprising connections to Brooks's work.

    - Bob Hearn
  • by Anonymous Coward on Tuesday May 13, 2003 @11:36AM (#5945215)
    Think of it. Only a stupid robot is a good robot.
  • by CausticWindow ( 632215 ) on Tuesday May 13, 2003 @11:36AM (#5945218)

    I think this knowledge will remain out of our reach forever.

    A solid theory of the goings-on of our brains would at the same time be a solid theory of how god works, and I just can't see how one would understand something that is bigger than all of us.

    To those who want to explain everything with mathematics, I've always said "make a differential equation that models my soul, then tell me what my favourite colour is". That shuts them up all right.

    We have already shown that there are fundamental uncertainties in nature (Heisenberg); can you be sure that these uncertainties are not divine intervention, simply what really gives us free will? Remember that it's almost 150 years since Darwin wrote On the Origin of Species, and scientists have yet to produce a solid proof that this is indeed how things work. I don't see how we would ever be able to create an entirely autonomous entity (AI) with this in mind.

  • Re:AI...heh (Score:2, Interesting)

    by VanillaCoke420 ( 662576 ) <vanillacoke420.hotmail@com> on Tuesday May 13, 2003 @11:40AM (#5945248)
    I wonder when they'll finally realize that you can't make a thinking machine. It doesn't have a soul, a consciousness.

    Define soul. What is that?

    It just follows some programming. At the most basic level, it's just a binary program. It follows whatever instructions it was given.

    At the most basic level, our brains are single neurons, which are molecules, which are atoms... etc. down to quarks or whatever is at the bottom. All we are, everything, is simple matter organized in an extremely complex way. Surely intelligence and consciousness can't be the result?
    There's nothing special about us, other than we are very complex structures of matter.

    I honestly don't think we understand what makes a human conscious, or what makes someone be that person, well enough to try to replicate it in software. You can make the logic more sophisticated, but I doubt we'll ever make them truly "think." And even if we did, how could we prove it? If you think about it, how can you prove anyone other than yourself is conscious?

    Here I have to agree with you somewhat. It IS a big problem to figure out when a structure of matter is intelligent or conscious.

  • by jkauzlar ( 596349 ) on Tuesday May 13, 2003 @11:41AM (#5945251) Homepage
    This is a valid assumption, but it's not in the spirit of science. During the Loebner prize discussion, one /. poster noted how the AI field is comparable to the airplane industry a little over a hundred years ago. The poster said we should stop trying to build a bird and build something better.

    To paraphrase, we need to stop trying to build a human mind and just build something which does what we want it to do.

    The problem is deciding what we want the computer to do. The Turing test is unreasonable, because we can't make a computer describe its experiences and thoughts in the same way a human can. I mean, if YOU were trapped in a box sitting on someone's desktop your whole life, would YOU act anything like a human? Probably not. I think many people expect to see a machine they turn on and all of a sudden it acts 'alive,' sort of like Frankenstein's monster. I think the AI machine will be more like a baby, where it just spits out nonsense for a while, until you 'grow' it into something more interesting.

    And I don't think the AI machine will really resemble a human mind, just as an airplane doesn't look much like a bird. We'll discover algorithms that will approximate the functionality of a bundle of millions of neurons, but, just as a plane doesn't maneuver as nicely as a bird, it won't be nearly as flexible as a human mind.

  • by Jerk City Troll ( 661616 ) on Tuesday May 13, 2003 @11:44AM (#5945279) Homepage
    People who perform illusions and escape tricks have been doing things mostly the same way for decades. Magic tricks may change slightly, but all the basic principles and tricks are the same. There's no real evolution, just adaptation to please the crowd.

    Now, with that in mind, let's look at artificial intelligence. AI has always been about trying to convince an audience that a machine is thinking. This is demonstrated by the very existence of the Turing test and many products (such as the Aibo, Furby, etc.) that try to mimic emotions. If the audience is entertained, amused, or convinced, the AI is considered good. Bad AI is when the audience can see right through it.

    Artificial intelligence is magic. It's a trick. It's an illusion.

    It is no surprise, then, that AI hasn't really advanced. The trades of showmen have been practically unchanged for hundreds of years. Razzle-dazzling an audience involves technological advances, but the razzle-dazzle itself remains unchanged. Even the cases where "artificial intelligence" is used to aid in medical diagnosis ("expert systems") or manufacturing really only follow man-made logical structures. The computers aren't thinking; they're only doing what they're told to do, even if indirectly. The end result is impressed people who think the machine is smart.

    Of course, you don't have to take my word for it. If you want to see how badly AI is going nowhere, I highly recommend reading The Cult of Information by Theodore Roszak [lutterworth.com]. While his focus is not on the fallacy of AI, he covers it in context with society's much broader disillusionment with computers.

    Now, what does AI need in order to progress? Probably AI creating other AI. Something with a deeper embodiment of evolution. As long as it's man-made, it will never be intelligent, just following a routine. Of course, I am going to stop right here... I am not qualified to offer a solution to these obstacles.
  • by Stargoat ( 658863 ) <stargoat@gmail.com> on Tuesday May 13, 2003 @11:54AM (#5945401) Journal
    Let's talk about Intelligence. What makes for intelligence?

    A good argument can be made that a polecat (wild ferret) is more intelligent than many humans. For example, the polecat can survive outdoors with no assistance. The polecat can eat, sleep, have babies, and be more or less comfortable.

    Where does human intelligence come in then? Human intelligence is learned. Of course a polecat at 4 months is more capable of surviving than a human at 4 months. Does this make the polecat more intelligent? But let's try and remember that the polecat is done developing, while the human has about 20 more years until full maturity.

    So the human learns, then. Plainly, the human learns more over the course of 4 years than the polecat does. So is the human more intelligent? I think we can unequivocally say yes.

    But what is it that makes human intelligence, and how is it different from a polecat's? The answer is learning. But how does learning work?

    Learning is a specific thing. People learn by rote. (Don't let someone tell you otherwise.) It is mimicry that teaches morals. Logic teaches ethics, but logic is learned like morals. This means that, basically, we learn everything.

    The point is, if you think there is any difference between you and a polecat, I would like to point out that there is less difference between you and Alicebot.

    If you want proof, look at how musicians or epic lyrists work. They learn specific phrases and use them over and over. Listen to your own speech or read your own writing. You'll find that you use plug-in words and phrases. They'll be similar to your friends' and parents', btw.

  • Re:I wonder what... (Score:3, Interesting)

    by studboy ( 64792 ) on Tuesday May 13, 2003 @12:04PM (#5945532) Homepage
    "Graduate students are wasting 3 years of their lives soldering and repairing robots, instead of making them smart. It's really shocking."'

    Rodney Brooks (who's The Man) said something like "a [working] robot is worth a thousand papers." Instead of a top-down view, subsumption architecture robots have a tight connection between sensing and action, but often no memory (a toy sketch of the idea follows below). One such robot was able to search out, find and grab empty coke cans, then take them to the trash!

    (semiquote from Steven Levy's "Artificial Life"; highly recommended introduction.)
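
    A minimal sketch of the subsumption idea: a fixed priority stack of reactive behaviours, each coupling sensing directly to action, with higher layers subsuming lower ones. The sensor flags and actions below are invented for illustration, loosely echoing the can-collecting robot.

        BEHAVIOURS = [
            # (name, trigger, action): earlier entries subsume later ones.
            ("avoid",  lambda s: s["bumper"],         "back up and turn"),
            ("dump",   lambda s: s["can_in_gripper"], "carry can to trash"),
            ("pursue", lambda s: s["can_in_view"],    "drive toward can"),
            ("wander", lambda s: True,                "wander randomly"),
        ]

        def act(sensors):
            # The first (highest-priority) triggered behaviour wins;
            # there is no world model and no planning step.
            for name, trigger, action in BEHAVIOURS:
                if trigger(sensors):
                    return name + ": " + action

        print(act({"bumper": False, "can_in_gripper": False, "can_in_view": True}))
        # -> pursue: drive toward can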
  • by trg83 ( 555416 ) on Tuesday May 13, 2003 @12:05PM (#5945540)
    This may get a little too philosophical, but I'm going to give it a try. What is the key difference between programming and parenting? You explicitly tell your child what they are and are not allowed to do, and sometimes they malfunction/misbehave and do it anyway. You tell them what emotions are appropriate at certain times. At grandma's funeral it is not appropriate to giggle and laugh. It is also not appropriate to look bored, as we are showing respect for the dead. After going through all the disallowed emotions, that leaves a solemn look and maybe some tears.

    What about pattern recognition? How long do parents spend holding up pictures of various animals or various shapes for their children to identify?

    When it gets right down to it, every one of us has been significantly programmed by our parents, teachers, and government. I am not arguing against the system, just saying that's how it is. I don't believe AI as anticipated will ever truly exist because the degree of creativity and imagination desired exists only in humans either because of an all-knowing, all-powerful creator or millions of years of mutations.
  • by Ella the Cat ( 133841 ) on Tuesday May 13, 2003 @12:09PM (#5945602) Homepage Journal

    "Human-Level AI's Killer Application: Interactive Computer Games," John E. Laird and Michael van Lent, AI Magazine (American Association for Artificial Intelligence), Summer 2001, pp. 15-25.

    My summary of the above - the AI in games might not be too hot (some would dispute with the academics about that but let it go), but game environments themselves are complex enough to pose a challenge for state-of-the-art AI researchers.

  • Problem complexity (Score:2, Interesting)

    by xsense ( 231699 ) on Tuesday May 13, 2003 @12:10PM (#5945623)
    AI seems to be going nowhere because the complexity of the problem hasn't been thoroughly discussed yet. Computers are designed to be strictly deterministic (otherwise it would be impossible to use them; look at wind*ws).

    When we try to emulate a system with another system that is different in nature, a lot of capacity is wasted.

    That said, genetic programming is one of the fields where we actually see truly intelligent solutions to problems generated entirely by computers (a toy cousin of it is sketched below). The problem is that the algorithms need computational power beyond our wildest dreams to even be comparable to single-cell organisms in ingenuity.

    After all, nature has had 50 gazillion years to evolve.
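
    For a feel of the evolutionary loop involved, here is a minimal genetic algorithm, the simpler fixed-representation cousin of genetic programming, evolving a bit string toward all ones (the classic OneMax toy problem). Everything here is a made-up illustration, not anyone's research code.

        import random

        def evolve(bits=32, pop_size=50, generations=100, mutation=0.02):
            pop = [[random.randint(0, 1) for _ in range(bits)]
                   for _ in range(pop_size)]
            for gen in range(generations):
                pop.sort(key=sum, reverse=True)      # fitness = number of ones
                if sum(pop[0]) == bits:
                    return gen, pop[0]               # perfect individual found
                parents = pop[:pop_size // 2]        # truncation selection
                children = []
                while len(children) < pop_size:
                    a, b = random.sample(parents, 2)
                    cut = random.randrange(1, bits)  # one-point crossover
                    child = a[:cut] + b[cut:]
                    # occasional bit-flip mutation
                    child = [g ^ (random.random() < mutation) for g in child]
                    children.append(child)
                pop = children
            return generations, max(pop, key=sum)

        gen, best = evolve()
        print("best after", gen, "generations:", sum(best), "/ 32 ones")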
  • by 5n3ak3rp1mp ( 305814 ) on Tuesday May 13, 2003 @12:18PM (#5945713) Homepage
    (Bits On Our Mind, an exhibition of some undergrad and graduate computer science work) ...and I headed STRAIGHT for the nematode booth. You see, I had heard that some clever Cornellian had created a simulation of the entire neural network of a nematode. The way I saw it, there was nothing else there that could possibly be more interesting than that.

    So I found myself standing in front of a computer screen. It was a worm swimming through water! In 3D! In real time! After I pushed my jaw shut, I began to ask the genius student some questions...

    "Is that real-time?" "Well, actually, no, that is a 10 second looping clip that took a week to calculate."

    "Well, I see a neural map there. Is that complete?" "Well, actually, no, that is a simplified version of the real nematode nervous system, on the order of about 1 simulated neuron to 10 actual neurons."

    "So you simulate neurons! That's awesome. Let's see the code." (He proceeds to flip through 4-5 pages of very sophisticated-looking mathematical equations to describe the behavior of ONE neuron.)

    What a let-down! No wonder Minsky is pissed, real AI is HARD! :P
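
    For anyone wondering why one neuron needs pages of equations: even the deliberately simplified FitzHugh-Nagumo model (two coupled ODEs, itself a reduction of the four-equation Hodgkin-Huxley model) has to be numerically integrated just to produce a spike train. A sketch with standard textbook parameters:

        def fitzhugh_nagumo(i_ext=0.5, dt=0.01, steps=10000,
                            a=0.7, b=0.8, tau=12.5):
            v, w = -1.0, 1.0          # membrane potential, recovery variable
            trace = []
            for _ in range(steps):
                dv = v - v ** 3 / 3 - w + i_ext
                dw = (v + a - b * w) / tau
                v += dv * dt          # forward-Euler integration
                w += dw * dt
                trace.append(v)
            return trace

        trace = fitzhugh_nagumo()
        # Count upward crossings of v = 1.0 as spikes.
        print("spikes:", sum(1 for p, q in zip(trace, trace[1:]) if p < 1.0 <= q))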
  • by YllabianBitPipe ( 647462 ) on Tuesday May 13, 2003 @12:21PM (#5945748)
    We will soon have hardware that has the number of connections or processing power of a human brain. The problem is that nobody's come up with the software to run on it. In humans this is what makes the brain more than a big organ ... the "soul" if you are religiously inclined. Maybe a human soul can be reduced to nothing more than a program with an enormous propensity to learn and adapt over years of training / habituation ... say from the years 0 to 18.
  • Actuarial "Racism" (Score:2, Interesting)

    by Baldrson ( 78598 ) on Tuesday May 13, 2003 @12:24PM (#5945789) Homepage Journal
    The most obvious place to start would be a google search for keywords "actuarial" and "racism" [google.com].

    At least humans can get the picture of what they are and are not allowed to study, lest they draw politically incorrect conclusions, so government-funded academic researchers can be made politically reliable. Can you imagine the hell that would break loose if a genuine AI started drawing its own conclusions from actual data?

  • When AI... (Score:3, Interesting)

    by smittyoneeach ( 243267 ) on Tuesday May 13, 2003 @12:27PM (#5945815) Homepage Journal
    ...starts by modeling the neurons of the brain directly as cells (implying a thorough understanding of the proteomics involved) instead of as a neural net or some other high-level abstraction, perhaps the results will be more interesting.
    Such a model is years off, though, AFAIK.
  • Not so sure (Score:4, Interesting)

    by varjag ( 415848 ) on Tuesday May 13, 2003 @12:28PM (#5945837)
    There are many authors that have written and demonstrated that the brain probably doesn't function as a mass of context-free predicate logic rules -- including my favorite, Hubert Dreyfus.

    Dreyfus's argument is old, and its rebuttals are well-known. Consider that symbolic systems are not limited to context-free predicate logic.

    The progress of AI is uncertain, but it is certain that there's no future for symbolic logic AI.

    It is not certain for me.

    Both connectionist and symbolic approaches may succeed if given enough time. However, I think that the obsession with neural nets many people here have is of the same nature as the obsession of numerous early aviation enthusiasts with wing-flapping devices. Certainly you can mimic the mechanics of nature with some effort, but there are usually better ways to do the job.
  • by rpk ( 9273 ) on Tuesday May 13, 2003 @12:39PM (#5945953)
    Yep, logic-based AI is definitely at a dead end. Minsky won't admit it. A lot of AI researchers at MIT went to the Media Lab when they saw the writing on the wall.

    Robots are not a bad thing to work on if other kinds of AI are going to have a chance, because a more holistic kind of AI would recognize that intelligence and cognition first emerged as a function of having a physical body. On the other hand, it's just robotics, it's not AI itself.

    Also, AI was good for the hackers who supported its development on computer workstations. Systems like the Lisp Machine still compare very well to current languages and tools.
  • by Anonymous Coward on Tuesday May 13, 2003 @12:43PM (#5946001)
    What's the difference between thinking and fooling people into believing you're thinking? Does the distinction matter at that point?
  • by ca1v1n ( 135902 ) <snook.guanotronic@com> on Tuesday May 13, 2003 @12:45PM (#5946016)
    Actually, they have made significant strides already in figuring out how the brain works. Check out the Levy Lab [virginia.edu] at the University of Virginia. They've trained a computer model of a rat's hippocampus to do all sorts of intelligent things, such as transitive inference, sequence completion/combination/disambiguation, goal finding, etc. While these are not difficult problems for humans to solve or hardcode into a program, the fact that a single network can do these different and sometimes contradictory things represents something that I would call intelligence. As far as I know, they don't plan on having a model of a human brain very soon, since U.Va. lacks NSA-scale compute servers, but even rat-level learning is pretty cool.
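
    For readers unfamiliar with the transitive-inference task mentioned above, its logical skeleton is tiny: train on adjacent pairs such as A>B and B>C, then probe with an untrained pair such as B>D. The closure computation below is just the task definition, for illustration; the interesting part of the Levy lab's work is that a hippocampal network model learns this rather than having it coded in.

        def transitive_closure(pairs):
            known = set(pairs)
            while True:
                # Chain (a > b) and (b > c) into (a > c).
                new = {(a, d) for a, b in known for c, d in known if b == c}
                if new <= known:
                    return known
                known |= new

        trained = {("A", "B"), ("B", "C"), ("C", "D"), ("D", "E")}
        inferred = sorted(transitive_closure(trained) - trained)
        print(inferred)   # includes ('B', 'D'), the classic probe pair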
  • Re:About Minsky... (Score:5, Interesting)

    by pz ( 113803 ) on Tuesday May 13, 2003 @12:46PM (#5946025) Journal
    Try and name an AI researcher who is not a self-important jerk...

    Oh, say, Rod Brooks, Tomas Lozano-Perez, Hal Abelson, Gerry Sussman, Eric Grimson, Pat Winston, Tom Knight ... all at MIT/AI ... need I continue?

    The difference between Minsky and the rest is precisely as the first poster asserted. Having read Minsky's books, known him professionally and personally, and having taken his course, I must agree that the weight placed on his words is not equal to their value. As others have observed (I forget whom and where), Minsky's original contributions were interesting ramblings at the edge of a new field which happened to pinpoint rich veins of research in some cases, and kill off valuable paths in others (think perceptrons, which are, yes, in fact, very useful things, and yes, in fact, do model real neurons reasonably well, and no, are not computationally impoverished unless you abide by Minsky and Papert's artifice of only single layers; a sketch of that limitation follows below). In other words, in some cases he got lucky, in others he fell flat. This initial success led him to continue pontificating (think "Society of Mind", a book of little real contribution), while doing marginally small amounts of actual research. Rod Brooks, in contrast, has made far more, and far deeper, contributions working on his subsumption architecture.

    Minsky's course (at the advanced graduate level) consists of students listening to his musings and ramblings, which he often repeats through the term, since he has no syllabus, no agenda, and no apparent desire to teach. When he gives talks, they are all extemporaneous; someone like Churchill could pull that off, but Minsky's stream-of-consciousness style keeps his acolytes happy while leaving those with a real thirst for knowledge quite parched. Does this not fit the accusation?

    So what if Minsky thinks graduate students shouldn't be soldering robots? Does that matter? So what if the current AI field isn't following his pet projects; is he making any contributions himself? We've made tremendous strides in AI over the past decade; they just haven't been where Minsky thinks they should be, despite his questionable overall track record. Exactly why should anyone care that much?
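
    The single-layer limitation mentioned above fits in a dozen lines: the classic Rosenblatt learning rule separates linearly separable data (OR), but no single-layer perceptron can learn XOR, which is the Minsky/Papert result; adding a hidden layer removes the limitation. A toy demonstration:

        def train_perceptron(samples, epochs=100, lr=0.1):
            w = [0.0, 0.0]; b = 0.0
            for _ in range(epochs):
                for (x1, x2), target in samples:
                    out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
                    err = target - out             # Rosenblatt update rule
                    w[0] += lr * err * x1
                    w[1] += lr * err * x2
                    b += lr * err
            return lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

        inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
        for name, labels in (("OR", [0, 1, 1, 1]), ("XOR", [0, 1, 1, 0])):
            f = train_perceptron(list(zip(inputs, labels)))
            got = [f(*p) for p in inputs]
            print(name, "learned" if got == labels else "NOT learned:", got)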
  • by bigpat ( 158134 ) on Tuesday May 13, 2003 @12:59PM (#5946175)
    Seems like AI has progressed about as far as it can go inside the box; only machines that can proactively interact with an environment as people do will learn to think like people.

    Imagine what kind of thing you would be without vision, touch, smell, hearing or the ability to move and change your environment. Without these forms of interaction where would human intelligence be?

    Seems that a Buddhist philosophical approach is most helpful here, i.e., we are our parts, not more and not less. We are what we are. If you wish to create something that is like a human, you should take an inventory of our parts, figure out how they fit together, and try to find analogous electronics, software and hardware.

    Which is precisely what a lot of the robot folks have started doing. Except that most have started a bit smaller and have modeled insects instead, finding that they can model seemingly complex insect behavior with simple algorithms and machines.

    Although, perhaps the next best step isn't building real robots at all, which can be expensive, error-prone and time-consuming, but building virtual robots that can be placed in virtual environments of our invention, somewhat like a "Matrix" virtual reality with intelligent agents that can learn. This approach is more computer-intensive, since the environment as well as the agent would require large amounts of computing resources; also, the agent would have to perceive the "environment".

    Seems that many more forms of human nature could be investigated in this way.

  • Re:The Cyc project (Score:4, Interesting)

    by Alomex ( 148003 ) on Tuesday May 13, 2003 @01:15PM (#5946347) Homepage

    This reminds me of a quote from the French mathematician Henri Poincaré: "just as houses are made out of stones, so is science made out of facts; and just as a pile of stones is not a house, a collection of facts is not necessarily science."

    Applied to the Cyc project: a collection of facts is not necessarily intelligence.

  • by jefeweiss ( 628594 ) on Tuesday May 13, 2003 @01:27PM (#5946487)
    It seems to me that the focus on robotics, and the insistence that computers become good at human thinking tasks, is a limited view of what artificial intelligence could be.

    If you were put inside a little white box where you had to flip millions of switches on and off according to certain simple rules, you would look like an idiot next to a computer. A computer can't walk around and recognize things, and doesn't know what an apple is; so what? In my opinion, machine intelligence should be focused on making computers able to make themselves better at what they do best. I'm not sure what a super-intelligent computer system would be used for, and I don't think that I would even be able to imagine what would be possible. I would be interested to know what other people think about this idea. Most of the things that I can think of tie back into the "real" world somehow. What would a self-organizing, non-three-dimensionally-oriented intelligence be able to do?

    Saying that AI is impossible because computers can't come into "our world" of three dimensions, or understand our literature is kind of intelligence chauvinism.

  • by waveclaw ( 43274 ) on Tuesday May 13, 2003 @01:59PM (#5946894) Homepage Journal
    What they need, then, is for an engineering student to do their masters dissertation on creating a generic physical framework for AI systems, or a computing student to do theirs on a generic simulation environment for virtual AI 'bots


    One of my major projects while at the University of Oklahoma was an Open Source 'AI SDK' - a framework to build and research AI by providing the wheels which had already been invented. Unfortunately, every time I talked with an 'AI' researcher about this I got one of several responses:


    1. We don't need it, my [insert project here] is the True(tm) way - and with the [insert latest breakthrough in computer performance, modeling tools etc] we will win the race!


    2. How dare you think you know enough about [insert project here] to do anything with it? Only my well-paid graduate slave^H^H^H^Hstudents could even attempt it, and only with my special insights.


    3. You don't need all this other stuff like support for [insert other projects]. The SDK will be too big and slow to do anything well.


    4. Neato! I'll have a [insert soon-to-graduate student] look into it. (Never got a response.)


    These were the kinder remarks I got. I won't go into the phone call I had with an engineering professor who simply ranted for 10 minutes about how CompSci people are all stuck-up theoreticians who can't make anything to save their lives. The truth about A.I. research is that it is a fragmented ivory tower with little fiefdoms ruled by professors with tenure. I've met some really cool people and learned some impressive stuff doing an A.I. SDK (you should see the wall of textbooks you can accumulate). But very rarely have I encountered someone who goes to the conferences to talk with their fellow researchers rather than just present the progress of the latest and greatest OneRightWay(tm).


    I've still got the sources in CVS for part of the framework, but with the (dis)encouragement I got, it's painful to look at the sources without remembering all those disappointments...

  • Re:MOD PARENT UP (Score:4, Interesting)

    by rickwood ( 450707 ) on Tuesday May 13, 2003 @02:11PM (#5947055)
    Just guessing here, but I believe the grandparent post was referring more to the work of people like Douglas Hofstadter [indiana.edu] than to work being done on artificial neural networks. I can highly recommend Fluid Concepts and Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought [indiana.edu] as an introduction to this branch of AI research.

    I get the impression, though, that many AI researchers see Hofstadter as a heretic. That's too bad, because I think the ideas he and his team have developed hold more promise than any other approach to AI currently extant.
  • by jungd ( 223367 ) * on Tuesday May 13, 2003 @02:28PM (#5947263)

    As an AI researcher and someone who's read Minsky's books and listened to him talk, I can say that he doesn't know what he's talking about. He was big in his time, but things have moved on and he hasn't. He is an old, pessimistic, armchair AI 'researcher' who still thinks AI is easy. He doesn't understand why AI needs to be embodied and situated.

    Having said that, I do agree that AI is almost going nowhere (anyone can see that). But I don't believe Minsky understands why.

    Those 'stupid little robots' are the best thing to happen to AI; unfortunately most AI 'researchers' don't really understand what they're doing. Consequently, 97% of the time and effort purportedly being spent on AI research, isn't.

    With a few exceptions, the main reason for the 'advances' we're seeing in AI/robotics now, is that algorithms are riding the wave of advances in computing power.

    My guess is that you'll see most of the advances in AI coming as more and more 'real scientists' from other disciplines - such as ethology, biology and neurology - get involved in it.

    Keep in mind that this is my opinion - shared by an increasing number of people in the field, but still a small minority.

  • by virtigex ( 323685 ) on Tuesday May 13, 2003 @02:54PM (#5947565)
    Speech recognition did not come out of AI initiatives. After expensive and fruitless attempts to apply AI techniques to speech recognition, researchers with statistical and signal-processing backgrounds made substantial progress on it during the 1990's. The core search algorithm came from the ones modems use to extract digital signals from noisy channels (sketched below).

    The speech recognition community also investigated techniques using neural networks, although these did not produce a clear win over the statistical technique called hidden Markov modelling.

    AI techniques, such as those espoused by Marvin Minsky, routinely failed completely when presented with anything approaching a real-world challenge.

    IMHO the AI investigators who have a hands-on approach to making robots deal with real environments are the only ones who are likely to rescue AI's reputation from its history of unusable results.
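
    The search algorithm in question is Viterbi decoding: the same dynamic program modems use to recover symbols from noise finds the most likely hidden state sequence of an HMM in a recognizer. The two-state toy model below is invented purely for illustration.

        def viterbi(obs, states, start_p, trans_p, emit_p):
            # best[s]: probability of the most likely path ending in state s
            best = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
            path = {s: [s] for s in states}
            for o in obs[1:]:
                new_best, new_path = {}, {}
                for s in states:
                    # choose the predecessor maximizing the path probability
                    prev = max(states, key=lambda r: best[r] * trans_p[r][s])
                    new_best[s] = best[prev] * trans_p[prev][s] * emit_p[s][o]
                    new_path[s] = path[prev] + [s]
                best, path = new_best, new_path
            return path[max(states, key=best.get)]

        states = ("vowel", "consonant")
        print(viterbi(
            obs=["loud", "loud", "quiet"],
            states=states,
            start_p={"vowel": 0.6, "consonant": 0.4},
            trans_p={"vowel": {"vowel": 0.3, "consonant": 0.7},
                     "consonant": {"vowel": 0.7, "consonant": 0.3}},
            emit_p={"vowel": {"loud": 0.9, "quiet": 0.1},
                    "consonant": {"loud": 0.2, "quiet": 0.8}},
        ))
        # -> ['vowel', 'vowel', 'consonant'] (most likely hidden sequence)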

  • by axxackall ( 579006 ) on Tuesday May 13, 2003 @04:54PM (#5948862) Homepage Journal
    Minsky's right: AI as a science has died, not because it was impossible to improve the theories, but because it wasn't making any money.

    The real money will begin to flow once humankind stops being scared of directly integrating humans into computer networks.

    I am not sure when, but ultimately all keyboards, mice and screens will take their places in museums. People will communicate with computers and each other by connecting computers directly to their brains. Thus, solid knowledge of natural intelligence will be required.

    I think the first researchers are already working on it in military-sponsored labs. Of course volunteers realize that they can be seriously damaged or die, but death is natural in the military industry. The military industry operates with huge amounts of money. But that's often not exactly a "free" market - all contracts are signed through lobbying and bribes.

    Once the first "Unisoldiers" are available on the market (sorry, on the job market), then next to the militaries there will be strong demand from real-time traders. And that's a real market. Traders will line up for the neurosurgery to be connected to those days' electronic stock markets.

    I am not sure when such a "UI" will be available on the market, but once it is there, at some point geeks will buy it. The rest of us will be faced with a tough choice: to stay 100% "natural" or to win a better job contract.

    Now, where is AI? The answer is simple: ultimately there will be neither AI nor NI (N as in natural); there will be SAI: Semi-Artificial Intelligence. No need to think in English letters if the UI can get the concepts you think of. No need to count numbers if software can do it for you *AND* some AI can do reasoning about when, why and how you want it done. The trick is that there is no need to automate the reasoning 100%, as your brain is already connected and can do part of the job in that reasoning.

    For example, no need to create a very complicated DB query, as SAI can use part of your brain to post-filter a small set of data after the pre-filtering of a big set of data is done automatically in the DB engine.

    Many problems of software development can be solved if, in addition to humans using computers, computers use human brains.

    That's what I call SAI.

  • by DrMorpheus ( 642706 ) on Tuesday May 13, 2003 @05:09PM (#5949012) Homepage
    Although I'm familiar with the Chinese Room argument I haven't read Searle's and Block's discussion. I have, though, argued with other researchers about this topic and the most vigorous defenders of the "intelligence is innately biological" argument all end up sounding like Vitalists [cod.edu].

    The nineteeth century debate between two camps of biologists, "Vitalists" and "Mechanists," is very similar to the debate between those who think machines can eventually have intelligence and those who think only biological systems can possess intelligence.
    Vitalists believed that living beings had something more than their physical and chemical composition which differentiated them from non-living matter. This difference was a "vital spark" or elan vital which made them innately different from ordinary or "dead matter." Their opponents, the "Mechanists" believed that living things were essentially no different than non-living things, at least in terms of what they were composed of. That there was no "vital spark" which separated living and non-living things but rather only a difference in their physical and chemical compositions.

    Obviously the "mechanists" won since no modern biologist believes in the elan vital.

    In a very, very similar fashion, Minsky and his supporters seem to be making the same type of argument. They seem to want humans to still have a "soul," called intelligence, something that "dumb" matter can never have. Whether they argue for a mysterious quality that only biological systems seem to possess or for mystical "quantum processes" that seem to take place only in brains and not in machines, I still call this vitalism, and I don't think it's scientific at all. It's more like an intellectual retreat to defend some deep-seated emotions about humanity's place in the Universe.
