AI Going Nowhere?
jhigh writes "Marvin Minsky, co-founder of the MIT Artificial Intelligence Laboratory, is displeased with the progress in the development of autonomous intelligent machines. I found this quote more than a little amusing: '"The worst fad has been these stupid little robots," said Minsky. "Graduate students are wasting 3 years of their lives soldering and repairing robots, instead of making them smart. It's really shocking."'"
Will we ever have *real* AI? (Score:2, Insightful)
What use is AI without an operating platform (Score:5, Insightful)
I understand his frustration with the general pace of progress. But those grad students are building a strong foundation for later work that may very well meet the goals he is espousing. There's no need to have design flaws in implementations down the road because the engineer wasn't properly educated in physical design as well as logical design.
-Rusty
About Minsky... (Score:5, Insightful)
AI (Score:2, Insightful)
Grad students having fun (Score:3, Insightful)
So hire and pay them money so they do real research instead of having fun. Otherwise quit your bitchin'. I personally think building stupid robots is cool.
Intelligence isn't the problem (Score:5, Insightful)
reply to MYCROFTXXX@lunaauthority.com
Biology First (Score:5, Insightful)
Re:Will we ever have *real* AI? (Score:2, Insightful)
The same thing could be said about you.
AI...heh (Score:2, Insightful)
I honestly don't think we understand what makes a human conscious, or what makes someone that particular person, well enough to try to replicate it in software. You can make the logic more sophisticated, but I doubt we'll ever make them truly "think." And even if we did, how could we prove it? If you think about it, how can you prove anyone other than yourself is conscious?
That's a load of rubbish (Score:5, Insightful)
Building any form of AI system is not easy, but copping out of it by building toys is not the answer. We already have platforms for AI; they're called line terminals. Things like pattern matching do not require a fully autonomous robot, after all.
Minsky is right; what new ideas have come out of actual AI research in the last 30 years?
Re:Maybe the problem is Minsky himself? (Score:5, Insightful)
Now Minsky, never wanting to admit his life's work has been a dead end, comes out saying that it's all these other researchers working in other directions who are at fault for there being no progress. I imagine he believes that if only they'd all climb the tree with him, the trip to the moon could really start.
Re:Will we ever have *real* AI? (Score:3, Insightful)
Will we ever have *real* artificial pictures? (Score:2, Insightful)
disappointing (Score:5, Insightful)
This is the most interesting comment to me. Because we understand the nature of the process that produces supposedly 'intelligent' results (and we don't understand the same process in ourselves), we perhaps rightly view the resulting system as just an application.
Seems like Minsky is throwing all his toys out of the pram because he doesn't want to admit to what everyone else has been saying for a while: that whether a computer can think is at best an astonishingly difficult question to answer and at worst meaningless. I'm a grad student who's just spent a year looking at computational linguistics and semantics (amongst other things), and the most debilitating restriction on the semantic side of things is the problem of so-called 'AI-completeness', which essentially says that if you solve this problem you have an, externally at least, thinking computer. Really simple things like anaphora resolution are AI-complete in the general case. If we could have solved this problem by now, I think it's fair to say we would have done, given its massive importance. However, we know that the brute-force case is ridiculously intractable, and we can't figure out how to do it any more cleverly. Roger Penrose argues that this is due to the fundamental Turing-style restrictions that we place on our notion of computing. Until we get a paradigm shift at that level, we're likely never to solve the general case.
And I'm sure that Minsky is aware that attempts to solve constrained domain inference and understanding have been taking place for a good long time now. I just don't see why he's so upset that the field of 'AI' (which is a nebulous catch-all term at best) has shifted its focus to things that we stand a cat in hell's chance of solving, and that have important theoretical and practical applications (viz. machine learning). Replicating human thought is not the be-all and end-all, and you can argue that it's not even that useful a problem.
Robots, though, I agree with. Can't stand the critters.
Henry
Two unknowns don't make stuff work (Score:4, Insightful)
I very much enjoy the work of Markram and Tsodyks. What they mainly analyze is how two nerve cells interact with each other. They showed how cells change their connection weights and how the timing of spikes (nerve impulses) affects how neurons connect to each other and how they transmit information.
While these studies tell us a lot about the underlying biology, they do not tell us what these modes of information transmission are used for. For years it has been known that synapses have complex nonlinear properties. Biology pretty much does not constrain what functions neurons compute.
That's why I do not believe that such studies will bring us nearer to real AI anytime soon. The algorithms coming from these systems are severely underconstrained. A lot of modelling has followed the pioneering work of Markram and Tsodyks, one example being Maas. All these algorithms are very fascinating and might yield insight into the functioning of the nervous system.
The algorithms, however, are light-years from being applicable to real-world problems. The field of AI is old and in some sense quite mature. None of the "biologically inspired" algorithms today can compete with state-of-the-art machine learning techniques.
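For anyone curious, the kind of dynamic-synapse model that grew out of that work looks roughly like this. This is my own stripped-down toy in Python, with made-up parameter values, not code from any of the papers: each presynaptic spike uses up a fraction of the synapse's "resources", which then recover over time, so a fast spike train produces progressively weaker responses.

import math

def synaptic_responses(spike_times, U=0.5, tau_rec=0.8, weight=1.0):
    # U, tau_rec, and weight are illustrative guesses, not measured values.
    x = 1.0           # fraction of synaptic resources currently available
    last_t = None
    responses = []
    for t in spike_times:
        if last_t is not None:
            # resources recover exponentially between spikes
            x = 1.0 - (1.0 - x) * math.exp(-(t - last_t) / tau_rec)
        responses.append(weight * U * x)  # response amplitude for this spike
        x -= U * x                        # the spike depletes part of the resources
        last_t = t
    return responses

# A regular 20 Hz train shows the depression the model predicts:
print(synaptic_responses([i * 0.05 for i in range(10)]))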
Re:Will we ever have *real* AI? (Score:5, Insightful)
* One might claim LTP or LTD as some sort of neuronal knowledge. Ok, that's fair, but my point stands if you apply it to the building blocks of neurons. Do ion channels "know"? Do amino acids? It's turtles all the way down.
Sour Grapes (Score:5, Insightful)
Re:What about my AIBO? (Score:4, Insightful)
To an extent, yes, it has decent pattern recognition. Can it pick you out from the rear? No. From the side? No.
Can it simulate and fool you into thinking it is showing emotions? Yes. Is it anything but an expensive toy? No.
The Aibo is amazing, but it hardly does what people think it does. And that is the key with the Aibo: it does a lot of things that fool humans quite well.
It will get better, but it is hardly near AI material.
Get AI moving with open source (Score:2, Insightful)
Using open source development, a project to establish a tool kit for AI programming fundamentals could be born. It'd definitely be cool to have something like that available. I'm not sure if MIT has anything like this going yet, but they could easily whip up the brain power to get it started (and started right).
But Robotics Must Precede AI (Score:4, Insightful)
Trying to program intelligence purely in software puts the researcher at a disadvantage, since even the most fundamental rules and attributes of things (fire is hot, water is wet) have to be explicitly entered by hand as constants.
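To make that concrete (a throwaway sketch of my own, not anyone's actual knowledge base): hand-coded world knowledge ends up looking like this, and anything nobody thought to type in simply doesn't exist for the system.

# Toy illustration: world knowledge entered by hand as constants.
# A robot with real sensors could learn these facts from experience instead.
WORLD_FACTS = {
    ("fire", "temperature"): "hot",
    ("water", "wetness"): "wet",
    ("ice", "temperature"): "cold",
}

def lookup(thing, attribute):
    # Anything not explicitly entered is unknown to the system.
    return WORLD_FACTS.get((thing, attribute), "unknown")

print(lookup("fire", "temperature"))   # hot
print(lookup("lava", "temperature"))   # unknown -- nobody typed it in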
Once robotics advances to the point where mobility, vision, and speech recognition can be taken for granted, then AI can be programmed as an add-on.
Body first, mind second - That's how animals evolved on this planet, and it's how, I believe, Rodney Brooks approaches this field.
Re:Thats a load of rubbish (Score:5, Insightful)
If a human (or any animal) were left to grow with no senses and no method of communication (or the most very basic input/output, analogous to your line terminal), what sort of intelligence would develop? Probably nothing very coherent.
BTW, AI is most certainly not pure math.
What is the purpose of AI? (Score:3, Insightful)
AI today has nothing to do with intelligence. It's all basically rule-based procedural programming. While this allows us to make some really neat applications like automatic vacuum cleaners and pool scrubbers, it has nothing to do with "intelligence".
The human mind is not rule-based -- we impose a framework of rules to allow everyone to live together in relative harmony. The core of our being -- how our mind actually works -- remains an absolute mystery.
Pot, meet Kettle (Score:5, Insightful)
Yeah, much more shocking than the decades he (and others in the 'hard AI' camp) have spent? They've made oh-so-much more progress, haven't they?
Rodney Brooks made more progress with his robots in the early nineties than the whole hard AI camp did in three decades. I remember seeing a documentary once that compared the approaches: a huge robot used a traditional procedural program to navigate through a boulder-strewn field, and it took about 3 HOURS to decide where to put its foot next. Meanwhile, little subsumption-architecture-based robots were crawling around like ants, in real time. (Oh, and some of them had to learn to walk from first principles every time they were turned on -- it only took about half an hour!) That's the most damning evidence of the failure of hard AI I can think of.
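In case anyone hasn't seen how simple the subsumption idea is, here's a rough toy of my own (nothing like Brooks's actual code): a stack of reflex-like behaviors where a higher-priority layer overrides the ones below it, with no world model and no planning.

def avoid_obstacle(sensors):
    if sensors["bumper_hit"]:
        return "back_up_and_turn"
    return None

def follow_light(sensors):
    if max(sensors["light_left"], sensors["light_right"]) < 0.1:
        return None                      # too dark to care
    if sensors["light_left"] > sensors["light_right"]:
        return "turn_left"
    return "turn_right"

def wander(sensors):
    return "go_forward"

LAYERS = [avoid_obstacle, follow_light, wander]   # highest priority first

def control_step(sensors):
    # The first layer that produces a command subsumes everything below it.
    for behavior in LAYERS:
        command = behavior(sensors)
        if command is not None:
            return command

print(control_step({"bumper_hit": False, "light_left": 0.7, "light_right": 0.2}))  # turn_left
print(control_step({"bumper_hit": True, "light_left": 0.7, "light_right": 0.2}))   # back_up_and_turn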
As others here have said, what good is a brain until we get a useful BODY working? Maneuverability and acute senses are a must before an artificial intelligence can do anything useful, or learn from its environment effectively.
Great Minsky Quote (Score:3, Insightful)
Minsky only has himself to blame. (Score:4, Insightful)
Promising subfields like perceptrons were intentionally quashed by him... he went out of his way to strangle investment and research in areas he considered to be a dead end. We're not literature majors: we can't just all say the same thing in a party over wine and cheese and call it progress.
Even bad ideas, when well explored, can give new meaning and better approaches to a field. And since this is research, no one knows the correct answer: even a dumb-seeming idea may turn out to be the right one -- or give us clues about features the right answer needs to have.
Of course we've had major advances in AI. One of the challenges of AI, as the article points out, is that once something is well understood, it is defined as being outside the AI field. Computer vision, face recognition, voice and speech recognition. Conversation engines like SmarterChild. No, this isn't HAL, but they are good, positive steps in the right direction.
Re:But Robotics Must Precede AI (Score:3, Insightful)
"Body first, mind second" sounds nice, but without reproduction and a mechanism for evolution you're not doing anything but creating an environment for your AI to interact with - you're not creating the pressures that caused evolution of intelligence in nature. So why go through the trouble of mechanics when you can simulate environments that are much simpler and easier both to interact with and to understand?
Re:What about my AIBO? (Score:3, Insightful)
MOD PARENT UP (Score:5, Insightful)
I am an AI researcher and the parent poster is speaking truthfully.
The main challenges in AI at the moment do not concern building the physical robots -- e.g. a piece of kit on wheels with IR sensors or such things.
The main challenges in AI concern applying some very complicated math to solve problems like pattern recognition, density estimation and other forms of machine learning.
It seems to me that a large number of AI PhD students spend their lives tinkering with the mechanics and electronics of the robots that will ultimately be used to test their algorithms. This is wasted time; a good electronics graduate should be able to do the tinkering, it shouldn't require a prospective AI PhD student to do it.
I can see the point in the PhD student learning a little about the hardware that they want to run their algorithms on (so that they know the limitations and common problems with real hardware), but they should not spend all their time doing that and wasting the opportunity to spend their time contributing to their field (i.e. AI, not mechanics or electronics).
That said, many AI labs do not have the funding to pay full-time hardware technicians, so in many cases the PhD student *has* to do the tinkering.
Re:Biology First (Score:5, Insightful)
And artificial intelligence doesn't necessarily have to reflect human intelligence.
Minsky is dangerous (Score:5, Insightful)
Before you yell flamebait or troll, let me explain.
I have been following the progress of various AI technologies, including neural nets and adaptive logic networks, for many, many years now. Years ago, perceptrons were first developed, and it was shown that they could learn simple patterns. Perceptrons were basically two layers of software-simulated neurons. They worked, and researchers were fascinated and worked on them regularly.
Minsky, being the "highly regarded" "leader" in AI, wrote a paper that proved that these perceptrons could never learn more complicated patterns, and threw a bunch of math at the reader. So people stopped. After all, there was a mathematical proof that perceptrons weren't going anywhere. Research skidded to a halt for decades because of Minsky.
Of course, then someone developed the (gasp!) THREE-layer perceptron/neural net, and sure enough, with the right formula it could learn much more complicated tasks.
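(For anyone who's forgotten the details: XOR is the classic pattern a single-layer perceptron provably cannot represent, because it isn't linearly separable; add one hidden layer and it's trivial. A tiny sketch of my own, with weights picked by hand just to show the representation exists -- a trained net would find something equivalent:)

def step(x):
    return 1 if x > 0 else 0

def xor_net(x1, x2):
    h_or = step(x1 + x2 - 0.5)       # hidden unit 1: fires if either input is on
    h_and = step(x1 + x2 - 1.5)      # hidden unit 2: fires only if both are on
    return step(h_or - h_and - 0.5)  # output: "or, but not and" = XOR

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))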
Minsky, in my opinion, does this regularly. The problem is, that he has a reputation in the industry as being a leader (I'm not sure why).
He's already lost us two or three decades of research because of his "leadership" -- I am terrified that he might cost us more development into the future.
Where could we have been if Minsky wasn't always going off half-cocked, screaming that he is right? "Robots are useless!" is history repeating itself and him trying to get more press. Keep developing, guys; just ignore the peanut gallery. There's always someone who says it can't work (ahem, Minsky) -- it can and it will.
Re:Will we ever have *real* AI? (Score:2, Insightful)
A major line of thinking in AI research is not to try to mimic the *way* the human brain works, but instead to focus on realizing the same functionality, i.e. giving machines/computers 'intelligence', or the ability to learn by example or from experience. This includes the ability to handle unseen/unknown situations, inferred from seen examples...
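As a toy illustration of that (my own made-up example, not any real system): a one-nearest-neighbour classifier is never given an explicit rule, yet it labels inputs it has never seen by generalizing from stored examples.

def nearest_neighbor(train, query):
    # train: list of ((x, y), label); query: (x, y)
    def dist2(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    _, label = min(train, key=lambda ex: dist2(ex[0], query))
    return label

examples = [((0.1, 0.2), "cold"), ((0.2, 0.1), "cold"),
            ((0.9, 0.8), "hot"), ((0.8, 0.9), "hot")]

# An input the system has never seen before still gets a sensible answer.
print(nearest_neighbor(examples, (0.75, 0.95)))  # hot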
I wonder what... (Score:3, Insightful)
About 20 years ago AI was going down the craperroo until folks like Brooks decided that the AI field would be better served by moving it from the more theoretical GOFAI method to a more applicable style. Revitalized everything.
Re:Biology First (Score:2, Insightful)
Actuarial Lobotomies (Score:2, Insightful)
The first real progress in AI will come from someplace like a grey-market reinsurance network hiding out from the "regulators".
Re:What about my AIBO? (Score:5, Insightful)
i agree, and i find exception also. (Score:1, Insightful)
"Marvin Minsky, co-founder of the MIT Artificial Intelligence Labratories is displeased with the progress in the development of autonomous intelligent machines."
My postgraduate work was in artificial intelligence, don't ask why.
There is no doubt that Dr. Minsky is both very correct, and very wrong. All in the same sentence.
He's very correct about the leaps of understanding in the area of A.I.: it's dismal. There are no "C3POs"; there is no "Lawn Mower Man". After forty-plus years, you'd think there would be a group of people with the mental 'cojones' to pull these projects off. Well, where are they?
But real-world applications of subsets of applied A.I. are everywhere today. One need only start their new car, or do their laundry in a new washing machine, to see something called fuzzy logic at work. For the great unwashed out there, fuzzy logic is a computer's 'guessing' algorithm. A more cruel definition is 'mathematical joint probability'. There are plenty of examples of 'learning' by computers; this is done using 'neural nets'. Credit card companies can use 'reasoning', or 'truth tables', to determine if something doesn't seem right.
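Since fuzzy logic came up, here's roughly what the 'guessing' looks like in code. This is a toy of my own invention, not any real appliance's firmware: the input belongs to overlapping fuzzy sets to a degree, and the output is a blend weighted by those degrees.

def tri(x, a, b, c):
    # Triangular membership: 0 at a and c, 1 at b.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def wash_minutes(dirtiness):            # dirtiness on a made-up 0..10 scale
    light = tri(dirtiness, -1, 0, 5)    # membership in "lightly soiled"
    medium = tri(dirtiness, 0, 5, 10)   # membership in "medium"
    heavy = tri(dirtiness, 5, 10, 11)   # membership in "heavily soiled"
    # Each fuzzy set votes for a wash time; blend the votes by membership degree.
    votes = [(light, 20), (medium, 40), (heavy, 70)]
    total = sum(w for w, _ in votes)
    return sum(w * t for w, t in votes) / total

print(wash_minutes(3))    # between a light and a medium wash
print(wash_minutes(8.5))  # mostly a heavy wash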
But as Carl Sagan said, "take baby steps". And I believe we have no choice; we are taking baby steps.
Re:What about my AIBO? (Score:5, Insightful)
My dog can do the same thing.
Re:Sour Grapes (Score:5, Insightful)
He's right. Theoretical work has ground to a halt in the U.S. Universities have succumbed to business-oriented research to make money -- although, given all that cash, it's amazing that tuition keeps skyrocketing.
Government is now pretty much owned by corporate people, and they aren't meting out grant money for long-haired theorizing about AI. Grant-driven university research is now pretty much a free source of corporate R&D.
Minsky's right: AI as science has died, not because it was impossible to improve the theories, but because it wasn't making any money.
Re:Maybe the problem is Minsky himself? (Score:4, Insightful)
But real AI comes when you create a 'stupid' system that is able to become smart through learning and training. He feels disappointed because nobody (or almost nobody) is focusing in this direction.
You have smart toys a la Aibo, and smart systems a la Eliza, and a lot of people are working towards creating smarter toys and smarter systems, but the real breakthrough will come when somebody manages to create the dumbest system possible.
Re:Will we ever have *real* AI? (Score:4, Insightful)
That's what Minsky is getting at. Few people are working on that problem.
Research talent in universities seems to be striving for business solutions. But IMO, such research should primarily be done by businesses, not AI labs. Universities should create new science.
Re:AI...heh (Score:3, Insightful)
1. consciousness is just an abstraction (like that actually means anything)
2. prove you have a soul (irrelevant)
3. you're just a machine, you dolt! (doesn't explain why he's aware of himself, irrelevant)
4. more along the same vein
Now, I'm not discounting those arguments, just pointing out that they are completely uninformed. What I mean by that is, no one knows anything about souls, consciousness, etc. We understand that our brains are extremely complex information processing machines, but that doesn't help to explain why we are aware of ourselves.
Perhaps there is something we haven't found yet that makes sentient beings sentient, and perhaps not. Perhaps it is a result of something so complex that the human brain is incapable of comprehending it. Perhaps it is something we can't ever physically detect. Perhaps consciousness is something that is pervasive throughout our universe, throughout all matter, and our brains are a physical machine that links a bunch of it together.
My point is simply that we haven't even begun to understand consciousness in any way shape or form. People who say it's a result of a soul, or it doesn't really exist, or is the result of a complex thinking machine, are all deluding themselves. At this point, there is simply no way anyone can seriously speculate about it. We don't even completely understand the ways in which physical matter interacts in our universe, nor whether what we know as physical matter is all there is that is here.
It's beyond us right now, and is likely to remain so for a long time.
Oh, and BTW, The Matrix(TM) has you! :)
Re:Minsky only has himself to blame. (Score:3, Insightful)
The original poster probably meant neural networks more than perceptrons. By the time his paper was published (1969), neural networks were far more advanced than a simple perceptron, and had easily overcome the linear separation problem. Some people claim he, along with Papert, engineered this paper to get a juicy DARPA grant that was just about to be assigned. His paper effectively killed research in this area for almost 20 years.
Minsky has always been a bit of a weasel and knows very well how to pull the strings of power (cash flow) to favour his research and grants. This last statement of his does not come as a surprise to me.
Smarter Bugs... (Score:1, Insightful)
From my limited view of what has gone on up in Boston, I think I can understand his frustration when put into the context of a presentation I attended about 10 years ago by the Media Lab. I think it may have been Mr. Negroponte.
The first comment he made was that we would always be disappointed with the growth of technology when viewed over a 5-year frame of reference. He used desktop computing as an example. He then stipulated that we would more likely be amazed at the growth in technology over a 10-year frame of reference, again pointing to the desktop PC and the 10-year anniversary of the IBM PC.
The rest of the conversation had to do with what to expect (or perhaps be disappointed in) over the next 5 years.
His first stipulation was that we would all have machines with 1000 MIPS running on our desktops. He then went on to speculate what we would possibly do with such powerful machines. The answer was that at 1000 MIPS we would have surpassed the boundaries of OS and software to create systems with massively parallel adaptive agents. Thinking machines. A computer that would have an anthropomorphic interface that would quickly and to the user effortlessly adapt itself to the needs and uses of the user.
Well, now it's at least a decade since I saw that presentation. I've used, loaded, and poked at a few versions of MS, OSX, Linux, and Irix. My older machines run at 400 MHz, but most are in the 2+ GHz range. Can't say that I've seen too many signs of adaptive agents, let alone anthropomorphic interfaces (except in modeling dumbass behavior), in much of the software I've seen lately. Am I wrong?
Mr. Minsky observed bug-like behavior in his robots decades ago. I think herein lies the source of his frustration. Several quantum leaps in technology later, and all we have to show for it is slightly smarter bugs.
Re:MOD PARENT UP (Score:4, Insightful)
In most cases, the hardware and its limitations can be simulated. The only reason that most robotic AI projects are embedded in hardware is because it makes good eye candy for the science press, funders, etc. If you have a good simulation of the environment and the platform, you no longer need to build the hardware for AI research to proceed.
Also, why does one need to build new platforms each time a project ensues? Many robotic components could be reused so that only processor boards, motive actuators, or sensors would need to be updated. The reuse of firmware would cut down on the amount of programming time, as well. I think a good MS thesis would be to develop a kind of common robotic architecture along with a simulation testbed. This would allow the AI researchers to get back to work and only do last minute tinkering.
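A hypothetical sketch of what that common interface plus simulated backend might look like (names and details are mine, not an existing framework):

from dataclasses import dataclass
import random

@dataclass
class Pose:
    x: float
    y: float

class SimulatedRobot:
    # Stands in for a physical platform; a hardware class could expose the same API.
    def __init__(self):
        self.pose = Pose(0.0, 0.0)

    def read_range_sensor(self):
        # Pretend sensor: distance to a wall at x = 10, with a little noise.
        return max(0.0, 10.0 - self.pose.x + random.gauss(0, 0.05))

    def drive_forward(self, distance):
        # Imperfect actuation, like a real robot.
        self.pose.x += distance * random.uniform(0.9, 1.1)

# The AI-side code only ever sees the interface, so swapping in real hardware later is cheap.
robot = SimulatedRobot()
while robot.read_range_sensor() > 1.0:
    robot.drive_forward(0.5)
print("Stopped near the wall at", robot.pose)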
Critical? (Score:5, Insightful)
People just accept it, and progress is delayed.
Why is it his fault that there are so many followers? If anything is to blame, it is that these researchers just blindly follow whatever he's saying rather than taking a good critical look at what is going on.
If his math and theory "proved" that an area of AI was a dead end, and it wasn't, his math/theory was wrong. It is a sad state when nobody dare challenge the status quo.
Minsky is in his own little world (Score:3, Insightful)
I suspect part of Minsky's problem is frustration that his ideas about AI aren't bearing fruit, so he's going to take it out on other people's different approaches. It's not much different from the Perceptrons paper he co-wrote in the late 60's that nearly killed Neural Network research for most of the next decade. Never mind that there was plenty of useful neural network research to be done that avoided the failings of the perceptron model.
In my opinion, if we had the holistic understanding of intelligence that would let us use Minsky's type of approach to AI, there wouldn't be anything left to do but implementation! One cannot just assume all the principles of intelligence a priori, by self-examination, and that's where he fails. There are interesting things to learn in that approach, but a large number of them have already been learned, so people are turning to other means (bottom-up approaches focusing on self-organization are doing well and leading to new discoveries) to get a broader understanding of what is involved in intelligence.
Just because Minsky has sour grapes doesn't mean that the robot people aren't doing useful research.
Re:What is the purpose of AI? (Score:3, Insightful)
So, when you see a cliff, you blindly walk off it? After all, there are no rules disallowing it, and it's not clear that anyone else's "relative harmony" would be disturbed by such a thing.
Humans have many rules that they create and live by themselves -- many having to do with self-preservation, self-actualization, and motivation -- that have little to do with your simplistic explanation of "no internal rules".
AI winter II ? (Score:5, Insightful)
"AI winter" [216.239.51.104] is the name given to the collapse of strong AI as a business model in the mid 80's - expert systems and symbolic AI in general didn't deliver on their promises, and so the money went away. As a guy who got his doctorate in AI in 1985, I can tell you all about it.
One of the major causes of AI winter was researcher hubris - lots of people hacked up systems that appeared to solve 80 percent of certain complex problems and then said "all that stands between us and a complete solution is money and time". For many of those systems, solving the last 20 percent would have taken 2000 percent of the time, if it could have been done at all. The tragedy of AI winter, though, is that basically all of symbolic AI was abandoned, though some of it is creeping back out into the light with obfuscated syntax (see my
What Minsky sees here is a lot of people heading down the same path, but with neural nets and small robots instead of expert systems. The new systems are doing some interesting things relative to the old symbolic AI systems (though they do have the advantage of 20 years of Moore's law to help them). But, will they scale up? Right now, nobody knows. If they don't, the last thing the field needs is another cycle of overpromise/underdeliver/abandon.
Maybe AI is just plain hard, and cracking it will take longer than one or two computer industry business cycles.
Re:MOD PARENT UP (Score:2, Insightful)
Marvin Minsky is an idiot (Score:4, Insightful)
Minsky belongs to the old school of AI thinking. These guys believe that it is possible to make statements about intelligence itself, without considering the interactions of the organism/agent with its environment or the underlying architecture of the brain/CPU. I think that the total failure of this style of thinking to produce anything interesting in 50 years proves that this approach is sterile. Minsky laments the fact that graduate students build robots, but this activity exposes students to the challenges of constructing a device that must actually interact with the environment. It is ridiculous to assume that you could design a system capable of intelligent behavior without ever confronting the problems of sensors and actuators. Almost every part of the brain is devoted to processing raw sensory input or generating motor output. One cannot simply design an intelligent system without worrying about sensory input and behavioral output. The CYC project of Lenat has the laudable goal of teaching a machine "common sense" by hard-coding a vast database of simple statements like "Trees cannot walk". This is a totally wrongheaded approach to learning and reasoning, and is typical of old-school, hard AI.
We will only make progress in engineering intelligent, adaptive systems by studying actual examples of intelligent, adaptive systems, namely animals. Neuroscientists and psychologists are beginning to embrace the tools of mathematical modeling and simulation to help explain nervous system structure and function. Computer scientists would do well to similarly embrace the work of experimental neuroscience.
Minsky is a dinosaur.
Marvin Minsky has no clue (Score:3, Insightful)
Do not be naive (Score:2, Insightful)
No, it hasn't.
Those who know what we're doing just aren't advertising it, and for the most part we have to wait for the raw aggregate processing power of readily available computer technology to reach much higher levels before we can put our theories to a nontrivial test.
Consider the raw data complexity of the human brain: About 30 TB of synapse states, about 10% of which is actively being read and applied to change the states of other synapses at any given moment, at a rate of up to about 1000 times per second. 30 TB *
If we assume that someone's working theory of sentience requires levels of data and data processing comparable to the human brain's, it's going to be several years before it's feasible to put together a computing cluster with that much aggregate main memory bandwidth, and a few years more for the nodal interconnect, even with SpringOS-style duplication of information across the network. Multiple 100 Gb interfaces per node at least.
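The back-of-envelope arithmetic, using the figures above (my own rough numbers; the real brain is nowhere near this tidy):

synapse_state_bytes = 30e12   # "about 30 TB of synapse states"
active_fraction = 0.10        # "about 10% ... actively being read ... at any given moment"
update_rate_hz = 1000         # "up to about 1000 times per second"

bandwidth = synapse_state_bytes * active_fraction * update_rate_hz
print(f"~{bandwidth / 1e15:.0f} PB/s of aggregate memory traffic")   # ~3 PB/s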
Right now, the optimal behavior is to sit on our pet algorithms, read up on the progress of others, and try to make lots of money while waiting for commodity computer hardware to get a couple orders of magnitude more powerful.
I switch off between hoping the world doesn't bomb itself back to the stone age before then, and hoping that it does -- there's really no way of knowing whether a successful artificial sentient is going to be our benefactor or a monster, despite the best efforts of its keepers.
Re:What's the difference? (Score:4, Insightful)
The same difference between cheating on a test in a subject you've never taken and actually knowing the material: as soon as you're asked a question that isn't on your crib sheet, the charade is over.
An AIBO has a pre-programmed set of behaviors, and any stimulus it isn't programmed to respond to will not have a realistic effect. The same is true of Loebner Prize winner Alice -- the only difference is the size of the pre-programmed response database. Ask it something it's never heard, and it will choke.
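Something like this, in caricature (a toy of my own, nothing to do with how Alice or AIML is actually built): everything the bot can "say" was put there by a human, so novel input breaks the illusion.

RESPONSES = {
    "hello": "Hi there! How are you today?",
    "what is your name": "My name is Toybot.",
    "do you like dogs": "I love dogs, especially robotic ones.",
}

def reply(user_input):
    key = user_input.lower().strip("?!. ")
    # Anything the programmers didn't anticipate falls through to a stock dodge.
    return RESPONSES.get(key, "I don't understand. Could you rephrase that?")

print(reply("Hello"))                                  # canned answer, looks smart
print(reply("What did you dream about last night?"))   # and the charade is over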
When these AIs are able to produce convincing responses to new stimuli, then I'll say that the difference between "fooling" and actually thinking has become irrelevant.
but robots are the real goal (Score:2, Insightful)
We already have lots of smart people all over the place - what we need is smart robots that can do things that people can't.
Imagine if you could get a whole slew of robots to sort a landfill into elementary components. Imagine if you could get robots to put out fires and rescue people. Imagine if you could get robots to sew any garment you wanted from a download of the latest fashion trend. Just imagine!
Without extremely advanced senses and mechanisms, and the all-important control of those things, robots will never be able to do these things. Marvin Minsky is right that those graduate students shouldn't be spending 3 years just getting the machine to work. They should buy the robot and spend 3 years programming it and outfitting it with new sensors. Robot companies should be more common. But the robot market is still in its infancy. Once it gets jump-started, it'll be brilliant.
Re:Software is behind, not hardware (Score:3, Insightful)
So I'll have to disagree that the problem is software; technically, if you had infinite processing power, you could have something that resembles a human with nothing more than time. In fact, if such processing power existed, the time it took such a system to learn would be shorter than that of a human, especially if it was receiving verified factual input on a consistent basis. It'd also be nice if it had an everlasting memory, meaning it would remember something 65 years from now as if it had just learned it.
A lot of people play with this same idea in their heads day in and day out. Personally, I believe the answer is somewhere between adopting a brain for processing usage and using quantum mechanics to reach the "singularity". This could be right around the corner, or it could be years away; it all depends on where the money is spent. Personally, robots are a step, but not in the right direction, because they simply don't address the problem.
Re:Software is behind, not hardware (Score:2, Insightful)
Anyway, I do agree with you in part. I think that software is behind hardware, but since the hardware is at least several decades off, software still has plenty of time to catch up; all it needs is a breakthrough or two.
Re:What about my AIBO? (Score:5, Insightful)
It doesn't work this way, and yes, there is a difference. Having an outward appearance of intelligence is not enough to show intelligence. Read Searle and Block's discussions of the Chinese Room argument -- it's a fascinating and eye-opening read (I think it was Block who -- quite convincingly, IMHO -- makes the case that most of our intelligence is innately biological, and "strong AI" is not even possible with what we know today).
IMHO one of the problems with AI is that we don't even know what human intelligence is, and until there is a fundamental advance (not technological but in our understanding of our human/biological mind) then it seems to me the most we can hope for are machines that mindlessly ape intelligent behavior, but are not intelligent in any but very superficial ways or by very loose definitions.
Something that mimics the outward appearance of intelligence is a far cry from what, hopefully, we'll be capable of in the (probably still distant?) future.
he is half right (Score:2, Insightful)
What many people do in the robotics community is not AI, which is considered crap by many. Just trying to get a robot to localize itself in a room is hard, and it is not an AI problem but a complex application of statistics and other mathematics for making use of sensor data (from sonar, radar, cameras, lasers). Unlike Minsky, there are some people out there who know what good engineering and science is, and it doesn't include "AI".
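For a flavor of the statistics involved, here's a tiny 1-D histogram ("Bayes") filter of my own -- the map, sensor model, and motion model are all invented for illustration, but it's the same idea real localization systems scale up: fold noisy sensor readings into a probability distribution over where the robot might be.

WORLD = ["door", "wall", "wall", "door", "wall"]   # what a perfect sensor would see at each cell
P_HIT, P_MISS = 0.8, 0.2                           # the sensor is right 80% of the time

def sense(belief, measurement):
    # Bayes update: weight each cell by how well it explains the measurement.
    weighted = [b * (P_HIT if WORLD[i] == measurement else P_MISS)
                for i, b in enumerate(belief)]
    total = sum(weighted)
    return [w / total for w in weighted]

def move_right(belief):
    # Exact one-cell shift in a cyclic world (a real robot would add motion noise).
    return belief[-1:] + belief[:-1]

belief = [1.0 / len(WORLD)] * len(WORLD)   # start completely uncertain
for z in ["door", "wall", "wall"]:         # sensor readings taken as the robot moves right
    belief = move_right(sense(belief, z))
print([round(b, 2) for b in belief])       # the probability mass piles up on one cell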
Re:When AI... (Score:3, Insightful)
This is a very difficult and interesting problem. I do not mean to diminish the hard work of any researcher in this area.
However, when (not if) this is understood, I think we will be treating it as a brute-force solution. We will not likely be replacing pocket calculators with emulated brains to help us do long division. A computer chess program will probably still beat an emulated brain. The human brain is well adapted to its environment as a result of millions of years of evolution. However, it is not nearly the optimal solution for a great many problems. For example, is the best doctor or programmer necessarily a human or emulated brain?
Those solutions are just as interesting.
Re:What about my AIBO? (Score:2, Insightful)
I've read it and must disagree with the conclusions put forth. IMO, the "intelligence" in the Chinese Room conjecture is not the person executing the rules. The intelligence is the rules AND the person (i.e. both together, not one or the other individually).
Remember, they were out to prove that intelligence and awareness are something special (IMO, that means the same thing as mystical, and is no different from believing in souls, ghosts and spirits).
You also wrote: "the most we can hope for are machines that mindlessly ape intelligent behavior". Actually apes are VERY intelligent (relatively speaking). It has been shown that chimps can understand abstract concepts such as models (i.e. this thing represents that thing) and live in culturally rich societies. Hell, even dogs dream.
Intelligence and self-awareness are not on/off states. They exist on a spectrum. How self-aware is an ant? How about a large spider? A shark? A dolphin?
AI will not happen via a breakthrough. It will happen through slow incremental steps. We'll barely notice it and people will be complaining that we don't have "real AI" when some machines are smarter than us. Whatever that means.
Classic Minsky vs. Brooks (Score:3, Insightful)
This is a classic battle between Minsky and Brooks. Heck, we had the same battle in our labs (not MIT). I believe that the Brooks response is along the lines of "sure you'll take an extra year to graduate with me, but you'll have one hell of a demo tape." I agree with Brooks. I still show people videos of one of my robots years later. I've never shown anyone any of my simulated robot work afterwards.
Re:What about my AIBO? (Score:3, Insightful)
Re:What about my AIBO? (Score:1, Insightful)
IEEE standard robotic interface? (Score:1, Insightful)