AI Going Nowhere?
jhigh writes "Marvin Minsky, co-founder of the MIT Artificial Intelligence Laboratories, is displeased with the progress in the development of autonomous intelligent machines. I found this quote more than a little amusing: '"The worst fad has been these stupid little robots," said Minsky. "Graduate students are wasting 3 years of their lives soldering and repairing robots, instead of making them smart. It's really shocking."'"
What about my AIBO? (Score:3, Interesting)
Maybe the problem is Minsky himself? (Score:5, Interesting)
google cache (Score:2, Interesting)
For REALLY good insight check out Nick Bostrom's articles on Super Intelligence here: http://www.nickbostrom.com/
Well... (Score:3, Interesting)
I'd consider that pretty much intelligent, compared to some people I know. Then again, some people I know can hardly be described as sentient, let alone intelligent.
Re:Will we ever have *real* AI? (Score:5, Interesting)
Define "real, true intelligence"
> You can try to simulate that, but so far
> simulation consists of what amounts to a
> gazillion 'if' tests
That's what the traditional AI school is doing. Yes, you are correct. It won't go anywhere. On the other hand, spiking neural networks are very promising. Search Google for "liquid state machine". Those researchers are making progress nowadays, not Minsky.
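For anyone wondering what a spiking neuron actually is, here is a minimal leaky integrate-and-fire sketch. This is the textbook toy model, not any particular liquid state machine implementation, and the parameters are invented for illustration:

```python
# Minimal leaky integrate-and-fire (LIF) neuron: the basic unit of the
# spiking networks mentioned above.  The membrane potential v leaks toward
# rest and jumps with each input; crossing the threshold emits a spike.

def simulate_lif(inputs, threshold=1.0, leak=0.9, reset=0.0):
    """Return the spike times produced by a stream of input currents."""
    v = 0.0
    spikes = []
    for t, current in enumerate(inputs):
        v = leak * v + current          # leaky integration
        if v >= threshold:              # fire and reset
            spikes.append(t)
            v = reset
    return spikes

# A steady drip of small inputs makes the neuron fire periodically.
print(simulate_lif([0.3] * 20))  # → [3, 7, 11, 15, 19]
```

A liquid state machine wires thousands of these units into a random recurrent "reservoir" and trains only a simple readout on the resulting spike patterns.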
Re:Will we ever have *real* AI? (Score:5, Interesting)
Disappointment with AI (Score:2, Interesting)
Re:Maybe the problem is Minsky himself? (Score:4, Interesting)
Re:AI is going wherever it wants (Score:2, Interesting)
Roelof
Re:Will we ever have *real* AI? (Score:3, Interesting)
The progress of AI is uncertain, but it is certain that there's no future for symbolic logic AI.
Re:What use is AI without an operating platform (Score:4, Interesting)
Re:Hrmm (Score:3, Interesting)
It drives me crazy that people are so concerned about possible technologies, that they want to "slow down and think about the consequences of xxx".
This is really just unfounded fear. The time when we still don't know whether something is possible is not the time to worry about what problems we can conceive that it might bring. Knowledge is more important than worrying about issues that may or may not arise if we are able to do something. It is good to ask "If we cause this atom to split, will it kill us?", but I do not think there is any value in saying "Maybe we shouldn't find out what happens if we split this atom, because if it causes an explosion, someone might use that knowledge to build a bomb..."
One of my favorite quotes is from Isaac Asimov:
I'm sure a lot of people will disagree, but to me, knowledge is most important.
Re:AI...heh (Score:1, Interesting)
If you think about it, how can you prove anyone other than yourself is conscious?
Taking that further, how can you know that the reality you perceive is even real?
Ahhhh, the sound of neurons frying..
Those are just state machines. (Score:2, Interesting)
But I'm not an expert, and that's just my personal opinion.
Old guard moving out (Score:5, Interesting)
He comes across as affable but bitter. I found it strange that though he continually complains about the leadership of the AI lab, he and his protege Winston were in control of it for some ~30 years without making any groundbreaking progress. In fact, Minsky's latest work "The Emotion Machine" is simply a retread of his decades-old "Society of Mind." I suspect that now that Brooks and the new guard are moving in, the old guard is looking for someone to blame its lack of results on.
Not-so stupid little robots (Score:2, Interesting)
I personally built and programmed one of these "stupid little robots"; it's a wheelchair programmed to navigate in an office environment, using vision to determine where in the office it is. Nobody asserts that it can "reason". It navigates using a collection of local effects, in much the same manner that simple creatures operate. Watch the film "Baraka" for some rather amusing examples. At one point the film shows a bunch of caterpillars, each following the scent trail of the next --- unfortunately someone flipped the first one around, so it follows the last, and the whole colony just moves around and around until they die of starvation.
I think you would be surprised how easily remarkably complex behaviours can be achieved by a collection of very simple responses. Try fiddling around with Rossum's Playhouse [sourceforge.net], and read Brooks' book Cambrian Intelligence.
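The "collection of very simple responses" idea can be sketched in a few lines. This is an illustrative toy in the spirit of Brooks's behaviour-based approach, not code from Rossum's Playhouse; the sensors and behaviours are made up:

```python
# A toy illustration of behaviour-based control: a few simple, prioritised
# reflexes, with no world model and no planner.  The first behaviour whose
# trigger fires wins; everything else is suppressed.

def choose_action(sensors):
    """Return the action of the highest-priority behaviour that triggers."""
    behaviours = [
        # (trigger, action), ordered from most to least urgent
        (lambda s: s["bumper"],       "back_up"),
        (lambda s: s["range"] < 0.5,  "turn_away"),
        (lambda s: s["scent"] > 0.0,  "follow_scent"),
        (lambda s: True,              "wander"),       # default behaviour
    ]
    for trigger, action in behaviours:
        if trigger(sensors):
            return action

print(choose_action({"bumper": False, "range": 2.0, "scent": 0.7}))  # → follow_scent
print(choose_action({"bumper": False, "range": 0.2, "scent": 0.7}))  # → turn_away
```

Note that the caterpillar death-spiral falls out of exactly this kind of design: follow_scent has no global check, so a bad initial condition can loop forever.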
blame the (pseudo)biologists (Score:2, Interesting)
The problem with AI lies in poor biology. As long as it is based on pre-cybernetic (i.e. traditional, neodarwinian) biology, AI will never go anywhere. The only known intelligent systems are biological systems. To create AI, you need to imitate biology; you need to reverse-engineer what exactly it is that makes biosystems special. But traditional biology has totally misled computer science. Pre-cybernetic biology, the biology you find in most books and the one taught in almost any classroom, cannot even define life. This pseudo-biology is the `biology' of the non-living, and as such, of the non-intelligent.
To create AI, you need to understand natural intelligence (NI) and for this you need to understand life. What is life? Cybernetic biology defines life as molecular autopoiesis. Which is interesting, since this definition of life is based on computation. Autopoiesis is the key here. The self-re-computation of a system is the key to life, and the key to intelligence, because you need a self to be intelligent. With an artificial self, we could have AI, and probably self-awareness. But good biology is the key.
Unfortunately, it's not going to happen anytime soon. Biology is totally stagnant, and the Neodarwinian Cabal precludes any progress and silences any dissent (sort of a M$ of the science market). `Official' biological sciences just won't deal with life. And that's not going to change for a while, I'm afraid, no matter how hard some of us try.
Re:Will we ever have *real* AI? (Score:3, Interesting)
We act based on external stimuli and based on what we have learned as far as I know.
Unfortunately, we will never fully understand how we are "made" and how we "work".
And without being able to fully introspect ourselves, we will never be able to build a computer which works exactly like a human.
How could you possibly create something to be a replica of something you don't understand?
Cognitive science has made immense progress, but it is still all models and theory.
And as human, "logic" animals, we will always be modeling what we are learning to fit inside our own "understanding". We are locked in our own box...
And if it is all "maths" or "logic", a computer can do it too. I am pretty sure that not so far in the future, we will see robots who act very much like a human being.
Will it be considered a real human being because of it? Will it really be an "intelligent" machine?
I don't know, that's not a technological debate, it is a philosophical one. How do you define real AI anyway?
Does it have to be "alive"? If I ever create a unicellular bacterium, and it is alive, is this considered "AI"?
In this case, it would be well alive and totally artificial, but not very smart by any measure!
On the other hand, what if I create a robot which looks like a human, has flesh, eats food, cries, smiles, makes mistakes, learns, has fun, etc.?
Will this be true "AI"? It won't be alive after all; it will only be made of steel, a CPU, plastics, millions of "if statements".
But to anyone looking at both, I'm sure this one would look a lot more "intelligent" than the bacterium.
If the robot body in itself is realistic enough, maybe you could even fall in love with it, couldn't you?
And what if it falls in love with you too? What if everything goes fine for a couple of years
before you realize it is in fact a "robot"? Would you turn away because it is not real "intelligence" or because it is not a biological body?
In that case, could we conclude that we are ourselves programmed to "accept" that something is intelligent based on criteria that have nothing to do with intelligence per se?
What is intelligence anyway? How do we measure it?
I am not flaming you at all by the way, I just love those debates
Minsky + Brooks (Score:4, Interesting)
"AI has been brain-dead since the 1970s."
I agree, unfortunately. At least, what was traditionally meant by "AI" has been brain-dead. There is very little focus in the field today on human-like intelligence per se. There is a lot of great work being done that has immediate, practical uses. But whether much of it is helping us toward the original long-term goal is more questionable. Most researchers long ago simply decided that "real AI" was too hard, and started doing work they could get funded. I would say that "AI" has been effectively redefined over the past 20 years.
"The worst fad has been these stupid little robots."
Minsky's attitude towards the direction the MIT AI lab has taken (Rod Brooks's robots) is well-known. And I agree that spending years soldering robots together can certainly take time away from AI research. But personally, I find a lot of great ideas in Rod's work, and I've used these ideas as well as Marvin's in my own work. Most importantly, unlike most of the rest of the AI world, Rod *is*, in the long run, shooting toward human-level AI.
Curiously, just last month I gave a talk at MIT, titled "Putting Minsky and Brooks Together". (Rod attended, but unfortunately Marvin couldn't make it.) The talk slides are at
http://www.swiss.ai.mit.edu/~bob/dangerous.pdf [mit.edu].
In particular, I shoot down some common misperceptions about Minsky, including that he is focused solely on logical, symbolic AI. Anyone who has read "The Society of Mind" will realize what great strides Minsky-style AI has made since the early days. I also show what seem like some surprising connections to Brooks's work.
- Bob Hearn
Don't turn slaves into humans! (Score:3, Interesting)
Re:Will we ever have *real* AI? (Score:0, Interesting)
I think this knowledge will remain out of our reach forever.
A solid theory of the goings-on of our brains would at the same time be a solid theory of how God works, and I just can't see how one would understand something that is bigger than all of us.
To those who want to explain everything with mathematics, I've always said "make a differential equation that models my soul, then tell me what my favourite colour is". That shuts them up all right.
We have already shown that there are fundamental uncertainties in nature (Heisenberg); can you be sure that these uncertainties are not divine intervention, simply what really gives us free will? Remember that it's almost 150 years since Darwin wrote his On the Origin of Species, and scientists have yet to produce a solid proof that this is indeed how things work. I don't see how we would ever be able to create an entirely autonomous entity (AI) with this in mind.
Re:AI...heh (Score:2, Interesting)
Define soul. What is that?
It just follows some programming. At the most basic level, it's just a binary program. It follows whatever instructions it was given.
At the most basic level, our brains are single neurons, which are molecules, which are atoms... etc. down to quarks or whatever is at the bottom. All we are, everything, is simple matter organized in an extremely complex way. Surely intelligence and consciousness can't be the result?
There's nothing special about us, other than we are very complex structures of matter.
I honestly don't think we understand what makes a human conscious or what makes someone be that person well enough to try to replicate it in software. You can make the logic more sophisticated, but I doubt we'll ever make them truly "think." And even if we did, how could we prove it? If you think about it, how can you prove anyone other than yourself is conscious?
Here I have to agree with you somewhat. It IS a big problem to figure out when a structure of matter is intelligent or conscious.
Re:Will we ever have *real* AI? (Score:2, Interesting)
To paraphrase, we need to stop trying to build a human mind and just build something which does what we want it to do.
The problem is deciding what we want the computer to do. The Turing test is unreasonable, because we can't make a computer describe its experiences and thoughts in the same way a human can. I mean, if YOU were trapped in a box sitting on someone's desktop your whole life, would YOU act anything like a human? Probably not. I think many people expect to see a machine they turn on and all of a sudden it acts 'alive,' sort of like Frankenstein's monster. I think the AI machine will be more like a baby, where it just spits out nonsense for awhile, until you 'grow' it into something more interesting.
And, I don't think the AI machine will really resemble a human mind, just as an airplane doesn't look much like a bird. We'll discover algorithms that approximate the functionality of a bundle of millions of neurons, but obviously, just as a plane doesn't maneuver as nicely as a bird, it won't be nearly as flexible as a human mind.
Artificial Intelligence Is Magic (Score:3, Interesting)
Now, with that in mind, let's look at artificial intelligence. AI has always been about trying to convince an audience that a machine is thinking. This is demonstrated by the very existence of the Turing test and the many products (such as the Aibo, Furby, etc.) that try to mimic emotions. If the audience is entertained, amused, or convinced, the AI is considered good. Bad AI is when the audience can see right through it.
Artificial intelligence is magic. It's a trick. It's an illusion.
It is no surprise, then, that AI hasn't really advanced. The trade of the showman has been practically unchanged for hundreds of years. Razzle-dazzling an audience involves technological advances, but the act itself remains unchanged. Even the cases where "artificial intelligence" is used to aid medical diagnosis ("expert systems") or manufacturing really only follow man-made logical structures. The computers aren't thinking; they're only doing what they're told to do, even if indirectly. The end result is impressed people who think the machine is smart.
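For reference, the "man-made logical structures" behind an expert system are typically just hand-written rules applied by forward chaining, along these lines (the rules here are invented, not from any real diagnostic system):

```python
# What an "expert system" boils down to: hand-written if-then rules applied
# by forward chaining.  The machine follows the encoded logic; all of the
# actual diagnostic knowledge came from a human expert.

def forward_chain(facts, rules):
    """Repeatedly apply rules until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)      # derive a new fact
                changed = True
    return facts

# Each rule: (set of required facts, conclusion).  Purely illustrative.
rules = [
    ({"fever", "cough"},        "flu_suspected"),
    ({"flu_suspected", "rash"}, "see_doctor"),
]
print(sorted(forward_chain({"fever", "cough", "rash"}, rules)))
```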
Of course, you don't have to take my word for it. If you want to see how badly AI is going nowhere, I highly recommend reading The Cult of Information by Theodore Roszak [lutterworth.com]. While his focus is not on the fallacy of AI, it covers it in context with society's much broader disillusionment with computers.
Now, what does AI need in order to progress? Probably AI creating other AI. Something with a deeper embodiment of evolution. As long as it's man-made, it will never be intelligent, just following a routine. Of course, I am going to stop right here... I am not qualified to offer a solution to these obstacles.
Let's Talk about Intelligence (Score:2, Interesting)
A good argument can be made that a polecat (wild ferret) is more intelligent than many humans. For example, the polecat can survive outdoors with no assistance. The polecat can eat, sleep, have babies, and be more or less comfortable.
Where does human intelligence come in then? Human intelligence is learned. Of course a polecat at 4 months is more capable of surviving than a human at 4 months. Does this make the polecat more intelligent? But let's try and remember that the polecat is done developing, while the human has about 20 more years until full maturity.
So the human learns, then. Plainly, the human learns more over the course of 4 years than the polecat does. So is the human more intelligent? I think we can unequivocally say yes.
But what is it that makes human intelligence, and how is it different from a polecat's? The answer is learning. But how does learning work?
Learning is a specific thing. People learn by rote. (Don't let someone tell you otherwise.) It is mimicry that teaches morals. Logic teaches ethics, but logic is learned like morals. This means that, basically, we learn everything.
The point is, if you think there is any difference between you and a polecat, I would like to point out that there is less difference between you and Alicebot.
If you want proof, look at how musicians or epic lyricists work. They learn specific phrases and use them over and over. Listen to your own speech or read your own writing. You'll find that you use plug-in words and phrases. They'll be similar to your friends' and parents', btw.
Re:I wonder what... (Score:3, Interesting)
Rodney Brooks (who's The Man) said something like "a [working] robot is worth a thousand papers." Instead of a top-down view, subsumption architecture robots have a tight connection between sensing and action, but often no memory. One such robot was able to search out, find and grab empty coke cans, then take them to the trash!
(semiquote from Steven Levy's "Artificial Life"; highly recommended introduction.)
Re:What about my AIBO? (Score:4, Interesting)
What about pattern recognition? How long do parents spend holding up pictures of various animals or various shapes for their children to identify?
When it gets right down to it, every one of us has been significantly programmed by our parents, teachers, and government. I am not arguing against the system, just saying that's how it is. I don't believe AI as anticipated will ever truly exist because the degree of creativity and imagination desired exists only in humans either because of an all-knowing, all-powerful creator or millions of years of mutations.
Don't build robots, simulate them (Score:5, Interesting)
Human-Level AI's Killer Application: Interactive Computer Games, John E. Laird and Michael van Lent, American Association for Artificial Intelligence, AI Magazine, Summer 2001, pp. 15-25
My summary of the above - the AI in games might not be too hot (some would dispute with the academics about that but let it go), but game environments themselves are complex enough to pose a challenge for state-of-the-art AI researchers.
Problem complexity (Score:2, Interesting)
When we try to emulate a system with another system that is different in nature, a lot of capacity is wasted.
That said, genetic programming is one of the fields where we actually see truly intelligent solutions to problems generated entirely by computers. The problem is that the algorithms need computational power beyond our wildest dreams to even be comparable to single-celled organisms in ingeniousness.
After all, nature has had 50 gazillion years to evolve.
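For the curious, here is what the evolutionary loop looks like in miniature. This is a plain genetic algorithm on the toy "OneMax" problem (evolve a bit string toward all ones); real genetic programming evolves program trees instead, but the select/cross/mutate loop is the same. All parameters are arbitrary:

```python
import random

# A bare-bones genetic algorithm: keep the fitter half of the population,
# breed children by one-point crossover, and apply a single point mutation.

def evolve(length=20, pop_size=30, generations=60, seed=1):
    rng = random.Random(seed)
    fitness = sum  # fitness of a bit string = number of 1-bits
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, length)
            child = a[:cut] + b[cut:]             # one-point crossover
            i = rng.randrange(length)
            child[i] ^= 1                         # point mutation
            children.append(child)
        pop = survivors + children
    return max(fitness(ind) for ind in pop)

print(evolve())  # best fitness found; a score of 20 is a perfect individual
```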
I went to a "BOOM" conference at Cornell... (Score:5, Interesting)
So I found myself standing in front of a computer screen. It was a worm swimming through water! In 3D! In real time! After I pushed my jaw shut, I began to ask the genius student some questions...
"Is that real-time?" "Well, actually, no, that is a 10 second looping clip that took a week to calculate."
"Well, I see a neural map there. Is that complete?" "Well, actually, no, that is a simplified version of the real nematode nervous system, on the order of about 1 simulated neuron to 10 actual neurons."
"So you simulate neurons! That's awesome. Let's see the code." (He proceeds to flip through 4-5 pages of very sophisticated-looking mathematical equations to describe the behavior of ONE neuron.)
What a let-down! No wonder Minsky is pissed, real AI is HARD!
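To give a feel for why even one neuron takes pages of math: below is the two-variable FitzHugh-Nagumo model, itself already a drastic simplification of the four-variable Hodgkin-Huxley equations, integrated with naive Euler steps. The parameters are the standard textbook ones:

```python
# Even a heavily reduced neuron model is a system of coupled differential
# equations.  FitzHugh-Nagumo couples a fast voltage variable v with a slow
# recovery variable w; with constant input current I it produces repeated
# spikes (a limit cycle) rather than settling to a fixed point.

def fitzhugh_nagumo(I=0.5, dt=0.01, steps=20000, a=0.7, b=0.8, tau=12.5):
    v, w = -1.0, -0.5            # membrane potential and recovery variable
    trace = []
    for _ in range(steps):
        dv = v - v**3 / 3 - w + I       # fast voltage dynamics
        dw = (v + a - b * w) / tau      # slow recovery dynamics
        v += dt * dv
        w += dt * dw
        trace.append(v)
    return trace

trace = fitzhugh_nagumo()
# The voltage swings between large negative and positive values: spiking.
print(round(min(trace), 2), round(max(trace), 2))
```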
Software is behind, not hardware (Score:5, Interesting)
Actuarial "Racism" (Score:2, Interesting)
At least humans can get the picture of what they are and are not allowed to study, lest they draw politically incorrect conclusions, so government-funded academic researchers can be made politically reliable. Can you imagine the hell that would break loose if a genuine AI started drawing its own conclusions from actual data?
When AI... (Score:3, Interesting)
Such a model is years off, though, AFAIK.
Not so sure (Score:4, Interesting)
Dreyfus's argument is old, and its rebuttals are well-known. Consider that symbolic systems are not limited to context-free predicate logic.
The progress of AI is uncertain, but it is certain that there's no future for symbolic logic AI.
It is not certain to me.
Both connectionist and symbolic approaches may succeed if given enough time. However, I think that the obsession many people here have with neural nets is of the same nature as the obsession of numerous early aviation enthusiasts with wing-flapping devices. Certainly you can mimic the mechanics of nature with some effort, but there are usually better ways to do the job.
Re:Maybe the problem is Minsky himself? (Score:4, Interesting)
Robots are not a bad thing to work on if other kinds of AI are going to have a chance, because a more holistic kind of AI would recognize that intelligence and cognition first emerged as a function of having a physical body. On the other hand, it's just robotics, it's not AI itself.
Also, AI was good for the hackers who supported its development on computer workstations. Systems like the Lisp Machine still compare very well to current languages and tools.
What's the difference? (Score:1, Interesting)
Check out computational neurobiology (Score:4, Interesting)
Re:About Minsky... (Score:5, Interesting)
Oh, say, Rod Brooks, Tomas Lozano-Perez, Hal Abelson, Gerry Sussman, Eric Grimson, Pat Winston, Tom Knight
The difference between Minsky and the rest is precisely as the first poster asserted. Having read Minsky's books, known him professionally and personally, and having taken his course, I must agree that the weight placed on his words is not equal to their value. As others have observed (I forget whom and where), Minsky's original contributions were interesting ramblings at the edge of a new field which happened to pinpoint rich veins of research in some cases, and kill off valuable paths in others (think perceptrons, which are, yes, in fact, very useful things, and yes, in fact, do model real neurons reasonably well, and no, are not computationally impoverished unless you abide by Minsky and Papert's artifice of only single layers). In other words, in some cases he got lucky; in others he fell flat. This initial success led him to continue pontificating (think "Society of Mind", a book of little real contribution) while doing marginally small amounts of actual research. Rod Brooks, in contrast, has made far more, and far deeper, contributions working on his subsumption architecture.
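The perceptron point is easy to demonstrate: a single threshold unit cannot compute XOR (it isn't linearly separable), but two layers of the very same units can. A minimal hand-wired sketch:

```python
# The single-layer limitation in one picture: a lone threshold unit cannot
# compute XOR, but two layers of the very same units can.

def unit(weights, bias, inputs):
    """Classic perceptron unit: weighted sum through a step function."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

def xor(x1, x2):
    # Hidden layer: OR and AND of the two inputs.
    h_or  = unit([1, 1], -0.5, [x1, x2])
    h_and = unit([1, 1], -1.5, [x1, x2])
    # Output layer: OR but not AND.
    return unit([1, -2], -0.5, [h_or, h_and])

print([xor(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # → [0, 1, 1, 0]
```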
Minsky's course (at the advanced graduate level) consists of students listening to his musings and ramblings, which he often repeats through the term, since he has no syllabus, no agenda, and no apparent desire to teach. When he gives talks, they are all extemporaneous; someone like Churchill could pull that off. Minsky's stream-of-consciousness style keeps his acolytes happy, but leaves those with a real thirst for knowledge quite parched. Does this not fit the accusation?
So what if Minsky thinks graduate students shouldn't be soldering robots? Does that matter? So what if the current AI field isn't following his pet projects; is he making any contributions himself? We've made tremendous strides in AI over the past decade; they just haven't been where Minsky thinks they should be, despite his questionable overall track record. Exactly why should anyone care that much?
outside the box and inside the box (Score:3, Interesting)
Imagine what kind of thing you would be without vision, touch, smell, hearing or the ability to move and change your environment. Without these forms of interaction where would human intelligence be?
Seems that a Buddhist philosophical approach is most helpful here, i.e. we are our parts, not more and not less. We are what we are. If you wish to create something that is like a human, you should take an inventory of our parts, figure out how they fit together, and try to find analogous electronics, software and hardware.
Which is precisely what a lot of the robot folks have started doing, except that most have started a bit smaller and have modeled insects instead, finding that they can model seemingly complex insect behavior with simple algorithms and machines.
Although perhaps the next best step isn't building real robots at all, which can be expensive, error-prone and time-consuming, but building virtual robots that can be placed in virtual environments of our invention, somewhat like a "Matrix" virtual reality with intelligent agents that can learn. This approach is more computer-intensive, since the environment as well as the agent would require large amounts of computing resources; also, the agent would have to perceive the "environment".
Seems that many more forms of human nature could be investigated in this way.
Re:The Cyc project (Score:4, Interesting)
This reminds me of a quote from French mathematician Henri Poincaré: "just as houses are made out of stones, so is science made out of facts; and just as a pile of stones is not a house, a collection of facts is not necessarily science."
Applied to the cyc project: a collection of facts is not necessarily intelligence.
Would a computer think that you are intelligent? (Score:3, Interesting)
If you were put inside a little white box where you had to flip millions of switches on and off according to certain simple rules, you would look like an idiot next to a computer. A computer can't walk around and recognize things, and doesn't know what an apple is; so what? In my opinion, machine intelligence should be focused on making computers able to make themselves better at what they do best. I'm not sure what a super-intelligent computer system would be used for, and I don't think that I would even be able to imagine what would be possible. I would be interested to know what other people think about this idea. Most of the things that I can think of tie back into the "real" world somehow. What would a self-organizing intelligence not oriented to our three dimensions be able to do?
Saying that AI is impossible because computers can't come into "our world" of three dimensions, or understand our literature is kind of intelligence chauvinism.
Re:What use is AI without an operating platform (Score:2, Interesting)
One of my major projects while at the University of Oklahoma was an Open Source 'AI SDK' - a framework to build and research AI by providing the wheels which had already been invented. Unfortunately, every time I talked with an 'AI' researcher about this I got one of several responses:
1. We don't need it, my [insert project here] is the True(tm) way - and with the [insert latest breakthrough in computer performance, modeling tools etc] we will win the race!
2. How dare you think you know enough about [insert project here] to do anything with it? Only my well-paid graduate slave^H^H^H^Hstudents could even attempt it, and only with my special insights.
3. You don't need all this other stuff like support for [insert other projects]. The SDK will be too big and slow to do anything well.
4. Neato! I'll have a [insert soon to graduate student] to look into it. (Never get a response.)
These were the kinder remarks I got. I won't go into the phone call I had with an engineering professor who simply ranted for 10 minutes about how CompSci people are all stuck-up theoreticians who can't make anything to save their lives. The truth about A.I. research is that it is a fragmented ivory tower with little fiefdoms ruled by professors with tenure. I've met some really cool people and learned some impressive stuff doing an A.I. SDK (you should see the wall of textbooks you can accumulate), but very rarely have I encountered someone who goes to the conferences to talk with their fellow researchers rather than just present the progress of the latest and greatest OneRightWay(tm).
I've still got the sources in CVS for part of the framework, but with the discouragement I got, it's painful to look at the sources without remembering all those disappointments...
Re:MOD PARENT UP (Score:4, Interesting)
I get the impression though that many AI researchers see Hofstadter as a heretic. That's too bad, because I think the ideas he and his team have developed hold more promise than any other approach to AI currently extant.
Don't listen to 'ol Minsky (Score:3, Interesting)
As an AI researcher and someone who's read Minsky's books and listened to him talk, I can say that he doesn't know what he's talking about. He was big in his time, but things have moved on and he hasn't. He is an old, pessimistic, armchair AI 'researcher' who still thinks AI is easy. He doesn't understand why AI needs to be embodied and situated.
Having said that, I do agree that AI is almost going nowhere (anyone can see that). But I don't believe Minsky understands why.
Those 'stupid little robots' are the best thing to happen to AI - unfortunately most AI 'researchers' don't really understand what they're doing. Consequently, 97% of the time and effort purportedly being spent on AI research isn't.
With a few exceptions, the main reason for the 'advances' we're seeing in AI/robotics now, is that algorithms are riding the wave of advances in computing power.
My guess is that you'll see most of the advances in AI coming as more and more 'real scientists' from other disciplines - such as ethology, biology and neurology - get involved in it.
Keep in mind that this is my opinion - shared by an increasing number of people in the field, but still a small minority.
Re:Minsky only has himself to blame. (Score:3, Interesting)
The speech recognition community also investigated techniques using neural networks, although they did not produce a clear win over the statistical technique called hidden Markov modelling.
AI techniques, such as those espoused by Marvin Minsky, routinely failed completely when presented with anything approaching a real-world challenge.
IMHO the AI investigators who take a hands-on approach to making robots deal with real environments are the only ones likely to rescue AI from its reputation for unusable results.
Semi-artificial intelligence (Score:3, Interesting)
The real money will begin to flow once humankind stops being scared of directly integrating humans into computer networks.
I am not sure when, but ultimately all keyboards, mice and screens will take their places in museums. People will communicate with computers and each other by connecting computers directly to their brains. Thus, solid knowledge of natural intelligence will be required.
I think the first researchers are already working on it in military-sponsored labs. Of course the volunteers realize that they can be seriously damaged or die, but death is natural in the military industry. The military industry operates with huge amounts of money. But that's often not exactly a "free" market - all contracts are signed through lobbying and bribes.
Once the first "Unisoldiers" are available on the market (sorry, on the job market), then next to the militaries there will be strong demand from real-time traders. And that's a real market. Traders will line up for neurosurgery to be connected to those days' electronic stock markets.
I am not sure when such a "UI" will be available on the market, but once it is there, at some point geeks will buy it. The rest of us will be faced with a tough choice: to stay 100% "natural" or to win a better job contract.
Now, where is AI? The answer is simple: ultimately there will be neither AI nor NI (N as in natural); there will be SAI: Semi-Artificial Intelligence. No need to think in English letters if the UI can get the concepts you think of. No need to crunch numbers if software can do it for you *AND* some AI can do the reasoning about when, why and how you want it done. The trick is that there is no need to automate the reasoning 100%, as your brain is already connected and can do part of the job in that reasoning.
For example, no need to create a very complicated DB query, as SAI can use part of your brain to post-filter a small set of data after the pre-filtering of a big set of data is done automatically in the DB engine.
Many problems of software development can be solved if, in addition to humans using computers, computers use human brains.
That's what I call SAI.
I'm more than a little skeptical of Minsky (Score:2, Interesting)
The nineteenth century debate between two camps of biologists, "Vitalists" and "Mechanists," is very similar to the debate between those who think machines can eventually have intelligence and those who think only biological systems can possess intelligence.
Vitalists believed that living beings had something more than their physical and chemical composition which differentiated them from non-living matter. This difference was a "vital spark" or elan vital which made them innately different from ordinary or "dead matter." Their opponents, the "Mechanists" believed that living things were essentially no different than non-living things, at least in terms of what they were composed of. That there was no "vital spark" which separated living and non-living things but rather only a difference in their physical and chemical compositions.
Obviously the "mechanists" won since no modern biologist believes in the elan vital.
In a very, very similar fashion, Minsky and his supporters seem to be making the same type of argument. They seem to want humans to still have a "soul," called intelligence, something that "dumb" matter can never have. Whether they argue for a mysterious quality that only biological systems seem to possess or for mystical "quantum processes" that seem to take place only in brains and not in machines, I still call this vitalism, and I don't think it's scientific at all. It's more like an intellectual retreat to defend some deep-seated emotions about humanity's place in the Universe.