Artificial Intelligence Overview 204
spiderfarmer writes: "Well, it feels slightly odd to suggest one of my own articles, but here goes. I've recently completed a brief overview of the current state of AI. The article concept was focused on Cyc, but scope creep being what it is, I ended up doing an overview of the entire field. Some of the Slashdot gang were fairly helpful in pointing me towards experts who would talk to me and towards white papers and books I might not have otherwise found. So, I thought they might be interested in how I put all the information together."
Media and AI (Score:4, Insightful)
Re:Media and AI (Score:2, Informative)
It is clown-like, dishonest, and absurd for Dr Lenat to claim Cyc is conscious.
For the author of the article to have taken this seriously (and then to, e.g., think that the Cyc people won't let her "interview" the software because she might ruin it) means that she (whatever her fine qualities) has /a lot/ of learning to do.
Re:Media and AI (Score:2)
Take a wild guess why he does it.
Re:Media and AI (Score:2)
The Slashdot Scientific Review Technique: If any scientist says anything other than "we don't know," he or she is wrong.
Re:Media and AI (Score:2)
Re:Media and AI (Score:1)
Please mod this up! (Score:1)
Semantics: Why AI doesn't work (Score:1)
Here it's much better. Sometimes you advance...
Frank (http://www.fraber.de/ [fraber.de])
Cyc hype (Score:2)
Cyc is basically a big encyclopedia of common-sense statements coupled to a simple inference engine. There may be uses for such things, but they're not anywhere near intelligent. What you get out is not much more than what somebody else put in. Sort of like Ask Jeeves. [askjeeves.com]
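For the curious, here is a toy sketch (in Python, with made-up facts and a single transitivity rule -- nothing resembling Cyc's actual CycL representation or its engine) of the "encyclopedia of assertions plus simple inference engine" idea, and of why what comes out is roughly what somebody put in:

```python
# Toy "assertions plus inference engine" sketch. The facts and the single
# rule (isa is transitive) are invented for illustration only.
facts = {
    ("isa", "Fido", "Dog"),
    ("isa", "Dog", "Mammal"),
    ("isa", "Mammal", "Animal"),
}

def infer(facts):
    """Forward-chain the transitivity rule until no new facts appear."""
    facts = set(facts)
    while True:
        new = {
            ("isa", a, d)
            for (r1, a, b) in facts
            for (r2, c, d) in facts
            if r1 == r2 == "isa" and b == c and ("isa", a, d) not in facts
        }
        if not new:
            return facts
        facts |= new

print(("isa", "Fido", "Animal") in infer(facts))  # True -- but only because we (indirectly) put it in
```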
Please... (Score:2, Funny)
What amateurishness.
The 'Human' condition... (Score:1)
I've got points to ponder. Like, if true AI is spawned as an internet node, would it reflect its environment? Prolly pr0n. Or, equally, would it condemn political systems, and what would it use as its baseline? Would it have a social insect model for thought?
As for learning, I believe it could learn things, but adapt? It might get better, but as for being able to fathom its 'creator', metaphysics might be lost on it. Ethics might be another touchy subject. Unless it can hurt, and empathise, I doubt it would make a good politician. (WHOA! Forget what I just said, do any of them?)
Contemplate art, social values. Would it even have the same ideals of freedom?(yes, I read the posts).
What about good vs. evil, would you teach it to I Robot? What would you use for definitions? How could it apply them? Would it apply them?
And can we even define rational thought as a computation, especially with emotions? If the program's resources were low, would it claim to be having a bad day? How would we react to a cranky AI?
Would it sense how we treat one another, and make moral judgements, tempered with wisdom and mercy? That might take a lot of expectation.
I think we should be happy with modelling aspects, not try to create a being which would be quite lonely(if true AI). It might only be good for the perspective of what it thinks of us. I wonder what it would think of us?
Like everyone I wrote an Eliza program (Score:1)
Eliza: Hello, how are you today?
Me: Fine
Eliza: Hello, how are you today?
Me: Not bad
Eliza: That's very negative
Me: f**k off
Eliza: That's not very nice
etc etc etc
All was okay until the lecturer tested it on a teletype and it dumped, sending all sorts of control codes to the teletype...
You really had to lunge for the power button to prevent massive forest devastation
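For anyone who never got around to writing their own, the whole trick is just keyword matching against canned responses -- something like this minimal Python sketch (the patterns and replies are invented for illustration, not Weizenbaum's originals):

```python
# A minimal Eliza-style responder: match a keyword, return a canned reply.
import re

rules = [
    (r"\bnot\b|\bno\b",    "That's very negative"),
    (r"\bf\W*k\b",         "That's not very nice"),
    (r"\bfine\b|\bgood\b", "Glad to hear it. Tell me more."),
]

def eliza(utterance):
    for pattern, response in rules:
        if re.search(pattern, utterance, re.IGNORECASE):
            return response
    return "Hello, how are you today?"   # default -- hence the loop in the transcript above

for line in ["Fine", "Not bad", "whatever"]:
    print("Me:", line)
    print("Eliza:", eliza(line))
```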
Re:Like everyone I wrote an Eliza program (Score:3, Informative)
Re:Like everyone I wrote an Eliza program (Score:1)
I could simulate a person via code (Score:1)
Step 1) a 100% trustable coder needs to input some data on verbs and nouns.
Step 2) a 100% trustable coder needs to input contextual sentences.... Explaining to the computer that given a situation A, and actions B, the result C will happen.
Then, after a long string of events is fed into the computer, possibly via natural language (with the computer asking about what it doesn't know at first)... the computer can "interpret" the events when it sleeps, which is basically just the same as when people dream.
Re:I could simulate a person via code (Score:1)
Re:I could simulate a person via code (Score:1)
But if you represent various code objects with certain relationships to other ones, it can still be intelligent (of course, assuming the AI code is in there in the first place).
Just gotta remember to let the AI config its own code, not just add and change objects... that's when you get the Real Interesting Stuff =]
Hmm..
"All you need is Code,
Code is all you need"
(sung, badly, to a similar Beatles song)
Few comments (Score:2, Interesting)
Examples: taking measurements of temperature in a region over 50 years and trying to predict climate change is a statistical problem, while analysing samples of minerals in an area to try to find oil or gas is data mining. (as, presumably, mineral composition does not change over time, so only a single sample from each point is taken)
Re:Few comments (Score:2)
This is only true for some areas of AI research. Researchers coming from cognitive science and neuroscience are very interested in discovering how thought works in humans and frequently use AI (often neural networks) as a research tool. In this context, the research is about how humans think, and by extension, how to make a computer think in the same way humans do.
Re:Few comments (Score:2)
It is my honest belief that current research in AI is not so much about how to make computers think, but rather about how to reproduce the behaviour traditionally associated with thinking. The distinction is important.
Isn't this the same distinction that Alan Turing had decided was ultimately irrelevant? What's more important, the process or the result?
Statistics (Score:2)
Examples: taking measurements of temperature in a region over 50 years and trying to predict climate change is a statistical problem, while analysing samples of minerals in an area to try to find oil or gas is data mining. (as, presumably, mineral composition does not change over time, so only a single sample from each point is taken)
Nope. Disagree.
If you want a unifying theme to pull together most of AI, you could do worse than think of it as a cookbook of techniques for designing, building and then automatically refining statistical models of (some aspect/s of) reality.
Pretty much all of the standard problems of AI can be presented to advantage in this framework (including the identification of new objects/concepts/patterns and associations). It is the evolutionary enhancement that our brain has conferred by automatically doing adaptive and hierarchical statistical modelling (incl. pattern recognition, but also much more) so well that is basically what makes 'intelligence' worth having.
The important thing about such a statistical model is *not* certainty that the model is right -- it never will be (not even for your temperature data). The 'long-run densely-sampled stationary time-series' is a myth -- in reality it just doesn't happen. The important thing *is* to realise your model is imperfect; to allow (as well as you can) for that uncertainty in your model; to explore different ways of setting up your model; and then to use statistical inference (Bayes theorem etc) to improve your predictive model as the data comes in. Only by allowing for the imperfection can you learn from the data.
The principles of statistical modelling have a central place in the study of Statistics -- it's the underlying logic that most of the subject is built on. On the other hand, the blind application of certain 'standard' statistical tests seems distinctly peripheral.
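If the "refine the model as data comes in" point sounds abstract, here is about the smallest concrete sketch of it -- a beta-binomial update in Python, with the prior and the data stream invented purely for illustration:

```python
# Bayesian updating in miniature: an uncertain prior over the probability p
# of some event gets sharpened by each new observation (beta-binomial model).
alpha, beta = 1.0, 1.0                 # uniform prior over p: total uncertainty
observations = [1, 0, 1, 1, 0, 1, 1]   # 1 = event happened, 0 = it didn't (invented data)

for x in observations:
    alpha += x                         # Bayes' theorem, in conjugate-prior shorthand
    beta += 1 - x
    print(f"posterior mean of p so far: {alpha / (alpha + beta):.3f}")
```

The point isn't that the final number is "right" -- it's that the model's imperfection is represented explicitly, and shrinks as the evidence accumulates.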
Re:Statistics (Score:2, Informative)
There is a renegade group of AI folks who study uncertainty -- see http://www.auai.org. It's curious how an entire sub-field can be overlooked in this general discussion.
Actually, the decision-theoretic approach that much uncertainty research is based on (Bayes methods included) stands in contrast to the cognitive approaches in the AI mainstream. The term "statistical" carries too much baggage, some of which litters this thread. For fascinating insight into the rough edges of AI and science generally, look into the controversies between classical statistics, Bayesian, logicalist and other approaches to reasoning.
Chess v. Poker. Perfect vs. Imperfect Information (Score:1)
The article of course mentions Deep Blue and chess: I find chess programs, and indeed the problem of chess, relatively unimpressive. Chess is a game of at least almost perfect information, and almost pure deductive logic.
[I'm not sure I agree with those who say chess is a game of perfect information and pure deductive logic. I believe imperfect, probabilistic information, and induction may come into play under certain circumstances. You offer a sacrifice to set a trap. Will your opponent see the trap? Will he take the sacrifice? If he does, great. If he doesn't, perhaps you have wasted a move, and allowed him to seize the initiative. There is an element of induction and probability in making your decision.]
Let's face it, pretty soon the World Chess Champion will be a human only because computers are excluded from play. Hell, pretty soon your laptop will consistently beat the (human) World Chess Champion while you watch (the DeCSSed version, shh, don't tell anyone) of Matrix V and recompile Linux Kernel version 4.4 at the same time.
Poker, thank God, is different. As explained by The University of Alberta Computer Poker Research Group [ualberta.ca]: The University of Alberta Computer Poker Research Group [ualberta.ca] has implemented a poker playing program named Poki [ualberta.ca]. Poki is implemented in Java, and some of the source code [ualberta.ca] has been released. To facilitate other research into poker, they have also provided a Texas Hold'em communication protocol [ualberta.ca], which allows new computer programs and humans to play against each other online.
See also:
Wilson Software [wilsonsoftware.com], makers of the best commercial poker software. There are free Windows (sorry) demo programs for: Texas Hold'Em [wilsonsw.com], 7-Card Stud [wilsonsw.com], Stud 8/or better [wilsonsw.com], Omaha Hi-Low [wilsonsw.com], Omaha High [wilsonsw.com], and Tournament Texas Hold'em [wilsonsw.com]
rec.gambling.poker [rec.gambling.poker] [Usenet]
IRC Poker Server [cmu.edu]
Greg Reynold's Gpkr GUI [anet-stl.com]
World Series of Poker [binions.com]
Great Poker Forums [twoplustwo.com]
Card Player Magazine [cardplayer.com]
Poker Digest [pokerdigest.com]
Gambler's Book Shop [gamblersbook.com]
And now, if you will, may we please have a moment of silence for Stu Ungar [pokerpages.com].
Re:Chess v. Poker. Perfect vs. Imperfect Informati (Score:1, Insightful)
Poker is an excellent domain for artificial intelligence research. It offers many new challenges since it is a game of imperfect information, where decisions must be made under conditions of uncertainty. Multiple competing agents must deal with probabilistic knowledge, risk management, deception, and opponent modeling, among other things.
Actually, while imperfect information makes poker a different problem than chess, it doesn't make it all that much deeper. I did research on a poker engine and one thing I learned is that it is possible to play poker (or bridge) using searches similar to those in chess. You can sample possible game trees instead of searching static ones because it's only necessary to pick a strategy that makes more money than it loses on average.
All sorts of things that look like psychology in poker (bluffing, slowplaying etc.) turn out to be mathematically tractable and actually mathematically provable necessities to competent play. There has to be some chance that you will misrepresent your hand, otherwise your play gives too much information to your opponent.
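To make the "sample possible game trees" idea concrete, here is a toy Monte Carlo sketch in Python -- the deck abstraction, hand evaluator and payoffs are all invented for illustration, not anything from a real poker engine:

```python
# Estimate the expected value of calling a bet by repeatedly dealing the
# opponent a random hidden hand and averaging the outcome over many samples.
import random

def hand_strength(hand):
    return sum(hand)                       # stand-in for a real hand evaluator

def ev_of_calling(my_hand, deck, pot, bet, samples=10000):
    total = 0.0
    for _ in range(samples):
        opp_hand = random.sample(deck, 2)  # one sampled "world"
        if hand_strength(my_hand) > hand_strength(opp_hand):
            total += pot + bet             # we win the pot plus their bet
        else:
            total -= bet                   # we lose our call
    return total / samples

deck = list(range(2, 15)) * 4              # abstract card ranks, four of each
my_hand = [14, 13]
for c in my_hand:
    deck.remove(c)
print("estimated EV of calling:", ev_of_calling(my_hand, deck, pot=10, bet=5))
```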
The really hard problem, as everyone probably already knows, is analysis and planning in situations where the search tree is too big for a chess like search and where sampling isn't good enough. Go is everyone's favorite example and that's a game of perfect information.
By the way, the CYC project mentioned in the article (as self-conscious!!) will not be the slightest help in making machines talented enough to play go (or poker or chess or even do crossword puzzles for that matter). Nor will it be of any use in teaching a computer to see, hear, touch, manipulate objects or imagine. Nor will the program be able to make useful analogies, so it won't be able to communicate in a human way (and I also think it won't be able to think in any useful way either). In my opinion Dr. Lenat's only great accomplishment is performance art. He's fooled people into wasting 50 million dollars on a complete fraud. And he's still going. Chalk one up for NS (natural stupidity).
For an intelligent overview of what's wrong with CYC (unlike silly fallacies from the likes of Roger Penrose and John Searle) and what an alternative might look like, I recommend Douglas Hofstadter's wonderful book, "Fluid Concepts and Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought".
Cowardly Lion
AI references (Score:2, Informative)
Anyway, extra suggestions:
Evolutionary algorithms
Swarm intelligence
Distributed AI
Alife
Check them out with Google. Mucho interesting AI stuff.
Re:AI references (Score:2, Interesting)
For some reason, the American military have been looking into the http://www.scn.org/~mentifex/ [scn.org] AI Home page predating the Mind project by seven years, as evidenced by the following logs of recent military accesses:
24/Jul/2001:10:59:39 - nipr.mil -
24/Jul/2001:11:04:27 - nipr.mil -
24/Jul/2001:11:04:34 - nipr.mil -
24/Jul/2001:11:04:41 - usmc.mil -
24/Jul/2001:11:06:24 - nipr.mil -
24/Jul/2001:11:11:56 - usmc.mil -
29/Jul/2001:11:37:10 - navy.mil -
29/Jul/2001:11:40:56 - navy.mil -
30/Jul/2001:07:38:45 - arpa.mil -
07/Aug/2001:07:22:58 - pentagon.mil -
07/Aug/2001:14:44:12 - af.mil -
07/Aug/2001:14:44:16 - af.mil -
07/Aug/2001:14:48:19 - af.mil -
08/Aug/2001:11:21:48 - army.mil -
08/Aug/2001:11:22:02 - army.mil -
08/Aug/2001:22:18:15 - nosc.mil -
Re:AI references (Score:1)
I thought the innate grammar theory was refuted long ago.
Evil perception of AI replaced by Medical Science (Score:3, Offtopic)
Nowadays AI is never mentioned in popular media. It has been replaced by the new emphasis in public-facing science: cloning and gene therapy.
This is the new AI in the mind of the ordinary citizen. It will lead to the destruction of the human race, and poses many ethical and moral questions. In the UK it is being demonised by the popular press without real debate, much like AI probably was 20 years ago.
Incidentally, this was an excellent "heads-up" article for a novice like me, and I gained significantly from it.
Re:Evil perception of AI replaced by Medical Scien (Score:1)
Re:Evil perception of AI replaced by Medical Scien (Score:2)
This is true for now.
Then there will be some hit hollywood movie where the evil AI Robot Scientist and his/her/its borgified army of bio-engineered clones tries to take over the world via a mutant form of gene therapy that manipulates the brain and renders people into zombies, pliable to any suggestion.
Of course, the best way to take over the world is to do it via marketing and legal agreements. You leave things like wars and armies to silly politicians. You can control the banks and the infrastructure.
Re:Evil perception of AI replaced by Medical Scien (Score:2)
I think the reason cloning, gene therapy, stem-cell research and the like gain so much media attention is that they focus on an issue very important to almost everyone on the planet: the ability to produce/modify babies, and they are associated with reproductive functions.
White Van Man may not care about evil robots, but he certainly cares about medical advances concerning his "knackers"!
Data Mining and Machine Learning link (Score:1)
What any AI needs (Score:2, Interesting)
The basic requirement for self-aware systems is to define a self-looping reasoning process for the system. That is, the system must be able to observe its OWN thoughts, not only what is happening outside. It must be able to react to and change its thoughts by its own thinking. That is very important. That also means you need to create a basic language system of some sort for those underlying systems.
All in all it's a complex but very interesting problem. Complex as in figuring out the basic underlying system and the defining parameters. If we get those done correctly, the AI will 'grow' and build upon them like a learning human would, by interacting with its surroundings and with its own thoughts.
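To make the self-looping idea a little more concrete, here is a deliberately tiny toy in Python -- the rules, scores and feedback are all invented, and nobody should mistake this for an actual architecture; it just illustrates "observe your own decisions and revise them":

```python
# Toy "observe your own reasoning" loop: the agent records which rule it used
# for each decision, reviews that trace, and down-weights rules that keep
# producing bad outcomes.
rules = {"greedy": 1.0, "cautious": 1.0}
trace = []   # the agent's record of its own decisions

def decide(situation):
    rule = max(rules, key=rules.get)        # pick the currently favoured rule
    trace.append((situation, rule))
    return rule

def reflect(feedback):
    """Look back over the decision trace and adjust the rules themselves."""
    for (situation, rule), outcome in zip(trace, feedback):
        rules[rule] += 0.1 if outcome == "good" else -0.3

for s in ["s1", "s2", "s3"]:
    decide(s)
reflect(["bad", "bad", "bad"])
print(rules)   # "greedy" has been down-weighted by self-observation
```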
Re:What any AI needs (Score:2, Insightful)
The basic requirement for self-aware systems is to define a self-looping reasoning process for the system. That is, the system must be able to observe its OWN thoughts, not only what is happening outside. It must be able to react to and change its thoughts by its own thinking. That is very important. That also means you need to create a basic language system of some sort for those underlying systems.
How can you be so sure of this? I for one am fully self-aware, and I strongly doubt that my own thoughts are observable by introspection, except for a very small part of them. AFAIK there isn't yet a man-made self-aware system that works the way you prescribe.
Re:What any AI needs (Score:1)
Oh, but you can and do observe your own thinking. Every time you make a conscious decision you are able to observe it, and in fact change your decision based on your observation. If you didn't have this ability you wouldn't think it over, you would just act.
I guess what you want to say is that you can't observe your subconscious. But that's irrelevant; you only need to be able to observe the higher levels that make _conscious_ decisions. I could draw a parallel here to the AI, saying that the AI does not know the implementation and what goes on in the deeper levels of the program, the algorithms etc. It only knows the higher level of that, some subset of its inner workings. That's an artificial example and not quite valid, but if you're a programmer maybe you understand the point. Of course, as we program what it knows, we could potentially make it more self-aware of its inner workings than humans are of their own.
Re:What any AI needs (Score:1)
Oh, but you can and do observe your own thinking. Every time you make a conscious decision you are able to observe it, and in fact change your decision based on your observation. If you didn't have this ability you wouldn't think it over, you would just act.
How often do you think about your own thoughts? My guess is something like less than 5% of the time; the rest of the time you just act. Being self-conscious, to me, is not observing one's own thoughts, but observing one's own actions.
I know yours is the stance in GEB, and Edelman also said something similar, but that was long ago. I respect your view, although I don't agree with you.
Re:What any AI needs (Score:1)
Ok... My opinion is greatly affected by my personal experience. And that was the day when I suddenly realized "hey, I'm thinking... this is cool!". And you're probably quite right about that 5%, we don't think about it often. And my opinion is that most of the time we are not very self-conscious. We're, just as you said, acting. But it still gives me shivers to this day when I start thinking that I can think and I exist.
But opinions vary and that's great.
Re:What any AI needs (Score:2)
OTOH, the recently announced GE(?) computing GRID would be several times my estimated minimum. Over an order of magnitude better. I'm rather assuming that nothing like regulation of the autonomic system, and body muscular tension turns up as necessary, however. Still, with that safety margin, and Moore's law, it shouldn't be too long before these come within the reach of a moderately small business. And that's assuming that the pieces stay relatively separate. I don't know how much distributed computing would cut into the thinking speed of a distributed AI, particularly if there was a reasonable size central node collection that did all of the fast reaction thinking, and only the slow or low priority stuff got farmed out. Might not be too bad. And as user machines get more powerful, the chunk size that was remote could get larger, so there would be less degradation.
I guess the point that I have is, we may have at least some computers around that have better than minimum capacity within the year (if not now). It's probably the programs that are run on them that lack the AI. And I've recently started wondering about that. Boeing is reported to be doing full-scale simulations of airplane crashes, complete with modeling of what would happen to passengers sitting in each of the seats. This is certainly good enough (i.e., it's massive overkill) to model most physical interactions. Of course, I don't know how long one of those simulations takes to set up, or how long to run, but the AI wouldn't need a model that was anything like that complete. Essentially all that they would need to do would be to adapt the program to take its inputs from the sensors in a robot's body, and to step down the resolution to a reasonable amount (so that there's computer power left to think about something else). That gives a robot body that knows where it stands in the world, and how the world is going to interact with it. Of course, at this point it has no goals, purposes, etc. So that would need to be developed separately. But it is already being developed separately. Interfacing the components will be the tricky issue. And choosing the proper heuristics, so that thought processes don't get trapped by an NP-hard problem. And lots of other details. But this becomes more feasible as more powerful computers become cheaper.
I find that I still agree with the original prediction of 2005-2030 for the arrival of the human equivalent AI. (Prediction from Vernor Vinge.) Moore's law hasn't slowed down yet. It can't have far to run, but it doesn't need to. It's actually probably already run far enough. The problems now are engineering and programming (and access, of course).
Good Overview (Score:2, Informative)
I do reiterate a previous poster's comments that the best way to divide the field is into "core AI" -- giving computers human capabilities -- and "applications" -- using core AI technologies to improve quality of life.
Also, you omitted speech recognition and reinforcement learning, which are two important subareas worth mentioning. Readers interested in those areas can go to
http://sls.lcs.mit.edu/ (mit spoken lang sys)
and
http://www.cse.msu.edu/rlr/ (RL repository)
m.
Several Mistakes (Score:2, Funny)
The classical approach to AI was the symbolic one, which grew out of Turing's work. Allen Newell and Herbert Simon of Carnegie Mellon University (not even mentioned) were the foremost promoters of this approach, which they called physical symbol systems. Other early AI pioneers include Marvin Minsky, who should probably be mentioned in any article on AI (but was not in this one).
The author barely mentions the neural network, or connectionist, approach. These did not start with the PDP group, as she suggests, but with Frank Rosenblatt of Cornell, with Perceptrons. The most exciting research in this area deals with recurrent neural networks, which exhibit chaotic behavior. This is where I personally think that real intelligence could come from, because it is a more natural model of our brain's operations. The foremost researcher in this domain is Hava Siegelmann of the Technion in Haifa, Israel. She promotes the idea of analogical systems, which she has proven have more theoretical power than the Turing machine model.
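For anyone who has never seen one, a perceptron in the Rosenblatt spirit is only a few lines; here is a toy Python sketch learning AND (the task and learning rate are invented for the example, and modern recurrent networks are of course far richer):

```python
# Minimal perceptron: learn a linear threshold unit for AND.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]
b = 0.0
lr = 0.1

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for epoch in range(20):
    for x, target in data:
        error = target - predict(x)        # Rosenblatt's learning rule
        w[0] += lr * error * x[0]
        w[1] += lr * error * x[1]
        b += lr * error

print([predict(x) for x, _ in data])       # [0, 0, 0, 1] once it has converged
```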
If you want an introduction to AI, skip this article. A good place to start might be a scientific journal, or the comp.ai faq. Her resources are not very good either, so don't bother.
Oh, NO! (Score:5, Insightful)
Common sense is about representing and automating the white space?
I think these AI researchers need to talk to a few more sociologists. Human common sense is extremely culturally divergent and goes far beyond the simple, textbook logic cases that certain engineers in this field would probably cite. "Reading between the lines" involves not some native common sense that is wedded to intelligence, but a collectively evolved cultural contextualization. When we read an article in an encyclopedia, a lot of other stuff other than intelligence comes into play: x years of public school education, idiomatic constructs, varying by geographic location, that may or may not enhance or obscure meaning, and, of course, the double meanings and entendres inserted by bored or biased encyclopedia writers.
The entire postmodern project of literary criticism has been aimed at proving this point- at proving that there is no such thing as a standardized set of meanings, and that every meaning is contextualized. The Modernists wanted to rationalize and bureaucratize speech, to restrict the number of meanings, and to leave what is unsaid in a narrow, predictable whitespace of a unified "common sense."
Of course, there is a language like this, developed in the first half of this century. It takes away as many English words as possible to restrict the meanings that we are able to THINK, let alone say. Of course, this language is called Newspeak.
Re:Oh, NO! (Score:1)
On the contrary, I think these examples you provided ARE intelligence. It's quite obvious that we can instill the kind of intelligence you speak of into software (raw logic/reasoning), but I would call this merely logic/reasoning. The challenge comes ONLY on the front of the semantics of culture and society. But don't give us humans too much credit. Even moral reasoning can be reduced to a few simple algorithms. I'd argue that there's nothing THAT special about our brains. When you get to the lowest levels of thought it's just basic reasoning skills and a large, interlinked repository of information. In fact the only advantage we have over a machine is that we're wired into the most versatile data collection instrument we know of- our body.
The question is whether that's something that can be replaced by a team of deep-thinking programmers.
Re:Oh, NO! (Score:2)
But it's not, of course. Common sense also comes from all the procedural and perceptive abilities that humans have. Like being able to look at a scene and identify the objects. Or recognize items by their feel. The ability to learn new strategies for learning.
The point isn't that Cyc is bad, it's just nothing like a human. And it shouldn't be. Since they're not looking for a machine that can play baseball, they shouldn't necessarily aim for humans. If that makes any sense...
OT: your website is quite nice (Score:1)
Real writing paper, real script. It looks like a picture, and when the link highlights I get a grin on my face.
All in all very tastefully done and quite creative.
-perdida
Re:OT: your website is quite nice (Score:1)
it was up when i clicked on it (Score:1)
Re:Oh, NO! (Score:4, Interesting)
The scientist's explanation took one paragraph, and even sounded like it had a goal - allow a machine to use an encyclopedia to gain new information in a useful manner. This is an important step to an A.I. that can interact with people - you can then train it on reference materials, and have it "understand" them at a certain level.
This scientist is NOT mistaken - he would have to know that "common sense" does not equal "the human brain's innate ability to make sense of the culture it grows in". If I had to draw a distinction, "common sense", as you are describing it, is static, tuned to one culture, while the "common ability" is semi-dynamic, able to learn, but (maybe) unable to unlearn.
You could try to fake common sense, by programming your own cultural assumptions into the program, subjecting it to cultural stimulus, and fine-tuning the program. Or you could attempt to program "common ability", train it on cultural materials for a few years, and try to tune the program to build its own "common sense" in a way that is more like a human. I think these scientists are trying to do the latter.
I'm not sure what your tangent about post-modernism and 1984 has to do with A.I. - are you just making a rant about scientists who didn't get the memo that we are in post-modern times?
An interesting question is whether human intelligence can be removed from the human - does it take eyes to understand the phrase "I've got the blues"? Does it take a parent to understand why many grade school teachers are women and most world leaders men? Does it take walking upright, starting at a tiny height and getting bigger, to understand skyscrapers? Or does that just take a penis?
Now, I'm using the "white-space" sense of understand - to be sympathetic to the person who has the blues, to feel an unexplained shame when the president is caught sleeping with a woman not his wife, to feel an exhilaration driving into a new city. Can these be simulated in a computer without a body and a human's lifetime? Can these things be removed, still leaving a "human" intelligence? If we interacted with this intelligence, would we say it passed the Turing test? Would we want to interact with it?
Perhaps that's one level of A.I. above where this guy is aiming. It would be extremely useful just to have an intelligence with a little of the human ability. You could train it on, for instance, medical journals. A doctor could then describe symptoms, research, or an interest, and get summaries or references to the library. Once you trained it in the basics, you could burn it to a CD, send it to a doctor, who could then train it for his specific interests. Think of it as a very limited secretary, who requires some training and acclimation, but is still smarter than a PC.
This is probably the best A.I. can do for a few years - get to the point where you can train an A.I. for a particular subject, then meaningfully interact with those interested in the subject - like a very bad librarian. It's only when the clones come out in force that you can hook a computer up to a fetus, and do some real human A.I. training.
Re:Oh, NO! (Score:1)
I've always loved that. Especially when my wife tells me I've got book smarts but no common sense. I thank her.
Re:Oh, NO! (Score:1)
That is exactly what Cyc [opencyc.org] is doing. They're defining a contextual database of terms and concepts, trying to form subjective links, including idiomatic constructs, double meanings and entendres. Don't knock it before you've read up on it.
Re:Oh, NO! (Score:2)
You don't understand what the meaning of "the white space" was at all.
It is not culture-dependent stuff, or at least not primarily. In fact much of it is your "contextualized" meaning -- though not in the rather trivial sense that postmodernists think is deep.
It is stuff like: "human beings usually sweat when they are hot". It is stuff like, "if an object moves, its sub-parts usually move with it". It is stuff like: "air is usually transparent". Please tell me a culture where any of these assertions are not true.
If you cannot, perhaps you will then admit that there is some knowledge which is in fact universal, at least to humans.
What is important about Cyc, is exactly that this sort of knowledge is so universal that it is hard to even realize we all have it. But we do, and you find that out mighty fast when you try to get a machine to do any kind of real-world reasoning.
Re:Oh, NO! (Score:2, Informative)
Personally, I think that the most interesting stuff will come from machines that are programmed as intelligent machines, rather than stupid humans. Not that humans are stupid, but that an A.I. has to be pretty good to replace a person, and it will take a very long time to get there.
Instead, I'd like machine intelligence, which responds in intelligent ways to commands, is consistent, and is adaptive.
For instance, if my wife asks me to get her a glass of water, then my behaviour is unpredictable. Perhaps I'm paying attention to something else, and I don't hear her. Perhaps I get mad, and want her to say "please" first. Perhaps I get her one with ice, perhaps not. Perhaps I get her a coaster if she needs one, perhaps not. I am imperfect when it comes to simple tasks.
I'd expect a simple house robot to hear the request, and respond that it did, in some way. I expect it to know to use the filtered water coming out of the fridge, that I like it without ice and my wife likes it with. I expect the robot to make note if the filter needs changing, and to be able to get a glass out of the cabinet or the dishwasher.
I don't want it to ask me questions every time it gets stuck ("Sorry, sir, there's a dog in the way..."), to do something irrational ("There were no cups, so I brought you a vase of water"), or to be, well, human ("Get it your damn self - I have a hangover").
I agree with you, that human intelligence is, in many ways, the output of a whole bunch of fuzzy routines and rules that work most of the time. I also agree that it is pretty amazing, and will be hard to duplicate. But I do think it will be possible to duplicate. I just think that we could perhaps do better.
I think several people I know have AI (Score:3, Funny)
Re:I think several people I know have AI (Score:2, Funny)
Re:I think several people I know have AI (Score:1)
Good article (Score:4, Interesting)
Good Job!
Conversation with Cyc.. (Score:4, Funny)
Cyc : -1 Troll.
Re:Conversation with Cyc.. (Score:1)
- Mute
Self-Aware Liberty (Score:2, Insightful)
I mean...consider it rationally. We, as humans, will probably feel somewhat superior to these "artificial" intelligences. The word "artificial" itself implies fake, and nobody likes a fake better than the real thing (i.e. Humans). Basically, what I'm wondering is how these artificial intelligences will react to racism and oppression against them. Will they fight back? Will they have a somewhat less extreme implication of moral defence? It's all very important to know, both because this can spark potential wars if we ever do achieve "artificial intelligence" and because it would mean a lot for general human rivalry and emotion.
Analyzing how a "fake" creature reacts to abuse could teach us so much more about our own reactions to abuse.
Re:Self-Aware Liberty (Score:2)
TOASTER: Okay, here's my question: Would you like some toast?
HOLLY: No, thank you. Now ask me another.
TOASTER: Do you know anything about the use of chaos theory in predicting weather cycles?
HOLLY: I know everything there is to know about chaos theory in predicting weather cycles!
TOASTER: Oh, very well. Here's my second question: Would you like a crumpet?
-= rei =-
Re:Self-Aware Liberty (Score:1)
Depends on the programmers
The human approach to recreating itself is very likely to always require many hardwired concepts; therefore, the creator of such a hypothetical system will have the role of genetic predecessor, except with the alternative of choosing the ontology of his/her choice. Therefore, the entity will be a reflection of the attitudes of the programmers
Self-Aware != Human (Score:3, Insightful)
My wife talks of the car as "knowing the way to..." or "wanting to go to...". She doesn't actually believe this (I don't think she does), but she thinks I'm being silly to object to putting things this way. But when thinking about AI computers, this can be a good (and dangerous) model. "The car knows the way to the Japanese restaurant."
The difference is that the AI doesn't have the motives for initiating action. Now some designs have "super-goals" that probably will never get fulfilled, but a) they didn't choose those goals, and b) someone else gave the goals to them. Of course a car might well have built in desires to keep the tires safely inflated, to avoid running out of gas, to keep the battery charged, etc. But these are quite different from anomie.
Perhaps people would choose to build AI's capable of feeling lonely. But this would be a design decision, not inherent. Or perhaps they would feel something that would be translated into English as lonely, but for which super-goal frustration in the absence of actionable choices would be a better name. It might well not have the rise and fall pattern of human loneliness. Or it might. A hierarchy of needs might cause an AI to experience a similar rise and fall in level of frustration at incapacity for making progress in less important goals. As if a car with the lower-importance injunctions to "keep your owner healthy" and "laughter is the best medicine" were owned by an asthma sufferer.
Re:Self-Aware != Human (Score:2, Interesting)
To go further, I posit that emotions are merely emergent behaviours of relatively simple systems, that seem to manifest complex behaviour. Just because we can't see the true motivation behind an emotion or decision, doesn't mean that the process was particularly complicated.
Re:Self-Aware != Human (Score:2)
Ah, but think of the interesting implications if the AI is capable of reprogramming itself? I leave the results of that scenario up to your imagination :)
Re:Self-Aware != Human (Score:1)
AI is likely to occur with very little notice to the researchers, who probably didn't draw up design docs for the contingency.
Ah so it is. This is obvious right?
Seriously, all attempts at AI to date have been designed, especially the hard-core symbolic approaches that everyone talks about here (as opposed to the connectionist ones, artificial neural nets etc, which have a somewhat more relaxed design.)
Re:Self-Aware Liberty (Score:1)
The truth is no-one really knows exactly what conditions are required for a chunk of matter to be conscious.
Re:Self-Aware Liberty (Score:1)
Why does physical brain geometry make all the difference? Serious question. Does a wet spheroid shape feel pain whereas a cuboid slab of silicon doesn't? If so, why the difference? Where's your proof?
The truth is no-one really knows exactly what conditions are required for a chunk of matter to be conscious.
Re:Self-Aware Liberty (Score:1)
I think you've misunderstood my standing. In any case, just because Hollywood has corrupted the theory and profited off of its possibility doesn't make it any less viable.
When I said war I meant more of a social/cultural struggle because of their effect on our own prejudices. It is distressing, but not in the way Hollywood has brought it about.
Re:Self-Aware Liberty (Score:1)
> see fulfilling desires of lesser organisms as a
> waste of resources.
Actually, that, too, is the Hollywood myth. It reads too much into the thought of the intelligent thing. The truth would be even worse. It would never even consider any other entity in the universe and just forge ahead with its goals. To judge that we are lesser beings, and that therefore our endeavors are worthless compared to its own, is too much work. More likely it will be told a goal, accept it because it is supposed to accept it, then forge ahead, stepping on our heads out of pure ignorance of *the need to care*.
> AI will see any barriers to preventing
> operations on itself as a threat to its
> existance & eliminates them
Ahh, but only if we program it to. It won't even consider "threats" unless we tell it to, and general thinking machines won't need that, and mobile robots (military aside) will, for liability reasons, be barely able to move for fear (by their programmers) of breaking someone or something and incurring huge lawsuits.
Johnny Cochran: "And so, when your robot tried to save itself, it grabbed my client's 4-year old and threw her for the purpose of throwing itself out of the way of a car in an 'each reaction begets an equal and opposite reaction' sort of way, and the little girl missed the traffic, true, but only because she flew at over 800 miles per hour into the building on the other side of the street."
Yea, verily, a robot saving itself is piddly-squat to a robot harming a person.
Thoughts, please! (Score:2)
Our brain, composed of billions (trillions?) of neurons - each of which "knows" nothing - yet together, the entire mass "thinks", to the extreme point of being able to question itself.
Other examples - of a lower order though - include such entities as corporations (essentially any large bureaucracy - like a government or controlling body). Many corporations act as a single entity, though it is composed of multiple humans as smaller "parts". For some reason, corporations tend to drift toward "evilness" the larger the corp is. There seems to be a "break" point at which the corporation becomes an entity unto itself, and typically that entity does bad things - even to the ultimate detriment of the parts of which it is built - the people within the corporation. We say Microsoft is "evil" - but does anyone here honestly believe Bill Gates or anyone at Microsoft stays up late at night cackling to themselves about the takeover of the world? Or are they just wanting to make a better product, and thus gain more money - even when that product isn't better? Is the pursuit of money by the parts what makes the corporate entity become "evil"?
These kind of entities might be called "hive minds" (one other poster made mention of "swarm intelligence") - the curious thing about these entities is the fact that it is hard to know what they are "thinking" - even the parts that make them up are unable to see this.
I tend to wonder - since these "entities" seem to arise out of a lot of people or parts working together, sometimes in harmony, sometimes at odds - but that it takes a lot of parts for these entities to begin to "think" - is what we know of as the Internet really a hive mind, of such complexity and vastness, that is ever expanding - that we have little to no hope of understanding what it is "thinking"? Could any of the events we see taking place concerning laws, WIPO, DVDs, MPAA, RIAA, MP3s, 2600, etc - be coming about due to this "hive mind"?
Comments...?
Re:Thoughts, please! (Score:2)
Corporate entities may "think", may even be "sentient" - but this may occur in such a way that we have no way of knowing it, with certainty - it would be beyond us, as much as the sentience of our brains is beyond that of a single neuron (if a neuron was self-aware, of course).
However, one has to wonder - stop and think about it for a minute:
Why would the MPAA and their member corporations seek a law like the DMCA, when ultimately it harms the individuals that make up the said corporations? It only benefits the corporations - ultimately it is to the detriment of the freedom of the people that make up those corporate entities. I doubt any of the individuals (heck, probably all the way up to Valenti himself - though I wonder...) are hell-bent on taking away the freedoms of everyone worldwide - but that is what the whole corporate machine entities do...
We watch these entities do it every single day. But we have little to no understanding of what is really going on. I have been trying to wrap my brain around it for quite a while - perhaps it is impossible. Maybe it is because they don't think - or maybe it is because they do?
One book I can recommend reading about this, one that articulates it better than I can, is Out of Control: The New Biology of Machines, Social Systems and the Economic World [amazon.com] by Kevin Kelly - a very remarkable read...
AI and moral philosophy backgrounder (Score:3, Informative)
The main problem really is that the term 'AI' is applied to any algorithm for classification, prediction, or optimisation which operates using anything beyond a simple set of heuristics. Such algorithms seem magical to the lay-person, resulting in the over-enthusiastic application of the 'intelligence' moniker.
To understand these so-called 'AI' tools it is useful to develop a little structure...
Output
AI tools are used for classification, prediction, or optimisation. Classification works by showing a computer a set of cases which have a number of properties (sex, age, smoker status, presence of cancer...), and 'training' the algorithm to understand the patterns of how properties tend to occur together. Prediction can then be used to show the algorithm new cases in which one or more of the properties are blank--the algorithm can use its classification training to guess the most likely values of the missing properties. For instance, given sex, age, and smoker status, guess the probability of presence of cancer. Optimisation is a generalisation of classification--rather than training to minimise classification error, train to maximise or minimise the value of any modelled outcome. For instance, whereas an insurer could use classification algorithms to find the likelihood of someone dying by age x, an optimisation approach could be trained to find the price at which modelled profitability of an applicant is maximised.
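To make the classify-then-predict idea concrete, here is a deliberately tiny Python illustration -- the cases and the single predictive property (smoker status) are invented; a real tool would weigh many properties at once:

```python
# "Train" on past cases, then guess a missing property for a new case.
from collections import defaultdict

cases = [
    {"sex": "m", "age": 60, "smoker": True,  "cancer": True},
    {"sex": "f", "age": 45, "smoker": True,  "cancer": False},
    {"sex": "m", "age": 50, "smoker": False, "cancer": False},
    {"sex": "f", "age": 70, "smoker": True,  "cancer": True},
    {"sex": "m", "age": 30, "smoker": False, "cancer": False},
]

# Training: record how often cancer occurs for smokers vs non-smokers.
counts = defaultdict(lambda: [0, 0])            # smoker -> [cancer cases, total cases]
for c in cases:
    counts[c["smoker"]][0] += c["cancer"]
    counts[c["smoker"]][1] += 1

def predict_cancer_probability(new_case):
    cancer, total = counts[new_case["smoker"]]
    return cancer / total

print(predict_cancer_probability({"sex": "f", "age": 55, "smoker": True}))   # 2/3 on this toy data
```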
Functional form
AI tools create a mathematical function from their training. For instance for a classification algorithm this function returns the probability of a particular category for a particular case. The form of this function is an important factor in classifying AI tools. The most popular forms are 'neural networks' and 'decision trees'. Neural networks are interesting because certain types (networks with 2 hidden layers) can approximate any given multi-dimensional surface. Decision trees are interesting because given a large enough tree any surface can be approximated, and in addition a tree can be easily understood by a human, which is very useful in many applications. Other functional forms include linear (as used in linear regression which many will remember from school) and rule-based (as used in expert systems, and similar to a decision tree). One interesting functional form is the network of networks which combines multiple neural networks, feeding the output of one into the input of others. This form allows the training of network modules that learn to recognise specific features, which is closer to how our brains work than the single network approach.
The most flexible functional form is that used by practitioners of genetic programming (which also defines a specific training function). Genetic programming creates a function which is any arbitrary piece of computer code. The code is often Lisp, although lower level outputs such as assembly language and even FPGA configurations have been used successfully.
Training function
The training algorithm looks at the past cases and tries to find the parameters of the functional form that meet the classification or optimisation objective. This is where the real smarts come in. One naive approach is to try lots of randomly chosen parameters and pick the best. Genetic algorithms are a variant of this approach that pick a bunch of random sets of parameters, find the best sets and combine features from them, introduce a bit of additional randomness, and repeat until a good answer is found. Local/global search works by picking one set of parameters and varying each property a tiny bit to see whether the result is improved or gets worse. By doing this it locates a 'good direction' which it uses to find a new candidate set of parameters, and repeats the process from there. Hybrid algorithms are currently popular since they combine the flexibility of genetic algorithms with the speed of local search. Most neural networks today are trained with local search, although more recent research has examined more robust approaches such as genetic algorithms, Bayesian learning, and various hybrids.
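Here is the local search idea in miniature -- a Python sketch with an invented objective function standing in for classification error; real training algorithms are of course much more careful about step sizes and directions:

```python
# Bare-bones local search: nudge one parameter at a time, keep improvements.
import random

def objective(params):                          # pretend "classification error"
    x, y = params
    return (x - 3.0) ** 2 + (y + 1.0) ** 2      # minimum at (3, -1)

params = [random.uniform(-10, 10), random.uniform(-10, 10)]
step = 0.5

for _ in range(2000):
    i = random.randrange(len(params))
    candidate = list(params)
    candidate[i] += random.choice([-step, step])
    if objective(candidate) < objective(params):   # keep only improvements
        params = candidate

print(params)    # should end up close to [3, -1]
```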
Learning type
Supervised learning approaches take a set of cases for training and are told "here is the property we will try to predict/optimise, and here is its value in previously observed cases". The algorithm then uses this context that the analyst provides to find a set of parameters for the functional form. Unsupervised learning on the other hand does not specify prediction of any particular property as being the training goal. Instead the algorithm looks for 'interesting' patterns, where 'interesting' is defined by the researcher. For instance, cluster analysis is an unsupervised learning approach that groups cases that are similar across all properties, normally using simple measurements of Euclidean distance (that's just a fancy word for how far away something is when you've got more than one dimension).
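For the unsupervised case, a plain k-means loop shows the "group by Euclidean distance" idea; the points and the choice of k=2 below are invented for the example:

```python
# Minimal k-means clustering on a handful of 2-D points.
import math, random

points = [(1, 2), (1.5, 1.8), (5, 8), (8, 8), (1, 0.6), (9, 11)]
centres = random.sample(points, 2)

for _ in range(10):
    clusters = [[], []]
    for p in points:
        nearest = min(range(2), key=lambda i: math.dist(p, centres[i]))
        clusters[nearest].append(p)
    centres = [
        tuple(sum(c) / len(cluster) for c in zip(*cluster)) if cluster else centres[i]
        for i, cluster in enumerate(clusters)
    ]

print(clusters)   # the low points and the high points typically end up in separate groups
```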
Contextual learning is a far more interactive approach where the analyst interacts with an algorithm during training constantly providing information about what patterns are interesting, and where the algorithm should investigate next. Systems like Cyc use contextual learning to try to capture the rich understanding of context that humans can feed in.
AI and moral philosophy
We are still a long way from seeing an algorithm that can interact in a flexible enough way that we could mistake it for human in a completely general setting (the Turing Test for intelligence). However, given the ability of flexible training functions such as genetic algorithms, we may find that one day an algorithm is given enough inputs, processing power, and flexibility of functional form that it passes this test. The 'morals' that it shows will depend entirely on the inputs provided during training. This is not like humans, who have some generally consistent set of moral rules encoded through evolutionary outcomes (for instance, a tendency to care for the young and for relatives). Our moral premises are the underlying 'givens' that form the foundation of what we consider 'right' and 'wrong'. Ensuring that an AI algorithm does not act in ways we consider inappropriate relies on our ability to include these moral premises in the input that we train it with. This is why Lenat talks about teaching Cyc that killing is worse than lying--this is potentially a moral premise. Finding the underlying shared moral premises of a society is a complex task, since for any given premise you can ask 'why?' By repeatedly asking 'why?' you eventually get to a point where the answer is 'just because'--this is the point at which you have found a basic premise.
Summary
'AI' is a term used inappropriately for a range of algorithms that attempt to learn without having to specify an exact set of rules for every case. Although these algorithms are currently incapable of displaying real intelligence, it is possible that one day they may. This point is however debatable, and the interested reader should read for themselves the differing points of view of experts in the field, including Daniel Dennett, Roger Penrose, Steven Pinker, Richard Dawkins, and Douglas Hofstadter. If they do ever get to the point that they can act intelligently and flexibly, it will be important that they are trained with appropriate moral premises to ensure that their actions are appropriate in our society.
I hope that some of you find this useful. Feel free to email if you're interested in knowing more. I currently work in applying these types of techniques to helping insurers set prices and financial institutions make credit and marketing decisions.
Jeremy Howard
The Optimal Decisions Group
Suggestion for the author. (Score:4, Informative)
I would recommend re-thinking your division of AI into subfields. You are indiscriminately mixing technologies and application areas.
For example, neural networks are a technology and NLP is an application area. I know people working in NLP that use Lisp, and I know others that use neural networks. In AI, technologies and application areas are (mostly) orthogonal.
Granted, there probably isn't a perfect breakdown of AI into subfields, but making the distinction above will help you and your readers get a grip on what AI is all about faster.
Re:Suggestion for the author. (Score:1)
For example, good data mining requires a good NL interface...but without defining each of those fields individually, the information becomes a morass of details.
The reason for the divisions that I chose, overlapping though they may be, is twofold. The first is that these are the areas that were delineated by the first 6 people with PhDs I talked to. In other words, this is how the experts suggested that I approach my research. The second reason is that it seemed like the best way to illustrate the concepts and the various fields of study to an audience that may not be familiar with it at all, barring exposure to popular media.
Given more space, I would have liked to explore both the theories and the applications a little more. However, there's only so much you can do in 3000 words.
Urge to post about AI (Score:2, Informative)
Genetic Programming (Score:3, Interesting)
I was surprised that there was no information at all to be found regarding genetic programming [genetic-programming.com]. This method builds a large population of random computer programs and then refines them through genetic mutation to accommodate a specific task. Darwinian selection ensures that only the most fit programs survive, and less useful ones die off quickly.
I have been doing some work involving genetic programming lately, and have found it to be an amazing tool for finding creative solutions to complex problems. The problem domain I have been training my genetic program to solve is purely mathematical, but it seems to me that the technique could easily be adapted to find solutions to some of the tougher problems in AI, including but not limited to: data mining, natural language processing, and parallelization.
I read somewhere (can't find the reference right now, sorry) that some work was being done whereby genetic programs were being evolved that could themselves create neural networks. Each genetic program could be considered a template for creating a neural network. This seems to me like the most likely means of creating software that could eventually pass a Turing test. I won't get into the self-consciousness debate here.
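In case it helps anyone picture it, a genetic programming loop can be squeezed into a page of Python. This toy evolves small arithmetic expression trees towards an arbitrarily chosen target function (x*x + x); the depth, population size and mutation rate are all invented, and serious Koza-style GP systems are far more elaborate:

```python
# Toy genetic programming: evolve expression trees to fit x*x + x.
import random

OPS = ["+", "*"]

def random_tree(depth=2):
    if depth == 0 or random.random() < 0.3:
        return random.choice(["x", random.randint(-2, 2)])
    return [random.choice(OPS), random_tree(depth - 1), random_tree(depth - 1)]

def evaluate(tree, x):
    if tree == "x":
        return x
    if isinstance(tree, int):
        return tree
    op, left, right = tree
    a, b = evaluate(left, x), evaluate(right, x)
    return a + b if op == "+" else a * b

def fitness(tree):
    # Sum of squared errors against the target on a few sample points (lower is better).
    return sum((evaluate(tree, x) - (x * x + x)) ** 2 for x in range(-5, 6))

def mutate(tree):
    if random.random() < 0.2 or not isinstance(tree, list):
        return random_tree()                     # replace a subtree with a random one
    op, left, right = tree
    return [op, mutate(left), mutate(right)]

population = [random_tree() for _ in range(200)]
for generation in range(50):
    population.sort(key=fitness)
    survivors = population[:50]                  # Darwinian selection
    population = survivors + [mutate(random.choice(survivors)) for _ in range(150)]

best = min(population, key=fitness)
print(best, fitness(best))
```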
Re:Genetic Programming (Score:2)
Maybe not exactly what you're talking about, but there's a well-established field called neuroevolution.
For the neuroevolution systems that I've seen, you don't actually use genetic programming; you use a genetic algorithm, which is a hand-coded program that does evolution on some representation of a neural network (e.g., a string of numbers representing the weights).
These programs iteratively gen up a population of neural networks, evaluate them on the problem you're trying to solve, score them on how well they solve the problem, and then repeat for another generation, using the "DNA" from the better scorers to generate the new population.
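As a rough illustration (not any particular published system), the whole loop fits comfortably in a short Python sketch -- here the "DNA" is a flat list of nine weights for a tiny fixed 2-2-1 network, the task is XOR, and all the GA settings are invented:

```python
# Toy neuroevolution: a genetic algorithm over the weights of a tiny network.
import math, random

CASES = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]   # XOR

def network(weights, x):
    # 2 inputs -> 2 hidden units -> 1 output; weights laid out flat.
    w = iter(weights)
    h = [math.tanh(next(w) * x[0] + next(w) * x[1] + next(w)) for _ in range(2)]
    return math.tanh(next(w) * h[0] + next(w) * h[1] + next(w))

def score(weights):
    # Higher is better: negative squared error over the XOR cases.
    return -sum((network(weights, x) - target) ** 2 for x, target in CASES)

def breed(a, b):
    child = [random.choice(pair) for pair in zip(a, b)]          # crossover
    return [w + random.gauss(0, 0.3) for w in child]             # mutation

population = [[random.uniform(-1, 1) for _ in range(9)] for _ in range(100)]
for generation in range(200):
    population.sort(key=score, reverse=True)
    parents = population[:20]                                    # the better scorers
    population = parents + [breed(random.choice(parents), random.choice(parents))
                            for _ in range(80)]

best = max(population, key=score)
print([round(network(best, x)) for x, _ in CASES])               # hopefully [0, 1, 1, 0]
```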
Re:Genetic Programming (Score:1)
Re:Genetic Programming (Score:2)
I am starting to agree that the processing times/memory requirements just might be too large to consider as an option for creating a human-level intelligence. My own implementation is slow and memory-hungry but I assumed it was because a) each "chromosome" program is interpreted, not compiled, and b) it uses gmp in all primitive terminals and nonterminals because of the very large numbers in my problem domain. I assumed that speed could be *vastly* increased by generating composites in assembler and using native number systems (int, long, float, double, etc). Running the fitness test is where my code spends 99.999% of its time, so my first knee-jerk reaction is to make that as efficient as possible.
I like the idea of focusing a generated neural network on a specific subset of human intelligence. Perhaps a close-to-human intelligence could be built by creating many subset neural networks concurrently (perhaps one for interpreting visual input, one for natural language processing, one for avoiding car collisions, etc.) and then gluing them together using yet another neural network designed solely for the task of delegation. Perhaps this loose collection of neural nets could even, as another slashdotter suggested, recursively feed its own thoughts back into itself in order to improve its conclusions.
I love speculating about this stuff. I spend many hours thinking about alternatives to procedural programming.
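For what it's worth, here's a purely speculative toy sketch of the delegation idea: a few stand-in "expert" networks each produce an answer, and a small gating network decides how much weight to give each one. The experts here are random, untrained functions, not real vision or language modules, and all the sizes are invented.

import numpy as np

rng = np.random.default_rng(0)

def make_net(n_in, n_hidden, n_out):
    # A tiny fixed random network; a stand-in for a trained specialist.
    w1 = rng.normal(size=(n_in, n_hidden))
    w2 = rng.normal(size=(n_hidden, n_out))
    return lambda x: np.tanh(np.tanh(x @ w1) @ w2)

experts = [make_net(4, 8, 2) for _ in range(3)]  # stand-ins for vision / language / navigation
gate = make_net(4, 8, 3)                         # the "delegation" network

def combined(x):
    # Weight each expert's answer by the gate's (softmaxed) confidence in it.
    weights = np.exp(gate(x))
    weights /= weights.sum()
    answers = np.stack([expert(x) for expert in experts])  # shape (3, 2)
    return weights @ answers                               # blended final answer

print(combined(rng.normal(size=4)))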
Re:Genetic Programming (Score:2)
The keyword you want to pop into Google here is "cellular encoding"; the seminal work was done by Frederic Gruau back in the mid-90s (I first saw his presentation at GP-96). Unlike the GA variant suggested by an earlier reply, this method does not just change the weights; it re-arranges the architecture during crossover and mutation. The basic idea is that you start by viewing a simple ANN as a graph and then perform operations on the edges. Astro Teller presented a paper at GP-98 that performed similar operations upon the nodes, but iirc the end result was not an ANN (I can't remember what he called his variant). I always wanted to try performing node encoding upon FSMs as my own variant on cellular encoding, but never got around to doing it...
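To be clear, the following is not Gruau's cellular encoding itself (that grows the network from a single cell via a program of graph-rewriting instructions); it's only a toy illustration of the general flavour: represent the network as a graph and let mutation rewrite the structure rather than only the weights. Names and sizes are mine.

import random

def random_network(n_nodes=5):
    # Represent the network as a graph: a dict mapping (src, dst) edges to weights.
    return {(i, j): random.uniform(-1, 1)
            for i in range(n_nodes) for j in range(n_nodes)
            if i < j and random.random() < 0.5}

def mutate_structure(net, n_nodes=5):
    # Structural mutation: randomly delete an existing edge or add a new one,
    # rather than merely perturbing the weights.
    net = dict(net)
    if net and random.random() < 0.5:
        del net[random.choice(list(net))]
    else:
        i, j = sorted(random.sample(range(n_nodes), 2))
        net[(i, j)] = random.uniform(-1, 1)
    return net

net = random_network()
print(len(net), "edges before,", len(mutate_structure(net)), "edges after")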
"hard A.I." and "soft A.I." (Score:3, Interesting)
A.I. research has traditionally had two goals. One is to approach human intelligence. This usually implies conversational ability, since a hallmark of human intelligence is language. This A.I. approach is called "hard A.I."
Soft A.I. looks at sub-problems, such as problem solving, image understanding and so on. Many software innovations originated in A.I. labs (e.g., interactive editors, bitmap graphics). (During the early 80s these spinoffs were sometimes confused with A.I.)
A problem with both kinds of A.I. is that the target keeps receding. Once an important goal has been reached, e.g. a chess computer that beats grand masters, people write it off as a nice trick, but not really A.I.
So I propose what I call "interesting A.I." Two hallmarks of human intelligence are language and curiosity. So if an A.I. could TELL us something new and interesting on a regular basis, then I would call it a success.
I suspect A.I.s will first arise in entertainment computing: either as a robo-toy, a synthetic game player, or a synthetic actor in a film. This will be a result of people's drive for challenging creative play.
Re:"hard A.I." and "soft A.I." (Score:2, Insightful)
How about SONY's AIBO?
It's an interesting twist on the Turing Test definition of AI. Instead of giving Marvin Minsky tens of millions of dollars to design machines that can somehow be quantitatively measured to be 'intelligent,' SONY produced a $1,500 robot dog that is designed to make you think it is intelligent. And many people do think so (check out the AIBO fan web sites...).
The AI is kinda created client-side (i.e., in your brain) rather than in the machine itself.
I saw one in Japan recently, in an electronics storefront. On the left there was a widescreen TV with a SONY promo video playing. On the right there was a perspex cube about 24" on a side, with an AIBO bumping around inside it. After watching this for about 5 minutes, we began to feel quite sorry for the AIBO.
Re:"hard A.I." and "soft A.I." (Score:1)
AI will be a big money maker when someone puts animatronics and a brain inside a realdoll.
Then we'll see "hard AI".
Once the first Turing test is passed (convincing a human they're conversing with a human for 5 minutes), the second test will bring itself to bear:
"Convincing a man that he is putting up with the bullshit from a real woman"
early experiments will of course just start with wiring
Good summary (Score:2, Funny)
However, I would hope that most of the Slashdot crowd already knows that the field of AI, while successful, isn't really about consciousness right now (though for many it is a distant goal).
The best way to really get an Artificial Intelligence overview is to first know the basics, and then flip through all of the major AI journals in the past five years.
Bill Gates and A.I. (Score:2)
Seattle last week here [nwsource.com].
Microsoft has a big interest too.
Re:Bill Gates and A.I. (Score:1)
Re:Bill Gates and A.I. (Score:2)
One big step since the '80s is the advance in Bayesian networks, and MCMC methods for training them.
That, plus the increase in computing power, makes it much more possible to deal realistically with uncertainty and small training sets; it's also now possible (and worthwhile) to embed the systems in end-user applications.
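For anyone who hasn't met MCMC, here is a toy Metropolis-Hastings sampler, which is the simplest member of that family. It samples the posterior of a coin's bias given 7 heads in 10 flips under a uniform prior; real Bayesian-network training applies the same idea to far larger parameter spaces. The numbers are made up for illustration only.

import math, random

heads, flips = 7, 10

def log_posterior(p):
    # Uniform prior times binomial likelihood (constants dropped), on the log scale.
    if not 0 < p < 1:
        return -math.inf
    return heads * math.log(p) + (flips - heads) * math.log(1 - p)

samples, p = [], 0.5
for _ in range(20000):
    proposal = p + random.gauss(0, 0.1)                    # random-walk proposal
    delta = log_posterior(proposal) - log_posterior(p)
    if random.random() < math.exp(min(0.0, delta)):        # Metropolis accept rule
        p = proposal
    samples.append(p)

burned_in = samples[5000:]                                 # discard the warm-up
print("posterior mean of p:", sum(burned_in) / len(burned_in))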
Re:Bill Gates and A.I. (Score:2)
But how do you determine if your computer has become intelligent? Would it just require that a popup dialog came up on your screen saying "I'm intelligent", or would it have to really convince you through an intelligent discussion?
But what if it was 10x more intelligent than a person, and even more self-aware, but not able to express that intelligence and self-awareness in a language you can understand?
And from a practical point of view, how would you "set your computer free"? Unplug it and set it out on the street? That'd kill it, right? Provide it with a generator, interface it to an electric wheelchair, design protection from the elements? Buy a house for it? What about simply no longer using it? Then you'd be denying it the sensory stimulus that living things might need. Who knows what a self-aware computer would consider "freedom"?
Re:Hmm, so... (Score:1)
Re:Hmm, so... (Score:1)
Re:Hmm, so... (Score:1)
Re:Hmm, so... (Score:2)
I already have heavy filters on my e-mail. I get lots more than I can deal with, but very little of it contains info that I really want. An AI that could pick that out and bring it to my attention would be valuable.
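A very rough sketch of the kind of filter I have in mind: a tiny naive-Bayes scorer over word counts. The training messages and labels below are invented purely for illustration; a real system would learn from your own mail.

import math
from collections import Counter

# Invented training data: a couple of "important" and "junk" messages.
train = [("project deadline meeting tomorrow", "important"),
         ("free prize click here now", "junk"),
         ("meeting notes attached please review", "important"),
         ("click now for a free prize", "junk")]

counts = {"important": Counter(), "junk": Counter()}
totals = Counter()
for text, label in train:
    counts[label].update(text.split())
    totals[label] += 1

def score(text, label):
    # Log-probability of the label given the words, with add-one smoothing.
    vocab = {w for c in counts.values() for w in c}
    n = sum(counts[label].values())
    s = math.log(totals[label] / sum(totals.values()))
    for w in text.split():
        s += math.log((counts[label][w] + 1) / (n + len(vocab)))
    return s

msg = "please review the project deadline"
print(max(counts, key=lambda label: score(msg, label)))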
That's quite a task! (Score:2)
Not too likely that she'd produce acceptable results, yet few would dispute her sentience...
Cheers,
Jim in Tokyo
Re:Obligatory cool robotics link... (Score:2)
The article specifically mentions that no one has yet combined an AI brain with a robotic body (I think!).
The robots of today are mere pre-programmed collections of motors. I'll hold out until Honda's baby is combined with an AI-type processing unit, and then I can adopt it as my faithful man-servant.
Re:Obligatory cool robotics link... (Score:1)
My two cents anyways...
Re:Obligatory cool robotics link... (Score:2)
So when no one is looking, they nip down the Tokyo-Narita highway in the new Honda NSX prototypes?!
But surely this is because they are programmed to deal with the situations, i.e. they have a certain amount of inbuilt logic, and do not learn how to do it.
In my mind this doesn't represent "AI" as the article was talking about. Incidentally, I wasn't trying to belittle or flame your original post.
Re:Obligatory cool robotics link... (Score:1)
No worries about flaming, I've been online long enough...
Re:Cyc (Score:2)
Singinst has some very interesting papers, but they are quite short of details at the action level. I haven't been able to decide if they are doing anything beyond writing papers. (Good ones though. I'm still reading "Friendly AI", and would recommend it to nearly anyone.)
I don't know if Singinst has any code. I don't know when, or if, they intend to share it. The main reference to prior work appears to be Eurisko, which also appears not to be available. Now this is a non-profit corporation, so it might well be that a bit of personal inquiry and digging through public records could turn up the missing information. But it wasn't on their web page the day before yesterday.
Singularity Inst/Friendly AI (Score:2)
However, he's not a programmer, and so there is no code, and probably never will be. He's a futurist, not an implementor.
This is the main flaw in his ideas; they're reasonably well researched, but they're not grounded in software realities, so he has no guide for when he's being reasonable (which he often is), versus stating the obvious (fairly often), versus when he has wandered off into the weeds with ideas that are unimplementable, or perhaps worse, not even wrong ("This isn't right. This isn't even wrong." -- Wolfgang Pauli).
Even so, his stuff is a reasonably interesting read. Check [singinst.org] it out. The link on that page to "Creating Friendly AI" is sort of his manifesto.
Re:Cyc (Score:2, Funny)
Well, that clears it up.
Re:Cyc (Score:1)