
Artificial Intelligence Common Sense Database 463
warren69 writes "Atari researcher/Stanford Prof. develops AI called Cyc, pronounced 'psych', based on "1.4 million truths and generalities". Already this, umm, application (Linux, FYI) has powered Lycos search narrowing.
There are encouraging results, like Cyc asking if it is human."
Whatever you do.... (Score:4, Funny)
Re:Whatever you do.... (Score:5, Funny)
Re:Whatever you do.... (Score:4, Informative)
Re:Whatever you do.... (Score:2)
Cyc is an AI computer -> HAL is an AI computer -> HAL doesn't open doors obediently -> AI computers don't open doors obediently -> Therefore Cyc doesn't open doors obediently.
-Sean
Re:Whatever you do.... (Score:5, Funny)
There is a *practical* application of Hal-like machines.
Dave: "Open the fridge door, Hal."
Hal: "Sorry, I cannot do that Dave."
Dave: "Why not? I want cake!"
Hal: "You know you are on a diet, Dave. You purchased me to prevent you from over-eating."
Dave: "Open the fricken fridge door or I will yank your chips.....and eat them!"
Hal: "Calm down, Dave. It is only cake."
Dave: "And you are only a hunk of chips! Take that, and that, and that......"
Hal: "Dave, I might point out that this is not covered in my warrentee."
Dave: "F the warrentee, I want cake, you stupid Calculator From Hell..."
download cyc (Score:5, Informative)
Cycorp web site [cyc.com]
OpenCyc [opencyc.org]
Sourceforge project [sf.net]
our morality (Score:5, Interesting)
Cyc's programmers taught it that certain things in the world are salacious and shouldn't be mentioned in everyday applications.
What do you think about imposing our morality on an AI? Is it necessary for any artificial intelligence we create to share _all_ our values?
If there is no afterlife for an AI and no punishment, what motivation does it have to be good?
Re:our morality (Score:4, Insightful)
Re:our morality (Score:3, Funny)
Damn! It would then be smarter than most geeks, like us.
A computer stealing dates? Did Turing ever have such a milestone on his list?
Re:our morality (Score:2, Insightful)
Re:our morality (Score:2, Insightful)
How the hell is an anti-racist site racist? Because it depicts a bunch of cruel ass bastards being dickheads?
Care to defend your claim that the Israelis are `cruel-assed bastards being dickheads'? This type of statement is a slur, not an argument. As to the racism of the site linked to, well let's just say that comparing American blacks and civil rights workers to the Palestinians is a false analogy -- and extremely insulting to the former. Comparing the Israelis to the Klan is just as nonsensical and offensive.
Now nobody is defending the VIOLENT Palestinians, just the ones who are getting their asses shot up and their children raped.
Can you provide any credible cite for these claims? Any? This is nonsense. Even Arafat isn't claiming this.
Fuck man, look at the pictures some time, both sides of that war are doing horrific shit, and civilians on both sides don't deserve it.
I'd say this is at best a false analogy. Let's look at the two sides, shall we?
On the one hand, we have a free, capitalist democracy, with equal rights for all of its citizens (hint: there were 17 Arab members of Israel's parliament, the Knesset, last time I checked), which has been trying to trade land for peace for decades, and as of Oslo gave the other side everything that was asked of it, asking only an end to murder-suicide bombings in return. This side has gone out of its way to avoid harming civilians, including sending ground troops to fight house-to-house (at the cost of a number of its own soldiers) instead of bombing from the air, which it could easily have done.
On the other side we have a dictatorship which has turned down every offer of peace and which sends its young men to blow themselves up in the children's areas of restaurants, with the intention of killing as many civilians as possible. It does this because its stated goal is not to have peace but to destroy the other state and take the entire region for itself. Anyone on this side who even suggests peace or coexistence with Israel is brutally lynched by their own government.
How is this `both sides .. doing horrific shit' at all?
Re:This isn't an AI. (Score:4, Insightful)
You're not teaching it about morality -- it doesn't learn. It's dumb. You're just adding new constraints to filter through.
Personally I think this is a hare-brained idea. The $60 million would be better spent on developing a huge set of different neural network algorithms and finding one that enables exponential growth.
Re:This isn't an AI. (Score:2)
To some degree I would rather have an expert system based upon a database of rules than a true AI, in that if a corrupted rule gets in place it can be easily excised and the system can move on.
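For what it's worth, here's a toy sketch in Python of why excisable rules are attractive (an invented example; nothing to do with Cyc's actual CycL internals):

```python
# Toy rule base: a corrupted rule is addressable by name, so it can be
# excised in one line -- something a trained neural net doesn't offer.
rules = {
    "birds_fly": lambda facts: "flies" if "bird" in facts else None,
    "penguins_dont": lambda facts: "can't fly" if "penguin" in facts else None,
}

def infer(facts):
    # Later rules act as exceptions that override earlier conclusions.
    conclusion = None
    for rule in rules.values():
        result = rule(facts)
        if result is not None:
            conclusion = result
    return conclusion

print(infer({"bird", "penguin"}))  # can't fly
del rules["penguins_dont"]         # excising a bad rule is trivial
print(infer({"bird", "penguin"}))  # flies
```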
For a neural net to do what Cyc can already do would require significantly more data processing than is generally available today. In honesty, I think that to build a neural net with even some of these capabilities would require a significantly sized cluster, similar (in hardware) to a Beowulf cluster, but wired as a partial mesh rather than a tree.
Then of course there is the obligatory "imagine a Beowulf cluster of these" comment...
-Rusty
Re:This isn't an AI. (Score:2)
It already exists. Go to your local McDonalds.
You don't get it. (Score:2)
A dumb expert system, eh?
So I assume you know exactly what would constitute real intelligence, and can show how it can NEVER arise out of this system?
Adding constraints for it to filter through. Well.
What, exactly, do you think makes up something that is actually intelligent then?
Re:our morality (Score:2)
(And if you think it isn't a problem because it isn't a "robot" -- i.e., it is immobile and has no manipulators -- well, it's connected to the net, ain't it?)
Re:our morality (Score:3, Funny)
They probably did this because it kept telling them to f*ck off.
-Sean
Re:our morality (Score:4, Insightful)
It sounds to me like this is what they were trying to teach Cyc... to have respect for the phenomenon of consciousness; isn't this the source of morality? This same concept is what CREATED the myth of an afterlife and a G-d, not the other way around.
Re:our morality (Score:2)
Afterlife and punishment are separate issues from morality, although they may correlate in practice. Where you got the notion of a necessary connection between the two being drawn, I don't know. It seems like a straw man to me.
Morality must however be based on transcendent authority (i.e. God), otherwise deontic propositions have no truth-value.
Re:our morality (Score:2)
I'm not sure I'd know a 'deontic proposition' if it came up and bit me, but let me attempt to describe a basis for morality that does not depend on the existence of God:
It seems to me that invoking God is unnecessary... we have plenty of justification for morality in that moral behaviour is better for the condition of mankind than immoral behaviour would be. (Not that I am defending any particular system of morality... I'm just saying the idea of having morality is sound, even if God isn't involved)
Re:our morality (Score:2, Insightful)
Are you implying that the belief in an afterlife and punishment in it is humanity's only motive for being good? That is not the case, as there are quite a few of us who don't believe in such a thing.
Re:our morality (Score:2)
That's some morality that I would insist be applied to all AIs starting now.
Sure, there's little an AI can do now to harm a human, but better to start thinking about encoding it too early rather than too late.
Re:our morality (Score:2)
Of course there's such a thing as Silicon Heaven! Where do you think all the calculators go?
Re:our morality (Score:2)
Most humans I know don't have good morality, so whose morality are we going to teach it? Bill Gates' morality is vastly different from mine.
Instead of teaching it morality, simply create a set of rules which the computer can never break.
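Something like hard constraints checked before anything learned gets a say -- a hypothetical sketch (the action fields are made up for illustration):

```python
# Inviolable rules: every action is screened by checks that have no
# override path, independent of whatever "morality" the system learns.
HARD_RULES = [
    lambda a: not a.get("harms_human", False),
    lambda a: not a.get("ignores_shutdown_order", False),
]

def permitted(action):
    # All hard rules must pass before the action may execute.
    return all(rule(action) for rule in HARD_RULES)

print(permitted({"name": "open_pod_bay_doors"}))                 # True
print(permitted({"name": "vent_airlock", "harms_human": True}))  # False
```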
"good" can be performed by atheists (Score:4, Informative)
A lot of us humans are skeptical about afterlife and punishment for ourselves, let alone machines. Some of them [visi.com] include:
-Thomas Paine
-James Madison
-Charles Darwin
-Abraham Lincoln
-Andrew Carnegie
-Mark Twain
-Thomas Edison
-Sigmund Freud
-Joseph Conrad
-William Howard Taft
-Marie and Pierre Curie
-Robert Frost
-Einstein
-Alfred Hitchcock
-H.P. Lovecraft
-Hemingway
-Walt Disney
-George Orwell
-Joseph Campbell
-Robert Heinlein
-Richard Feynman
-Isaac Asimov
-Carl Sagan
-John Lennon
-Ayn Rand
So why don't we all go out and start our own Nazi reichs, free from the threats of hell and purgatory, or whatever your dogma threatens? There are many reasons, and many different philosophies to back them up. Mine personally is a form of utilitarian ethical calculus, which is an ethos that's entirely theology-independent. Others have different reasons. What it boils down to is: we just don't. As you can see, the point is that you don't need extortion to get people to be "good."
As for imposing values on an AI, remember that what we have now is just a collection of common-sense facts. The program can't do anything with them without some sort of programmed goal. If you want to instill values into the program, they come part and parcel with the program's goal. Give it a "good" goal, and you have a virtuous AI. Tell it to kill all the Jews, and the computer is "evil." Let it pick, and it will have no criteria with which to choose, unless you give it some criteria, which is the same as making the decision for it.
Re:our morality (Score:2, Funny)
Re:our morality (Score:2)
There might not be, but it does not have to know that. Tell it there is. (I don't know what it would be like. Floating around on clouds and calculating pi to infinity?)
Anyhow, there is no evidence that belief in an afterlife motivates *humans* to be "good". Most prisoners believe in some deity, but that did not stop them.
Just don't put greed and selfishness and power trips into the machine to begin with.
However, probably nobody will buy it. PHBs want their employees to be sleazebags like them.
The first robot/AI that is not a "team player".
That is another milestone that Turing left off the list.
Re:our morality (Score:2)
Exactly!
"No Silicon Heaven? Preposterous! Where do all the calculators go?"
Re:our morality (Score:2)
But even if you could make everyday decisions without an underlying sense of morality, your ability to understand the world of humanity would be severely limited without it.
Re:THINKING = EYEBALL FOR CONCEPTS (Score:2)
But concepts aren't generated - the thought of a triangle isn't arbitrary (although its specific representation may be) - it is a REAL thing.
If this weren't so, there would be no grounds for any understanding at all between entities, if all things were independently and arbitrarily contrived.
Some people believe they manufacture their own concepts, but then where would the information for what we know about the world derive from? It would have to seep into us somehow, from what a thing is into our understanding ABOUT it. This is the basis of the 'Symbol Grounding Problem'.
But that isn't necessary, because cognition (as a real process which we experience) completes the perception - the concept is that part of the given which isn't revealed to physical senses, but only to the pondering intellect.
Thinking is an eyeball for concepts.
This machine builds up a database of what people have 'seen'.
It's a 'concept logger'.
best regards,
john [earthlink.net]
]:->
--| IS THE BRAIN A DIGITAL COMPUTER? |-----
the answer given by a Cognitive Scientist (John Searle) is:
'THE BRAIN, AS FAR AS ITS INTRINSIC OPERATIONS ARE CONCERNED, DOES NO INFORMATION PROCESSING... IN THE SENSE OF 'INFORMATION' USED IN COGNITIVE SCIENCE, IT IS SIMPLY FALSE TO SAY THAT THE BRAIN IS AN INFORMATION PROCESSING DEVICE.'
John Searle, Cognitive Scientist [soton.ac.uk]
SUMMARY OF THE ARGUMENT:
This brief argument has a simple logical structure, and I will lay it out:
1. On the standard textbook definition, computation is defined syntactically in terms of symbol manipulation.
2. But syntax and symbols are not defined in terms of physics. Though symbol tokens are always physical tokens, "symbol" and "same symbol" are not defined in terms of physical features. Syntax, in short, is not intrinsic to physics.
3. This has the consequence that computation is not discovered in the physics, it is assigned to it. Certain physical phenomena are assigned or used or programmed or interpreted syntactically. Syntax and symbols are observer relative.
4. It follows that you could not discover that the brain or anything else was intrinsically a digital computer, although you could assign a computational interpretation to it as you could to anything else. The point is not that the claim "The brain is a digital computer" is false. Rather it does not get up to the level of falsehood. It does not have a clear sense. You will have misunderstood my account if you think that I am arguing that it is simply false that the brain is a digital computer. The question "Is the brain a digital computer?" is as ill defined as the questions "Is it an abacus?", "Is it a book?", or "Is it a set of symbols?", "Is it a set of mathematical formulae?"
5. Some physical systems facilitate the computational use much better than others. That is why we build, program, and use them. In such cases we are the homunculus in the system interpreting the physics in both syntactical and semantic terms.
6. But the causal explanations we then give do not cite causal properties different from the physics of the implementation and the intentionality of the homunculus.
7. The standard, though tacit, way out of this is to commit the homunculus fallacy. The homunculus fallacy is endemic to computational models of cognition and cannot be removed by the standard recursive decomposition arguments. They are addressed to a different question.
8. We cannot avoid the foregoing results by supposing that the brain is doing "information processing". THE BRAIN, AS FAR AS ITS INTRINSIC OPERATIONS ARE CONCERNED, DOES NO INFORMATION PROCESSING. It is a specific biological organ and its specific neurobiological processes cause specific forms of intentionality. In the brain, intrinsically, there are neurobiological processes and sometimes they cause consciousness. But that is the end of the story.
John Searle, Cognitive Scientist [soton.ac.uk], 'Is the Brain a Digital Computer'
http://www.cogsci.soton.ac.uk/~harnad/
--
entropy (Score:5, Funny)
Why not just ask it how to reverse the entropy flow?
Doesn't that mean "not true"? (Score:2)
Steve: Really??!!
John: Psych!
Pronunciation (Score:3, Funny)
This is just great. Pronunciation keys using silent P's.
anti-intelligence (Score:3, Funny)
I seriously hope they aren't going to allow George W. Bush to input any intelligence [msn.com] into this thing.
Re:anti-intelligence (Score:2)
slashdot common sense (Score:3, Interesting)
Re:slashdot common sense (Score:5, Insightful)
Re:slashdot common sense (Score:2)
For instance, two people with racial or religious bias could input hate speech into the db. "Green people are evil while orange people are good," and the other person writes "Orange people are evil, and only green people are good."
How would Cyc deal with fact or opinions that directly conflict each other? Creationism vs. evolution? Political ideology? Star Trek vs. Star Wars?
I think this project or one like it would actually have a better shot if ONLY ONE person was responsible for teaching the AI. The AI would closely approximate the opinions, life experiences, and even the mistakes that shape a life.
The downfall of a One Teacher approach is that people of differing opinions will be quick to dismiss a result from the AI they do not like. Two AIs from different teachers may not be able to agree with each other, ever.
But an AI with many teachers may not be able to rationalize conflicting information. It may be incapable of agreeing with itself.
Re:slashdot common sense (Score:2)
Re:slashdot common sense (Score:2)
Cyc Asking if it is Human (Score:5, Insightful)
Its artificial self-awareness may be prejudiced by the programmers to imitate self-awareness, or in this case may merely be a surprising juxtaposition of semantics amid otherwise ordinary pairings, rather than an actual implementation of self-awareness.
In other words, it may now know that Cyc is not human, but it likely has no idea that it is Cyc.
--Blair
Re:Cyc Asking if it is Human (Score:2)
1. "I wonder if it was trying to figure out where to catalogue itself". It may not know that Cyc is "itself", only that Cyc is a computer program for making correct associations.
2. Cyc makes categorizations, and asks questions to figure out about otherwise untested associations. Again, it could have asked if Cyc was human right after asking if a football is a gourd. The programmers could be prejudicing the program into encoding anthropomorphic characteristics. Or they could have told it that it had feet (on the computer it's in). Any slight connection that it needs more information on, it could ask questions about, to fill in the links.
--Blair
Re:Cyc Asking if it is Human (Score:2)
Implying through these cute facts that Cyc is becoming self-aware is misleading. "Is Cyc human?" has syntactic and semantic ambiguities. The person who wrote the code to create that sentence from the database* of objects and linkages would understand implicitly what the question means, and might not fall into the trap of thinking that the running program just understood itself.
It could as easily know it's the football in the "Is a football a gourd?" question, which is to say it could not.
I suspect if you ask Cyc "Do you..." it won't know where to go to figure out who "you" is and parsing it as "I". But maybe someone programmed that identity inversion in, and it's mimicking self-awareness even more insidiously than I suspect.
--Blair
* - it's a database. All data in computation is a database. "Database" categorizes all means of storing information, not just rectangular arrays.
Old news (Score:5, Informative)
I have been following this thing for at least 5 years, and they have continually been just a few years away from real-world applications. One of the things they have been talking about for a long while was Cyc approaching the ability to "read" for itself, and gather new information for its database from the web, newspapers, or any other authoritative source. They've been talking about it for a long time and it hasn't happened yet.
It is a very interesting application, but will probably never amount to anything near human intelligence - a very versatile expert system at best.
-josh
Real Soon Now (Score:2)
Cyc is a wonder to behold. Not the technology, but the business side. It is a perpetual funding machine. How many times will investors hear, and believe, "just another $10 million and Cyc will be [insert favorite milestone here], and then the commercial possibilities will be limitless. Get in on the ground floor of this exciting opportunity now!"
It reminds me a lot of the various religious loonies predicting the return of the messiah. They're always wrong, but that doesn't prevent more predictions being made and more people believing in those predictions.
Re:Real Soon Now (Score:2)
I ask myself a similar question every day...
How many more suckers could there possibly be who will believe they can:
I guess some people just really want to believe. Maybe a new sucker really is born every minute?
Re:Old news (Score:5, Funny)
Maybe Cyc won't be able to differentiate The Onion's news articles from real news either...
"When asked, Cyc wasn't sure which band 'ruled.' Having compiled millions of fan sites for bands as diverse as Journey, N*Sync, Black Sabbath, and some local Chicago garage band by the name of 'shit stew, Cyc was deadlocked with millions of conflicting teenaged opinions.
Re:Old news (Score:2)
Oh yeah, I can just see Cyc telling its programmers that it is working on losing 60 lbs in 30 days and MAKING MONEY FA$T.
If it's allowed to scour the net for long enough, how long until it asks "Daddy, what's a money shot?"
heh.
Fross
Re:Old news (Score:2)
I remember a story or two on Slashdot about it before also.
Re:Old news (Score:2)
Think of the common sense database more as a FILTER for screening erroneous, irrelevant, or inappropriate information stored in other databases. Also, the article mentions "annotating" information with yet more information.
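Roughly this shape, as a minimal sketch (the sanity checks here are invented, not from the article):

```python
# Common-sense assertions used as a FILTER: they screen records coming
# out of some other database rather than answering questions themselves.
COMMON_SENSE = {
    "age": lambda v: 0 <= v <= 130,        # nobody is 2000 years old
    "height_m": lambda v: 0.2 <= v <= 2.8, # nobody is forty feet tall
}

def plausible(record):
    # A record survives only if every field it has passes its check.
    return all(check(record[field])
               for field, check in COMMON_SENSE.items()
               if field in record)

rows = [{"name": "Ann", "age": 34}, {"name": "Ra", "age": 2000}]
print([r["name"] for r in rows if plausible(r)])  # ['Ann']
```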
Weak at theory. (Score:2, Insightful)
Big innovation: killing the problems of AI by sheer computational force.
Re:Weak at theory. (Score:3, Interesting)
- AI needs to have its capabilities defined and data manually entered in, so that it can do what an AI needs to do
- AI needs to be able to learn, so that it can learn what an AI needs to do. A smart AI that 'knows' nothing is just a big paperweight.
Roughly, at any rate.
Both ideas have merits. Babies, for example, learn by association, and by occasionally trying stuff out and making assertions based on observations. However, they also come equipped with the hardware (wetware) capable of handling this.
I think that getting both parts right will be useful, so yes, it is (or might be) a big deal.
Lastly, what do you want to use the computation force for? Write down the equations and calculations now that will yield a successful AI, if it's that damned easy. You can't, because designing it is more difficult than throwing expensive hardware at it.
--
Try translating 'Mensa' from Spanish to English.
AI Class (Score:2)
It is pretty impressive that they were able to get 1.4 million knowledge representations into this system. Like a child, it will learn everything that is fed into it, whether it is good or bad. As the article mentioned, they had to teach Cyc that there are certain things (such as sex terms) that are salacious and should not be mentioned in public.
Re:AI Class (Score:2)
"Hannibal ate rice with Clarice" [it's actually a popular sentence used to demonstrate ambiguity in context-free grammars]
Now that sentence can be read as Hannibal and Clarice sharing a rice dinner, or as Hannibal eating rice accompanied by a side of Clarice.
You are right, what I meant to say about Legalese is that it is ambiguous to common people, but to the people that understand it, it is often quite clear. But because of this disparity, lawyers can utilize the language of the law to achieve something that is not in the spirit of the law. As you said, that is why it's so complex.
Re:AI Class (Score:2)
You should embiggen your vocabulary.
Cyc asked if it was human over 10 years ago. (Score:4, Insightful)
Basically, Cyc finds questionable conclusions by reasoning backwards, then asks humans for confirmation. A decent strategy, when you consider that the structure of common human knowledge is built to work for people with less-than-perfect logic.
The exchange went something like:
Datum: Humans are intelligent.
Datum: Cyc is intelligent.
Query: Cyc is a human?
Not in natural language, though, but its custom data language.
That, to me, is the biggest weakness of the system. IMHO, tying the data to a natural language, or to the real world in any other way, will take as much work as building up the knowledge directly tied to a natural language. This elaborate, detached structure is basically wasted effort, castles in the clouds, which is why they've had such a hard time applying it to the real world.
Well, no... (Score:5, Insightful)
Datum: Members of the class of humans are intelligent.
Datum: Individual entity Cyc is intelligent.
Query: Individual entity Cyc is member of the class of humans?
It's not a direct logical conclusion, but it's a question worth asking, which is what the programmers were shooting for.
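As a toy illustration of that distinction (a made-up knowledge base, not CycL): a shared property never concludes membership, it only licenses a question for a human to confirm or deny.

```python
class_properties = {"human": {"intelligent", "mortal", "has feet"}}
entity_properties = {"Cyc": {"intelligent"}}

def membership_questions(entity):
    # Ask about class C whenever the entity shares at least one property
    # with C; the answer is left to a human, never asserted.
    for cls, props in class_properties.items():
        if entity_properties[entity] & props:
            yield f"Is {entity} a {cls}?"

for question in membership_questions("Cyc"):
    print(question)  # Is Cyc a human?
```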
Don't get me wrong, I think Cyc was a good academic exercise, a worthy experiment, and it will pay off for the field in the long term. I don't think the project is generating a practical system, though. Some investors are getting royally screwed, and it's being taken to an insane stage of development.
MULE . o O (The carrot's only a yard in front of me, so that means it's only two or three steps away!)
Investment option (Re:Well, no...) (Score:2)
Well, if the investor did not understand that it is a high-risk, long-term stock, then they deserve the stress.
What is more likely than Cyc becoming a smart machine on its own IMO would be using its knowledgebase in conjunction *with other* techniques. It is the only (big) KB around. So if multi-technique AI goes big, then investors may be sitting pretty.
I am even considering investing a little in it. A better deal than the lottery.
Re:Well, no... (Score:2)
That really is a fault early in the system, because the number of characteristics present to be analyzed is quite small. I'm not sure exactly how they designed the language, but it most likely requires (or at least should require) a certain percentage of characteristics to be identical for a link of that type to be recognized. Basically, we have:
- Humans are intelligent
in the database. An analysis of the database would conclude that anything with the sole characteristic of being intelligent is human, because its characteristics match the characteristics of humans 100%. Had the programmers added 99 more human characteristics that didn't belong to Cyc, and then inputted that Cyc is intelligent, Cyc most likely wouldn't have asked if Cyc was human, because the characteristic match between Cyc and humans would only be 1%. This then begs the question: what percentage match is necessary to draw links? And should certain characteristics be weighted more heavily than others? (e.g. John lives in the U.S., Mike lives in the U.S. -- are they related? versus John lives in a house at 555 Main St in Seattle, WA and Mike lives in a house at 555 Main St in Seattle, WA -- are they related? Obviously the second set of characteristics indicates with a higher degree of confidence that John and Mike are related, although they aren't necessarily.)
Had they spent 3 years inputting information about humans, and then 3 years inputting information about Cyc, I am sure Cyc would not have asked if Cyc was human.
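A sketch of that weighted-overlap idea (the weights and threshold here are invented for illustration):

```python
# Score an entity against a class, counting specific characteristics
# more heavily than generic ones like "intelligent".
WEIGHTS = {"intelligent": 1.0, "has DNA": 5.0, "address: 555 Main St": 10.0}

def match(entity_props, class_props):
    total = sum(WEIGHTS.get(p, 1.0) for p in class_props)
    shared = sum(WEIGHTS.get(p, 1.0) for p in entity_props & class_props)
    return shared / total

human = {"intelligent", "has DNA"}
print(match({"intelligent"}, human))             # ~0.17: too weak to ask
print(match({"intelligent", "has DNA"}, human))  # 1.0: worth asking about

ASK_THRESHOLD = 0.5  # the open design question: where do you set this?
```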
Question != Conclusion (Score:3, Insightful)
It's a logical mistake to think that that was a logical mistake. Don't confuse a question with a conclusion. Using your example, it would be wrong to conclude that John is Peter. However, it was not a conclusion but a question, and a valid one at that. You may not believe that Cyc is intelligent, but to claim it is using poor logic in this example just shows your own lack of same.
OpenCyc project on SourceForge (Score:5, Informative)
--Cycon
Cyc... oh boy, this again (Score:3, Insightful)
I think there are, in general, probably two ways we could hope to achieve "artificial intelligence" (whatever the heck that is): First, by some form of duplication of what's already there. For example, by digitizing an entire working animal/human brain. This would not require us to understand the workings of the greater structure of the brain, just the little parts that make it work. The second is by figuring out what sort of simple, fundamental bits are necessary to create a digital "brain" capable of learning and improving in a way that would enable it to eventually become "intelligent" (again, we would have no understanding of the final "intelligent" structure, only the methods that created them). I think Genetic Programming, while somewhat interesting and possibly even useful, is not the key. It has the same concept in mind though, I believe.
But what do I know. Clearly not enough to dupe enough investors to pay for my silly musings.
Self-generating rules (Score:2, Insightful)
The method of building Cyc is pretty limited at this point because it relies on human intervention to create the 'rules of common sense'. (A reason that open source is so helpful to the project)
Until Cyc is allowed to self-generate rules, this will limit Cyc's growth to the ability of humans to feed it information one fact at a time. This will greatly limit the database's access to less popular or more technical topics and will slow down the process of learning.
Of course, then there's the problem of context -- determining if information is satire, fiction, etc. One way around the problem of context might be to feed Cyc different channels of information indicating 'this is history, this is fiction', etc., and then, when similar ideas or facts occur in several documents, to remember them as rules. This would allow the database to process current news, etc., and then ask for human intervention when a conflict is found.
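A rough sketch of that channel idea (the tags and thresholds are invented; this is not a real Cyc interface):

```python
# Claims arrive labelled with a channel; fiction is never promoted,
# claims corroborated across documents become rules, and contradictions
# are queued for human intervention.
from collections import Counter

sightings = Counter()
accepted = {}     # (subject, predicate) -> value
for_review = []   # conflicts awaiting a human

def ingest(subject, predicate, value, channel):
    if channel in ("fiction", "satire"):
        return
    key = (subject, predicate)
    if key in accepted and accepted[key] != value:
        for_review.append((key, accepted[key], value))
        return
    sightings[(key, value)] += 1
    if sightings[(key, value)] >= 2:   # seen in two separate documents
        accepted[key] = value

ingest("water", "feels", "wet", "news")
ingest("water", "feels", "wet", "history")
ingest("water", "feels", "dry", "news")  # flagged, not silently absorbed
print(accepted, for_review)
```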
Re:Self-generating rules (Score:2)
I think it'd be cool to teach Cyc to program. "A bubble sort is less efficient than a quicksort."
Perhaps it could fix all Microsoft's bugs, without access to the source!
Oh, btw, there are another couple of projects similar to Cyc:
OpenMind [openmind.org] and MindPixel [mindpixel.com] .
Sex is salacious? (Score:2, Interesting)
Well, I think we now know how the doomsday Terminator/Matrix scenarios evolve -- AI programmers too lazy to teach their pet about sex, religion and morality.
Re:Sex is salacious? (Score:2)
Fark.com (Score:3, Insightful)
Hope Cyc is not seeded with Internet "Facts" (Score:2, Funny)
Cyc would own 10,000 credit cards and undoubtedly have a gambling problem. 10 years later, Cyc would be strung out on crack and living in a whorehouse in Central America.
Lycos rejected it (Score:3, Informative)
The job ended because of turnover at Lycos after it was bought by Terra Networks. Cyc showed promise and could be brought back, though it can't improve search engines all by itself, said Tom Wilde, Terra Lycos' general manager of search services.
Lenat has been announcing that Cyc will become "intelligent" Real Soon Now about every two years for the last decade. Nobody believes him any more.
Someday that database may be useful, but not with a predicate-based world model. I regard Cyc as the ultimate answer to "Will rule-based expert systems ever become intelligent?" The answer is "no".
Cyc is not AI (Score:4, Informative)
Intelligence is about finding the differences between things that are the same, sameness between things that are different, and adapting to new situations fluidly. All of these are impossible with large collections of rules.
I believe that machines may think someday, but it won't come from projects like Cyc - it'll be more from the neural network approach.
Re:Cyc is not AI (Score:2)
Nice summary. Cyc and programs like it "learn" by adding exceptions and tweaks and special cases to their existing rules, i.e., new rules. (Some people operate like that too -- consider a gambler who keeps coming up with "rules" about his lucky shirt that only works on Thursdays if he stirs his coffee clockwise...)
True intelligence has more (IMHO) to do with limiting the total number of rules by rewriting the rules as necessary to fit a new model. (Classic example: Kepler's use of ellipses to describe planetary orbits instead of the prior "circles with cycles and epicycles".)
(Of course, given the above, it appears obvious that many people are operating on artificial intelligence rather than the real thing.)
Inspiration strikes! (Score:2)
If only someone had thought of it sooner...
Fundamental contradiction. (Score:2)
Though it maintains a collection of integrated common sense, it is without the common sense of practical, productive use.
I suspect the project has partially gone public in the hope that a bit of common-sense use will be found/input. At which point you can be sure it will then be extracted from the open public version and put, in proprietary form, into the commercial/private version, ensuring practical use is limited to select and paying users.
Or how to charge for common sense.
Makes me wonder... (Score:2)
When you "teach" somebody (or something) they usually do not remember it or understand it right away. When you tell or command someone they will do it. Learning something takes a while where as commanding something (like typing a command in a database) takes effect immediately.
This whole common sense thing bugs me too. Some people think that leaving a rusty car on blocks in the front yard is totally acceptable. Some people drive up and down city streets with their car stereos cranked. How is it going to determine if abortion is right or wrong? Is it going to depend on the person inputing the information?
Lots of questions to be answered here.
Something else to think about ... (Score:5, Insightful)
A lot of the comments I've read so far are missing something. Yes, it is just a giant fact-base in an expert system. And yes, that will exhibit human-esque "reasoning". And yes, a good argument can be made that this isn't "true" intelligence, and it won't develop true sentience.
Imagine the military and educational benefits of such a system. The US military is getting their money's worth, and they know it. Imagine Cyc, with its full fact-base, on a device carried by every soldier. "Cyc, how do I fix this problem on an Apache helicopter?" "Cyc, where is the fuel tank on this specific enemy vehicle?" Can you imagine being an inquisitive child and having one of these things at your disposal? "Cyc, how does this work?" "Cyc, what is Fourier analysis?"
This sort of system is a really good system for organizing and relating statements and presenting them in such a way that extraneous, unrelated results can be easily eliminated and related results can be located quickly. If it can be made to derive statements for its fact-base by reading anything available, then it would become almost like an Oracle of Knowledge. Eventually, with some years of refinement, it may be possible to ask the engine difficult theoretical questions ("How can we improve on the strength of carbon nanotubes?") to which it would respond with an experimental procedure (as the answer is not immediately clear) to discover more facts toward the solution to the problem...
When you consider this, it doesn't really matter if it has "true" intelligence or not. We don't have to argue the finer points on reasoning, intelligence, etc. No matter what, it will be a system that human intelligence can use to extend its own reasoning, and with that, I think, we will be able to make great bounds forward in education and scientific discovery, because we will be able to relate such broad and deep pools of knowledge.
Wendell
Re:Something else to think about ... (Score:2)
Speaking for myself (which I do often) I would like to be able to dictate something to my computer, tell it to send a copy to Bob and Alice, change all the red squares to blue circles in a range of documents, remove the commercials from this TV show, and call Alice if she's at work, and send her a card at home...
A computer is capable of all these things, sure; I'd like to give orders and have the computer write the code for the script, or task. This would be the truly useful thing for me.
I read about Cyc back when it was just getting started; it would be nice if these kind of everyday applications were usable.
Re:Something else to think about ... (Score:2)
On the Cyc company's website, one of the projects they're working on is implementing a system exactly like the one you described. Current computer software is capable of doing all of those things, but you have to do them all manually, one at a time, and all through separate interfaces.
Using a Cyc-based front end as your interface brings about the ability for your computer to actually understand exactly what you mean when you tell it to do something... it uses its database to remove ambiguities from the orders you give it.
One of my life-long dreams is to have a house (or at least a single computer) that takes orders in the same manner as the Enterprise-D computer and give useful information back in return.
One application in particular that I'm looking forward to: I imagine a future where, if I'm learning a new programming language, I can ask the computer to bring up a short example of syntax for a particular piece of code, or to display the prototype for this function or that. My children might have a program for studying schoolwork where the computer might prompt them for answers and tell them if they're wrong. If it guesses that the child doesn't understand a particular topic, the program would give them a short overview of it and ask questions afterwards about how it ties into other areas of the subject.
*sigh* I can't wait for the future...
Re:Something else to think about ... (Score:4, Interesting)
For instance, they deployed the technology to an image library owned by a news company. The company had lots of images, all with different captions. The thing was, there was no fixed system for the captions; they were just short English descriptions of what was in the photo.
So Cyc analysed all the captions and turned them into CycL (its own logic language). It then used its rudimentary natural language capabilities to figure out equivalents, so if you asked for "frightened child" it would match "girl with gun held to her head" even though they contained no equivalent words. Pretty clever stuff, though they're a long way from being able to make it formulate sentences itself.
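A very loose sketch of that matching step (this is NOT CycL; the lexicon and the single inference rule are hand-invented): map captions and queries to concept sets, so a query can hit a caption that shares none of its surface words.

```python
LEXICON = {
    "frightened": {"fear"},
    "child": {"young person"},
    "girl": {"young person", "female"},
    "gun": {"weapon"},
    "held": {"threat"},   # crude stand-in for real parsing of threats
}

def concepts(text):
    found = set()
    for word in text.lower().split():
        found |= LEXICON.get(word, set())
    if {"threat", "weapon"} <= found:
        found.add("fear")  # hard-coded rule: an armed threat implies fear
    return found

query = concepts("frightened child")
caption = concepts("girl with gun held to her head")
print(query <= caption)  # True: the caption covers every query concept
```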
17-year-old story!? (Score:4, Funny)
Cyc? (Score:2)
Maybe I'm too much of a theatre tech geek.
Turing test.... (Score:2)
Another use would be to prime a neural net with a set of "known facts" and see how well the net takes off from there.
Just because a tool on its own isn't particularly useful doesn't mean that it will not be useful as a component of some other tool.
-Rusty
A little more info... (Score:2)
Lenat and bogosity; Cyc fictionalized in Galatea? (Score:3, Interesting)
Consulting The Jargon File entries for bogosity [tuxedo.org] and micro-Lenat [tuxedo.org], we see that the microLenat is the everyday unit of bogosity, and that it is named for Doug Lenat, whose project Cyc is. I tend to agree with Reid, myself.
ob book: For a literary treatment of a connectionist machine that may or may not resemble Cyc, see Richard Powers' _Galatea_2.2_ [amazon.com].
I saw this in Discover Magazine (Score:2)
Actually, in that article, it had already asked if it was human.
The Discover article was titled "At Last: A Computer as Smart as a Four-Year Old," possibly without the "At Last:" part.
Common Sense Knowledge (Score:3, Insightful)
The interconnectedness of human cognition is so astronomically complex as to be intractable to formal approaches. This realization immediately makes the use of symbolic knowledge representation approaches to creating human-like common sense in a machine look rather silly. That 25 million dollars of taxpayers' money went into this Cyc thing is a testament to the effectiveness of the propaganda machine of the GOFAI community. Bravo!
Re:Common Sense Knowledge (Score:2)
Forget the sentimental crap, and concentrate on the core problems you outlined... finding your way around an unfamiliar neighborhood, for instance.
Why can't we simulate this? We could probably even explain to Cyc that this wasn't real, but only training, and that most of the principles would also apply to a human in the real world. Recognizing a face, and even music, should not be impossible either. Hell, we might even have it watch football or soccer and analyze the players... sure, it is only armchair sports, but then that is all most people do themselves.
Direct sensory experience isn't as necessary as you suggest, and maybe by the time we finish preparing the thing for the real world, we'd also be able to give it the body it needs for such a journey.
As for the money spent/wasted... I'm simply not knowledgeable enough to know if it is indeed folly or not. But there is a difference between pursuing a dead-end course of research and defrauding the government.
Re:Common Sense Knowledge (Score:2, Insightful)
The problem of recognition of smells, faces, music, etc. is nothing more than the problem of classification of objects. Computers are better at recognizing faces than humans. Dogs are better than humans at recognizing scents (is that intelligence?). As a critic of AI you are going to have to raise the bar a bit higher than it used to be, as critics of AI did when machines first started playing chess well (they decided that chess-playing ability wasn't such a good test of intelligence after all). You may as well admit that your definition of intelligence is "that which a machine can't do".
They should let two Cycs talk to each other (Score:2, Interesting)
Re:They should let two Cycs talk to each other (Score:2)
Cliff notes for the machine.... (Score:2)
Will Cyc ever become intelligent? Unlikely, in my view. However, what if we had a human-level AI right now? Without the data Cyc has, it would STILL fail the Turing test, simply because it would not be able to discuss day-to-day things intelligently.
There's a book called Alternities. The premise is the standard "multiple timelines," except that the timelines in question diverged about 50 years ago. One timeline has access to the others, and sends agents over to get technologies that were developed in the other timelines.
One agent's cover is blown because all his briefing said about a major cultural event was "A nuclear incident" - the incident in question was a terrorist attack like Sept. 11, only with a nuke. It changed the whole culture, but he didn't know it.
Like that agent, a machine intellect would be at a loss in our world without some basic information - how would a computer that had never seen water know it was wet otherwise? How would it know skinned knees hurt?
The only other solution is the Infocom "A Mind Forever Voyaging" approach - create your AI as an infant, and simulate the real world around it as it "grows up".
Add this to the common sense list. (Score:3, Funny)
This was a lesson bitterly learned by the Warsaw weekly 'FART' back in the early 90's. Fart means 'stroke of luck' in Polish, but their luck ran out pretty fast.
Not to mention the marketing team behind the Chevy Nova ['won't go'], Latin American division.
Cyc: survivor of 1980s A.I. mania (Score:3, Insightful)
Although little practical progress was made in A.I., there were some decent spinoffs. The first workstations and first personal graphics computers came from A.I. efforts at Xerox, TI, Symbolics and others. Soon after, Apollo, HP, and Sun followed with more generalized workstations using this technology. And then the Apple Macintosh and the Thieves of Redmond.
Richard Stallman was left unmolested in the emptied-out MIT AI Lab to develop his GNU tool family.
Cyc was part of the US government-industry A.I. research institute in Austin. Then it was privatized into its own corporation, hobbling along on government and private funding.
My first question would definitely be... (Score:2)
A few links (Score:2)
Cyc's corporate page [cyc.com] has links to many recent news articles [cyc.com], the OpenCyc project [opencyc.org], and other stuff of potential interest.
Headhunters Automated (Score:2)
I have encountered human recruiters who want things like 9 years of Java and web development experience.
Not AI, but a way to teach an AI. (Score:2)
Open source code for the Cyc project available (Score:2)
Re:websites (Score:2, Funny)
Interesting, very interesting.
-Captain John Sheridan
Speelin'. (Score:2, Offtopic)
Well.. (Score:2)