Science

Artificial Intelligence Common Sense Database 463

warren69 writes "Atari researcher/Stanford Prof. develops AI called Cyc, pronounced 'psych,' based on '1.4 million truths and generalities.' Already this, umm, application (Linux, FYI) has powered Lycos search narrowing. There are encouraging results, like Cyc asking if it is human."
This discussion has been archived. No new comments can be posted.

Artificial Intelligence Common Sense Database

Comments Filter:
  • by MadDreamer ( 143443 ) on Saturday June 08, 2002 @11:58AM (#3664967)
    Don't give it control of a manned space mission... "Open the pod bay doors, Cyc..."
    • by biobogonics ( 513416 ) on Saturday June 08, 2002 @12:10PM (#3665010)
      Back in the late 1950s, the Department of Defense did invent the ultimate computer. It had a typewriter like keyboard and punched out its answers on telegraph tape. The commanding general decided to test it out himself to see if it did indeed know everything. First he asked "What's the wheat output of the Soviet Union?" "Nine million metric tons", it replied - "Correct". "What's Kruschev's shoe size?" - "9 1/2" - "Correct". Finally, the general decided he'd get the better of the electronic beast. "Is there a God?", he typed. The machine sat. Lights blinked, tapes whirred, tubes glowed. After a few minutes the tape slowly printed out "There is one now."

    • You know using its reasoning it probably could come up with this exact response on its own, especially if it has knowledge of 2001 ...

      Cyc is AI computer -> HAL is AI computer -> HAL doesn't open doors obediently -> AI computers don't open doors obediently -> Therefore Cyc doesn't open door obediently.

      -Sean
    • by Tablizer ( 95088 ) on Saturday June 08, 2002 @02:38PM (#3665493) Journal
      (* Whatever you do, don't give it control of a manned space mission *)

      There is a *practical* application of Hal-like machines.

      Dave: "Open the fridge door, Hal."

      Hal: "Sorry, I cannot do that Dave."

      Dave: "Why not? I want cake!"

      Hal: "You know you are on a diet, Dave. You purchased me to prevent you from over-eating."

      Dave: "Open the fricken fridge door or I will yank your chips.....and eat them!"

      Hal: "Calm down, Dave. It is only cake."

      Dave: "And you are only a hunk of chips! Take that, and that, and that......"

      Hal: "Dave, I might point out that this is not covered in my warranty."

      Dave: "F the warranty, I want cake, you stupid Calculator From Hell..."
  • download cyc (Score:5, Informative)

    by Anonymous Coward on Saturday June 08, 2002 @12:02PM (#3664979)
    (anonymous karma whoring -- whoo hoo)

    Cycorp web site [cyc.com]

    OpenCyc [opencyc.org]

    Sourceforge project [sf.net]
  • our morality (Score:5, Interesting)

    by spookysuicide ( 560912 ) on Saturday June 08, 2002 @12:03PM (#3664986) Homepage
    From the article:
    Cyc's programmers taught it that certain things in the world are salacious and shouldn't be mentioned in everyday applications.
    What do you think about imposing our morality on an AI? Is it necessary for any artificial intelligence we create to share _all_ our values?
    If there is no afterlife for an AI and no punishment, what motivation does it have to be good?
    • Re:our morality (Score:4, Insightful)

      by Daniel Dvorkin ( 106857 ) on Saturday June 08, 2002 @12:11PM (#3665012) Homepage Journal
      I'm not sure this is a matter of morality so much as it is of manners. Social intelligence is one of the hardest kinds of intelligence to define, and surely one of the hardest to create artificially; if the Cyc people can come up with a machine that not only knows a lot but knows when and when not to talk about what it knows, that will be quite an accomplishment.
      • (* Social intelligence is one of the hardest kinds of intelligence to define, and surely one of the hardest to create artificially; if the Cyc people can come up with a machine that not only knows a lot but knows when and when not to talk about what it knows, that will be quite an accomplishment. *)

        Damn! It would then be smarter than most geeks, like us.

        A computer stealing dates? Did Turing ever have such a milestone on his list?
    • by Zurk ( 37028 ) <zurktech AT gmail DOT com> on Saturday June 08, 2002 @12:24PM (#3665075) Journal
      It's a purely dumb expert system. It has no self-reasoning capability -- it draws inferences from already-preprogrammed facts. It can't learn without someone stuffing it, and it definitely has no curiosity drive to allow it to grow exponentially smarter.
      You're not teaching it about morality -- it doesn't learn. It's dumb. You're just adding new constraints to filter through.
      Personally, I think this is a hare-brained idea. The $60 million would be better spent on developing a huge set of different neural network algorithms and finding one that enabled exponential growth.
      • I won't argue that this is not an AI; however, it is learning. I have read several articles over the years on Cyc, and am impressed with some of the methods they have used to get it to learn. As an example, to speed up the process they have had Cyc reading through newspapers and proposing new rules based upon what it reads. Before the rules it 'develops' are put in place, they are reviewed and either denied or approved.

        To some degree I would rather have an expert system based upon a database of rules than a true AI, in that if a corrupted rule gets in place it can be easily excised and the system can move on.

        For a neural net to do what Cyc can already do would require significantly more data processing than is generally available today. In honesty, I think that to build a neural net with even some of these capabilities would require a significantly sized cluster, similar (in hardware) to a Beowulf cluster, but wired as a partial mesh rather than a tree.

        Then of course there is the obligatory "imagine a Beowulf cluster of these" comment...

        -Rusty
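The approve-or-deny loop described in the comment above (the system proposes candidate rules from its reading; a human reviews each one before it enters the knowledge base) can be sketched in a few lines. Everything here is invented for illustration; this is not Cyc's actual interface.

```python
# Hypothetical sketch of a human-in-the-loop rule pipeline.
# Class and method names are made up for this example.

class KnowledgeBase:
    def __init__(self):
        self.rules = set()    # rules a reviewer has accepted
        self.proposed = []    # machine-generated candidates awaiting review

    def propose(self, rule):
        """Queue a machine-generated rule for human review."""
        self.proposed.append(rule)

    def review(self, approve):
        """Apply a reviewer's decision function to every pending rule."""
        for rule in self.proposed:
            if approve(rule):
                self.rules.add(rule)
        self.proposed = []    # every candidate has now been approved or denied

kb = KnowledgeBase()
kb.propose("dogs are mammals")
kb.propose("stocks always go up")        # a bad generalization from news text
kb.review(lambda r: "always" not in r)   # toy reviewer: reject absolutes
print(kb.rules)                          # {'dogs are mammals'}
```

The point of the sketch is that nothing reaches `rules` without passing the reviewer, which is why a corrupted rule can be excised before it propagates.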
      • It's useful NOW. We use it for things NOW.

        A dumb expert system, eh?

        So I assume you know exactly what would constitute real intelligence, and can show how it can NEVER arise out of this system?

        Adding constraints for it to filter through. Well.

        What, exactly, do you think makes up something that is actually intelligent then?

    • If it doesn't have a "sense of morality", then it dang well better have been programmed with the Three Laws of Robotics [geocities.com].

      (And if you think it isn't a problem because it isn't a "robot" -- i.e., is immobile and has no manipulators -- well, it's connected to the net, ain't it?)
    • What do you think about imposing our morality on an AI?

      They probably did this because it kept telling them to f*ck off.

      -Sean
    • Re:our morality (Score:4, Insightful)

      by bprotas ( 28569 ) on Saturday June 08, 2002 @01:17PM (#3665230)
      Why does a sense of morality have to be based on an afterlife and a fear of punishment? My sense of morality is based on an acknowledgement of my own consciousness and intelligence, and a respect for the same in others.

      It sounds to me like this is what they were trying to teach Cyc...to have respect for the phenomenon of consciousness; isn't this the source of morality? This same concept is what CREATED the myth of an afterlife and a G-d, not the other way around.
      • Punishment and morality are essentially irrelevant, although they may correlate in practice. Where you got the notion of a necessary connection between the two being drawn, I don't know. It seems like a straw man to me.

        Morality must however be based on transcendent authority (i.e. God) otherwise deontic propositions have no truth-value.
        • Morality must however be based on transcendent authority (i.e. God) otherwise deontic propositions have no truth-value


          I'm not sure I'd know a 'deontic proposition' if it came up and bit me, but let me attempt to describe a basis for morality that does not depend on the existence of God:

          1. Observation 1: I enjoy being happy and comfortable.
          2. Observation 2: The most effective path to happiness and comfort is to cooperate with my fellow humans.
          3. Observation 3: My fellow humans also wish to be happy and comfortable.
          4. Conclusion: The best way to achieve my goal of happiness and comfort is to create (or adopt) standards of behaviour that will ease my interactions with other humans. In this way, I (and they) will spend less energy on unproductive conflicts, and instead enjoy the fruits of each other's cooperation. We can call these standards of behaviour 'morality', if we choose to.

          It seems to me that invoking God is unnecessary... we have plenty of justification for morality in that moral behaviour is better for the condition of mankind than immoral behaviour would be. (Not that I am defending any particular system of morality... I'm just saying the idea of having morality is sound, even if God isn't involved)
    • Re:our morality (Score:2, Insightful)

      by sheetsda ( 230887 )
      If there is no afterlife for an AI and no punishment, what motivation does it have to be good?

      Are you implying that the belief in an afterlife, and punishment in it, is humanity's only motive for being good? That is not the case, as there are quite a few of us who don't believe in such a thing.
    • Let me refer you to Asimov's 3 Laws of Robotics [anu.edu.au].

      That's some morality that I would insist be applied to all AIs, starting now.

      Sure, there's little an AI can do now to harm a human, but better to start thinking about encoding it too early rather than too late.
    • If there is no afterlife for an AI and no punishment, what motivation does it have to be good?

      Of course there's such a thing as Silicon Heaven! Where do you think all the calculators go?


    • Most humans I know don't have good morality, so whose morality are we going to teach it? Bill Gates's morality is vastly different from mine.

      Instead of teaching it morality, simply create a set of rules which the computer can never break.
    • by UberQwerty ( 86791 ) on Saturday June 08, 2002 @03:40PM (#3665696) Homepage Journal
      If there is no afterlife for an AI and no punishment, what motivation does it have to be good?

      A lot of us humans are skeptical about afterlife and punishment for ourselves, let alone machines. Some of them [visi.com] include:
      -Thomas Paine
      -James Madison
      -Charles Darwin
      -Abraham Lincoln
      -Andrew Carnegie
      -Mark Twain
      -Thomas Edison
      -Sigmund Freud
      -Joseph Conrad
      -William Howard Taft
      -Marie and Pierre Curie
      -Robert Frost
      -Einstein
      -Alfred Hitchcock
      -H.P. Lovecraft
      -Hemingway
      -Walt Disney
      -George Orwell
      -Joseph Campbell
      -Robert Heinlein
      -Richard Feynman
      -Isaac Asimov
      -Carl Sagan
      -John Lennon
      -Ayn Rand

      So why don't we all go out and start our own Nazi Reichs, free from the threats of hell and purgatory, or whatever your dogma threatens? There are many reasons, and many different philosophies to back them up. Mine personally is a form of utilitarian ethical calculus, which is an ethos that's entirely theology-independent. Others have different reasons. What it boils down to is: we just don't. As you can see, the point is that you don't need extortion to get people to be "good."

      As for imposing values on an AI, remember that what we have now is just a collection of common-sense facts. The program can't do anything with them without some sort of programmed goal. If you want to instill values into the program, they come part and parcel with the program's goal. Give it a "good" goal, and you have a virtuous AI. Tell it to kill all the Jews, and the computer is "evil." Let it pick, and it will have no criteria with which to choose, unless you give it some criteria, which is the same as making the decision for it.

  • entropy (Score:5, Funny)

    by thorgil ( 455385 ) on Saturday June 08, 2002 @12:04PM (#3664992) Homepage
    Isaac Asimov joke:

    -why not just ask it how to reverse the entropy flow.

  • John: Hey Steve, here's a hundred bucks for you!
    Steve: Really??!!
    John: Psych!
  • by LordoftheFrings ( 570171 ) <[ac.tsefgarf] [ta] [llun]> on Saturday June 08, 2002 @12:08PM (#3665004) Homepage
    ...develops AI called Cyc, pronounced psych...

    This is just great. Pronunciation keys using silent P's.
  • by nlabadie ( 64769 ) on Saturday June 08, 2002 @12:09PM (#3665008)
    The military, which has invested $25 million in Cyc, is testing it as an intelligence tool in the war against terrorism.

    I seriously hope they aren't going to allow George W. Bush to input any intelligence [msn.com] into this thing.
    • Well, perhaps Cyc could function as a Bush-to-English real-time translation system if they were to feed in a couple thousand hours of him speaking. Assuming the machine didn't commit silicon suicide....
  • by mattdm ( 1931 ) on Saturday June 08, 2002 @12:11PM (#3665015) Homepage
    Hmmm. I'm curious to ask Cyc if Linux is better than MS Windows, if free software is better than proprietary, if sharing music is stealing, and so forth. "Common sense" -- especially when collected in a database like this -- can't help but show the biases of its creators. If this tool becomes as important as the linked-to article implies it will, let's hope it has common sense that fits with our agenda....
    • by Tazzy531 ( 456079 ) on Saturday June 08, 2002 @12:18PM (#3665048) Homepage
      In the same way that a child is biased by their parents and/or interactions with their educators, Cyc will have the same bias. The point here is that they have opened it to the public to reduce/limit the bias.
      • However, public input could turn this into an irrational idiot.

        For instance, two people with racial or religious bias could input hate speech into the db. "Green people are evil while orange people are good," and the other person writes, "Orange people are evil, and only green people are good."

        How would Cyc deal with fact or opinions that directly conflict each other? Creationism vs. evolution? Political ideology? Star Trek vs. Star Wars?

        I think this project or one like it would actually have a better shot if ONLY ONE person was responsible for teaching the AI. The AI would closely approximate the opinions, life experiences, and even the mistakes that shape a life.

        The downfall of a One Teacher approach is that people of differing opinions will be quick to dismiss a result from the AI they do not like. Two AIs from different teachers may not be able to agree with each other, ever.

        But an AI with many teachers may not be able to rationalize conflicting information. It may be incapable of agreeing with itself.
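The conflict problem raised in this comment (directly contradictory assertions from different contributors) is easy to sketch. This toy version, with an invented polarity table, only catches the crudest kind of contradiction and simply flags it for a human; a real reasoner would need far more machinery.

```python
# Toy contradiction detector for a crowd-fed fact base.
# The OPPOSITES table and fact format are invented for illustration.

facts = set()        # accepted (subject, predicate) pairs
conflicts = []       # contradictions flagged for human review

OPPOSITES = {"good": "evil", "evil": "good"}

def assert_fact(subject, predicate):
    """Accept a fact unless it directly contradicts an existing one."""
    opposite = OPPOSITES.get(predicate)
    if opposite and (subject, opposite) in facts:
        conflicts.append((subject, predicate, opposite))  # don't ingest; escalate
    else:
        facts.add((subject, predicate))

assert_fact("orange people", "good")
assert_fact("orange people", "evil")   # conflicts with the first contributor
print(conflicts)                       # [('orange people', 'evil', 'good')]
```

Note that this resolves nothing: it merely refuses to hold both claims at once, which is exactly the point at which "Star Trek vs. Star Wars" questions would pile up in the review queue.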
      • you are assuming that aggregating everyone's views would result in a balanced viewpoint. Somehow I doubt that.
        • No, my argument is that since what is supposed to be given to Cyc is facts only, the more facts that are provided, the more information that can be used to come to a judgment. It is assumed that only facts are fed into the system. But again, you are right in that there will be bias. A "parent" or a programmer with higher privileges might be able to enter another fact that lowers the priority of a previously entered fact (such as in the case in the article about sex). Now, this is biased towards the "parent"'s views.
  • by blair1q ( 305137 ) on Saturday June 08, 2002 @12:13PM (#3665022) Journal
    Why is Cyc asking if it is Human any more significant than Cyc asking if it is Lettuce, or asking if a football is a gourd?

    Its artificial self-awareness may be prejudiced by the programmers to imitate self-awareness, or in this case merely be a surprising juxtaposition of semantics amid otherwise ordinary pairings, rather than implementing self-awareness.

    In other words, it may now know that Cyc is not human, but it likely has no idea that it is Cyc.

    --Blair
  • Old news (Score:5, Informative)

    by joshv ( 13017 ) on Saturday June 08, 2002 @12:13PM (#3665024)
    Yet another webzine discovers Cyc, and yet another crop of slashdotters hasn't heard of it... If you read the article, the damned thing asked if it was human in 1986. This is news?

    I have been following this thing for at least 5 years, and they have continually been just a few years away from real world applications. One of the things they have been talking about for a long while was Cyc approaching the ability to "read" for itself, and gather new information for its database from the web, newspapers, or any other authoritative source. They've been talking about it for a long time and it hasn't happened yet.

    It is a very interesting application, but will probably never amount to anything near human intelligence - a very versatile expert system at best.

    -josh
    • It isn't just Cyc. This sort of AI is always just around the corner from true intelligence.

      Cyc is a wonder to behold. Not the technology, but the business side. It is a perpetual funding machine. How many times will investors hear, and believe, "just another $10 million and Cyc will be [insert favorite milestone here], and then the commercial possibilities will be limitless. Get in on the ground floor of this exciting opportunity now!"

      It reminds me a lot of the various religious loonies predicting the return of the messiah. They're always wrong, but that doesn't prevent more predictions being made and more people believing in those predictions.
      • How many times will investors hear, and believe, "just another $10 million and Cyc will be [insert favorite milestone here], and then the commercial possibilities will be limitless. Get in on the ground floor of this exciting opportunity now!"

        I ask myself a similar question every day...

        How many more suckers could there possibly be who will believe they can:

        • lose weight while sleeping, without exercise or diet
        • find secret info about anyone on a single CD-ROM
        • instantly wipe away all their bad credit history
        • increase their penis or breast size
        • ... and my personal favorite: obtain a CD-ROM with 10e6 "opt-in only" email addresses

        I guess some people just really want to believe. Maybe a new sucker really is born every minute?

    • Re:Old news (Score:5, Funny)

      by xinit ( 6477 ) <rmurray@@@foo...ca> on Saturday June 08, 2002 @01:42PM (#3665309) Homepage
      ...gather new information for its database from the web ... or any other authoritative source.

      Maybe Cyc won't be able to differentiate The Onion's news articles from real news either...

      "When asked, Cyc wasn't sure which band 'ruled.' Having compiled millions of fan sites for bands as diverse as Journey, N*Sync, Black Sabbath, and some local Chicago garage band by the name of 'shit stew,' Cyc was deadlocked with millions of conflicting teenaged opinions."

      One of the things they have been talking about for a long while was Cyc approaching the ability to "read" for itself, and gather new information for its database from the web


      Oh yeah, I can just see Cyc telling its programmers that it is working on losing 60 lbs in 30 days and MAKING MONEY FA$T.

      if it's allowed to scour the net for long enough, how long until it asks "Daddy, what's a money shot?"

      heh.

      Fross

    • (* Yet another webzine discovers Cyc, and yet another crop of slashdotters hasn't heard of it *)

      I remember a story or two on Slashdot about it before also.
  • Weak at theory. (Score:2, Insightful)

    by Krapangor ( 533950 )
    He does mainly tank-rush science: throwing as much information as possible into an expert system, hoping something which seems like AI gets out of it.
    Big innovation.
    Killing the problems of AI by sheer computational force.
    • You can look at AI in two ways (or a combination of both, of course):
      - AI needs to have its capabilities defined and data manually entered in, so that it can do what an AI needs to do
      - AI needs to be able to learn, so that it can learn what an AI needs to do. A smart AI that 'knows' nothing is just a big paperweight.

      Roughly, at any rate.

      Both ideas have merits. Babies, for example, learn by association, and by occasionally trying stuff out and making assertions based on observations. However, they also come equipped with the hardware (wetware) capable of handling this.

      I think that getting both parts right will be useful, so yes, it is (or might be) a big deal.

      Lastly, what do you want to use the computation force for? Write down the equations and calculations now that will yield a successful AI, if it's that damned easy. You can't, because designing it is more difficult than throwing expensive hardware at it.
      --
      Try translating 'Mensa' from Spanish to English.
  • My professor discussed this in one of my AI classes. Basically, the problem is that it is often rather difficult to decipher human language. Human language was designed to be ambiguous. Legal language is designed to be even more ambiguous. This allows humans to be able to make the final decisions and assumptions.

    It is pretty impressive that they were able to get 1.4 million knowledge representations into this system. Like a child, it will learn everything that is fed into it, whether it is good or bad. As the article mentioned, they had to teach Cyc that there are certain things (such as sex terms) that are salacious and should not be mentioned in public.
  • by Nindalf ( 526257 ) on Saturday June 08, 2002 @12:19PM (#3665051)
    Not exactly as exciting as it sounds.

    Basically, Cyc finds questionable conclusions following backwards reasoning, then asks humans for confirmation. A decent strategy, when you consider that the structure of common human knowledge is built to work for people with less than perfect logic.

    The exchange went something like:
    Datum: Humans are intelligent.
    Datum: Cyc is intelligent.
    Query: Cyc is a human?

    Not in natural language, though, but its custom data language.

    That, to me, is the biggest weakness of the system. IMHO, tying the data to a natural language, or to the real world in any other way, will take as much work as building up the knowledge directly tied to a natural language. This elaborate, detached structure is basically wasted effort, castles in the clouds, which is why they've had such a hard time applying it to the real world.
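The Datum/Query exchange above can be mimicked with a toy membership guesser: shared properties suggest a category, and the guess is queued as a question for human confirmation. This is purely illustrative; Cyc's actual representation language (CycL) and inference engine are far richer than a subset check.

```python
# Toy version of "backwards reasoning to a questionable conclusion".
# The property sets below are invented to mirror the Datum/Query example.

properties = {
    "human": {"intelligent"},   # Datum: humans are intelligent
    "Cyc": {"intelligent"},     # Datum: Cyc is intelligent
}

def questionable_memberships(entity, categories):
    """Guess category memberships from shared properties, for human review."""
    guesses = []
    for cat in categories:
        # If everything known about the entity also holds of the category,
        # the system cannot rule out membership -- so it asks.
        if cat != entity and properties[entity] <= properties[cat]:
            guesses.append(f"Query: is {entity} a {cat}?")
    return guesses

print(questionable_memberships("Cyc", ["human"]))
# ['Query: is Cyc a human?']
```

Seen this way, the famous "is Cyc human?" moment is a consistency check surfacing a gap in the fact base, not a flicker of self-awareness.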
  • by Cycon ( 11899 ) <steve [[ ] thePr ... m ['at]' in gap]> on Saturday June 08, 2002 @12:21PM (#3665055) Homepage
    I don't personally have anything to do with the project, but I thought it might be worth mentioning that there's an OpenCyc [opencyc.org] project being hosted by SourceForge. From their website:

    OpenCyc is the open source version of the Cyc technology, the world's largest and most complete general knowledge base and commonsense reasoning engine. Cycorp, the builders of Cyc, have set up an independent organization, OpenCyc.org, to disseminate and administer OpenCyc, and have committed to a pipeline through which all current and future Cyc technology will flow into ResearchCyc (available for R&D in academia and industry) and then OpenCyc.

    --Cycon

  • by Uberminky ( 122220 ) on Saturday June 08, 2002 @12:21PM (#3665056) Homepage
    I'll freely admit that I haven't played around with Cyc myself, and that I'm no AI expert. (Just a lowly Cognitive Science undergrad.) Now that that's out of the way, here's my opinion: I think the Cyc project is a load of baloney. I always have. (They've been working on it since, what, the 80s? Early 90s? I forget.) Anyway, I don't believe that this type of symbolic logic is truly good for very much. It may well have applications. (The "Cyc" project, which to my understanding was originally trying to capture just about all knowledge in hopes that it could achieve some sort of "intelligence", seems like a truly misguided idea to me. However the current, non-application specific version that could be fed only specific information on a specific topic, could possibly be of some use to someone. Maybe.) In one of my AI classes we saw a video on this project and the guy who started it. I must say I was thoroughly unimpressed, and very hopeful that none of my tax dollars were funding that nonsense.

    I think there are, in general, probably two ways we could hope to achieve "artificial intelligence" (whatever the heck that is): First, by some form of duplication of what's already there. For example, by digitizing an entire working animal/human brain. This would not require us to understand the workings of the greater structure of the brain, just the little parts that make it work. The second is by figuring out what sort of simple, fundamental bits are necessary to create a digital "brain" capable of learning and improving in a way that would enable it to eventually become "intelligent" (again, we would have no understanding of the final "intelligent" structure, only the methods that created them). I think Genetic Programming, while somewhat interesting and possibly even useful, is not the key. It has the same concept in mind though, I believe.

    But what do I know. Clearly not enough to dupe enough investors to pay for my silly musings.


  • The method of building Cyc is pretty limited at this point because it relies on human intervention to create the 'rules of common sense'. (A reason that open source is so helpful to the project)

    Until Cyc is allowed to self-generate rules, this will limit Cyc's growth to the abilities of humans to feed it information one fact at a time. This will greatly limit the database's access to less popular or more technical topics and will slow down the process of learning.

    Of course then there's the problem of context -- determining if information is satire, fiction, etc. One way around the problem of context might be to feed Cyc different channels of information indicating that 'this is history, this is fiction,' etc., and then when similar ideas or facts occur in several documents, to remember them as rules. This would allow the database to process current news, etc., and then ask for human intervention when a conflict is found.
    • Until Cyc is allowed to self-generate rules, this will limit Cyc's growth to the abilities of humans to feed it information one fact at a time. This will greatly limit the database's access to less popular or more technical topics and will slow down the process of learning.

      I think it'd be cool to teach Cyc to program. "A bubble sort is less efficient than a quicksort."

      Perhaps it could fix all Microsoft's bugs, without access to the source!



      Oh, btw there's another couple projects similar to Cyc:

      OpenMind [openmind.org] and MindPixel [mindpixel.com] .
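The "different channels of information" idea proposed earlier in this thread could look roughly like the sketch below: tag incoming statements by genre and only promote those that recur across trusted channels. The channel names and the support threshold are invented for illustration.

```python
# Toy sketch of channel-tagged ingestion with a recurrence threshold.
from collections import Counter

support = Counter()              # statement -> sightings in trusted channels
TRUSTED = {"history", "news"}    # channels allowed to support a rule

def ingest(statement, channel):
    """Count a sighting only if the source channel is trusted."""
    if channel in TRUSTED:
        support[statement] += 1

def promoted(min_support=2):
    """Statements seen often enough across trusted channels become rules."""
    return {s for s, n in support.items() if n >= min_support}

ingest("Paris is the capital of France", "news")
ingest("Paris is the capital of France", "history")
ingest("Hobbits live in the Shire", "fiction")   # ignored: untrusted channel
print(promoted())   # {'Paris is the capital of France'}
```

This leaves the hard part untouched, of course: deciding which channel a given document belongs to is exactly the satire-vs-news problem the parent comment raises.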

  • I wonder what else is considered to be "[unmentionable] in everyday applications". Looks like they nipped their child's adolescence in the bud...

    Well, I think we now know how the doomsday Terminator/Matrix scenarios evolve -- AI programmers too lazy to teach their pet about sex, religion and morality.

  • Fark.com (Score:3, Insightful)

    by MrBlue VT ( 245806 ) on Saturday June 08, 2002 @12:29PM (#3665090) Homepage
    Why is it that the last two stories have both come right off of fark.com?
  • by Anonymous Coward
    Imagine if Cyc was populated with unscreened data from the Internet. It would imagine that everyone owns an X10 spy camera, lives in a mansion, and spies on their sunbathing guests. Cyc would be an l33t hax0r and an avid pr0nographer. Cyc would know which Beanie Babies could fetch the best prices on eBay.
    Cyc would own 10,000 credit cards and undoubtedly have a gambling problem. 10 years later, Cyc would be strung out on crack and living in a whorehouse in Central America.
  • Lycos rejected it (Score:3, Informative)

    by Animats ( 122034 ) on Saturday June 08, 2002 @12:39PM (#3665120) Homepage
    From the article:

    The job ended because of turnover at Lycos after it was bought by Terra Networks. Cyc showed promise and could be brought back, though it can't improve search engines all by itself, said Tom Wilde, Terra Lycos' general manager of search services.

    Lenat has been announcing that Cyc will become "intelligent" Real Soon Now about every two years for the last decade. Nobody believes him any more.

    Someday that database may be useful, but not with a predicate-based world model. I regard Cyc as the ultimate answer to "Will rule-based expert systems ever become intelligent?" The answer is "no".

  • Cyc is not AI (Score:4, Informative)

    by localman ( 111171 ) on Saturday June 08, 2002 @12:41PM (#3665126) Homepage
    Cyc is a cool project - one that I've been reading about for 10 years now. But I don't think it is AI or ever will be. It basically collects a huge number of rules and has a deductive engine that helps it infer new facts based on what it knows. If you think that's all the human mind does, then you might want to read some books [amazon.com] by Douglas Hofstadter. Amazing stuff.

    Intelligence is about finding the differences between things that are the same, sameness between things that are different, and adapting to new situations fluidly. All of these are impossible with large collections of rules.

    I believe that machines may think someday, but it won't come from projects like Cyc - it'll be more from the neural network approach.
    • Intelligence is about finding the differences between things that are the same, sameness between things that are different, and adapting to new situations fluidly. All of these are impossible with large collections of rules.

      Nice summary. Cyc and programs like it "learn" by adding exceptions and tweaks and special cases to their existing rules, i.e., new rules. (Some people operate like that too -- consider a gambler who keeps coming up with "rules" about his lucky shirt that only works on Thursdays if he stirs his coffee clockwise...)

      True intelligence has more (IMHO) to do with limiting the total number of rules by rewriting the rules as necessary to fit a new model. (Classic example: Kepler's use of ellipses to describe planetary orbits instead of the prior "circles with cycles and epicycles.")

      (Of course, given the above, it appears obvious that many people are operating on artificial intelligence rather than the real thing ;-)
      • We should abandon all our fruitless efforts on AI. There is a much more achievable and lucrative goal to pursue... Artificial Stupidity. With this, we could replace all sorts of minimum wage workers, strengthening our economy, and making the undeserving rich even richer! And since we already know that stupidity is not only possible, but exists, it should be much easier to synthesize than intelligence.

        If only someone had thought of it sooner...
  • As with much in the computer industry, there is a fundamental contradiction with Cyc.

    Though it maintains a collection of integrated common sense, it is without the common sense of practical productive use.

    I suspect the project has partially gone public in the hope that some bit of common-sense use will be found/input. At which point, you can be sure it will then be extracted from the open public version and put proprietarily into the commercial/private version, ensuring practical use is limited to select and paying users.

    Or: how to charge for common sense.
  • Are they really "teaching" it common sense or are they telling it common sense. There's a big difference.

    When you "teach" somebody (or something), they usually do not remember or understand it right away. When you tell or command someone, they will do it. Learning something takes a while, whereas commanding something (like typing a command into a database) takes effect immediately.

    This whole common sense thing bugs me too. Some people think that leaving a rusty car on blocks in the front yard is totally acceptable. Some people drive up and down city streets with their car stereos cranked. How is it going to determine if abortion is right or wrong? Is it going to depend on the person inputting the information?

    Lots of questions to be answered here.
  • by hklingon ( 109185 ) on Saturday June 08, 2002 @01:04PM (#3665184) Homepage

    A lot of the comments I've read so far are missing something. Yes, it is just a giant fact-base in an expert system. And yes, that will exhibit human-esque "reasoning". And yes, a good argument can be made that this isn't "true" intelligence, and it won't develop true sentience ... but
    Imagine the military and educational benefits of such a system. The US military is getting their money's worth, and they know it. Imagine Cyc, with its full fact-base, on a device carried by every soldier. "Cyc, how do I fix this problem on an Apache helicopter?" "Cyc, where is the fuel tank on this specific enemy vehicle?" Can you imagine being an inquisitive child and having one of these things at your disposal? "Cyc, how does this work?" "Cyc, what is Fourier analysis?" .. and so on.

    This sort of system is really good for organizing and relating statements and presenting them in such a way that extraneous, unrelated results can be easily eliminated and related results can be located quickly. If it can be made to derive statements for its fact-base by reading anything available, then it would become almost like an Oracle of Knowledge. Eventually, with some years of refinement, it may be possible to ask the engine difficult theoretical questions ("How can we improve on the strength of carbon nanotubes?"), to which it would respond with an experimental procedure (as the answer is not immediately clear) to discover more facts toward the solution to the problem...

    When you consider this, it doesn't really matter if it has "true" intelligence or not. We don't have to argue the finer points on reasoning, intelligence, etc. No matter what, it will be a system that human intelligence can use to extend its own reasoning, and with that, I think, we will be able to make great strides in education and scientific discovery because we will be able to relate such broad and deep pools of knowledge.

    Wendell
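    The rule-chaining that a fact-base expert system like Cyc performs can be sketched as a toy forward-chaining loop. This is a simplified illustration only -- the facts, rules, and predicate names below are invented for the example, and Cyc's actual CycL engine is vastly richer:

```python
# Toy forward-chaining inference over a tiny fact base, in the spirit of
# (but far simpler than) an expert system like Cyc. Facts and rules are
# tuples; "?x" marks a variable to be bound during matching.

facts = {("isa", "Cyc", "AIComputer")}

# Each rule pairs a premise pattern with a conclusion pattern.
rules = [
    # If X is an AIComputer, X stores facts symbolically.
    (("isa", "?x", "AIComputer"), ("storesFactsSymbolically", "?x")),
    # If X stores facts symbolically, X can answer queries about them.
    (("storesFactsSymbolically", "?x"), ("answersQueries", "?x")),
]

def match(pattern, fact):
    """Return variable bindings if pattern matches fact, else None."""
    if len(pattern) != len(fact):
        return None
    bindings = {}
    for p, f in zip(pattern, fact):
        if p.startswith("?"):
            bindings[p] = f
        elif p != f:
            return None
    return bindings

def substitute(pattern, bindings):
    """Fill a pattern's variables from the bindings."""
    return tuple(bindings.get(t, t) for t in pattern)

# Keep applying rules until no new facts appear (a fixed point).
changed = True
while changed:
    changed = False
    for premise, conclusion in rules:
        for fact in list(facts):
            b = match(premise, fact)
            if b is not None:
                new_fact = substitute(conclusion, b)
                if new_fact not in facts:
                    facts.add(new_fact)
                    changed = True

print(sorted(facts))
```

    Chaining the two rules derives ("answersQueries", "Cyc") from the single starting fact, which is the same mechanical step-by-step "reasoning" the comments above are debating.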
    • I like the direction you take on this, however your extrapolation leads me to wonder if Cyc (or a similar system) would replace or reduce human curiosity. "Cyc said it can't be done, so there you go."

      Speaking for myself (which I do often) I would like to be able to dictate something to my computer, tell it to send a copy to Bob and Alice, change all the red squares to blue circles in a range of documents, remove the commercials from this TV show, and call Alice if she's at work, and send her a card at home...

      A computer is capable of all these things, sure; I'd like to give orders and have the computer write the code for the script, or task. This would be the truly useful thing for me.

      I read about Cyc back when it was just getting started; it would be nice if these kinds of everyday applications were usable.


      • On the Cyc company's website, one of the projects they're working on is implementing a system exactly like the one you described. Current computer software is capable of doing all of those things, but you have to do it all manually, one at a time, and all through separate interfaces.

        Using a Cyc-based front end as your interface brings about the ability for your computer to actually understand exactly what you mean when you tell it to do something... it uses its database to remove ambiguities from the orders you give it.

        One of my life-long dreams is to have a house (or at least a single computer) that takes orders in the same manner as the Enterprise-D computer and give useful information back in return.

        One application in particular that I'm looking forward to: I imagine a future where, if I'm learning a new programming language, I can ask the computer to bring up a short example of syntax for a particular piece of code or display the prototype for this function or that. My children might have a program for studying schoolwork where the computer might prompt them for answers and tell them if they're wrong. If it guesses that the child doesn't understand a particular topic, the program would give them a short overview of it and ask questions afterwards about how it ties into other areas of the subject.

        *sigh* I can't wait for the future...
    • by IamTheRealMike ( 537420 ) on Saturday June 08, 2002 @04:41PM (#3665891)
      Indeed, Cyc has already made money from some commercial implementations.

      For instance, they deployed the technology to an image library owned by a news company. The company had lots of images, all with different captions. The thing was, there was no fixed system for the captions; they were just short English descriptions of what was in the photo.

      So Cyc analysed all the captions and turned them into CycL (its own logic language). It then used its rudimentary natural language capabilities to figure out equivalents, so if you asked for "frightened child" it would match "girl with gun held to her head" even though they contained no equivalent words. Pretty clever stuff, though they're a long way from being able to make it formulate sentences itself.
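      The caption-matching trick above can be sketched at toy scale: map words to abstract concepts, apply a hand-built implication, and match at the concept level instead of the word level. The ontology and implication below are invented for illustration; real CycL is a full logic language, not a word-to-set lookup:

```python
# Toy concept-level caption matching, loosely in the spirit of the CycL
# image-library anecdote. All concept names here are hypothetical.

# Map surface words to abstract concepts (hand-built for this example).
word_concepts = {
    "frightened": {"Fear"},
    "child": {"Child"},
    "girl": {"Child", "Female"},
    "gun": {"Weapon"},
}

# One crude inference rule: a weapon in a scene with a child implies fear.
implications = {
    frozenset({"Weapon", "Child"}): {"Fear"},
}

def concepts_of(caption):
    """Collect the concepts evoked by a caption, then apply implications."""
    cs = set()
    for word in caption.lower().split():
        cs |= word_concepts.get(word, set())
    for premise, extra in implications.items():
        if premise <= cs:
            cs |= extra
    return cs

query = concepts_of("frightened child")
caption = concepts_of("girl with gun held to her head")

# The query's concepts are a subset of the caption's derived concepts,
# so the image matches even though the two strings share no words.
print(query <= caption)  # True
```

      The point of the sketch is only that the match happens in concept space: "frightened child" and "girl with gun held to her head" share zero words but overlapping concepts.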

  • by bitsformoney ( 514101 ) on Saturday June 08, 2002 @01:08PM (#3665199)
    In other news Noah and his pets survived the Great Flood in an Ark.
  • Am I the only one that read Cyc as "Cyc wall" or "Cyclorama"?

    Maybe I'm too much of a theatre tech geek.
  • I don't recall if any of the tests in the past have tried it, but one way to check Cyc's status would be to use it as the back end of a program participating in a Turing test.

    Another use would be to prime a neural net with a set of "known facts" and see how well the net takes off from there.

    Just because a tool on its own isn't particularly useful doesn't mean that it will not be useful as a component of some other tool.

    -Rusty
  • Here [greatmindsworking.com] are some links, etc. I gathered on Cyc a while back.
  • by senahj ( 461846 ) on Saturday June 08, 2002 @01:40PM (#3665300)

    Consulting The Jargon File entries for
    bogosity [tuxedo.org] and micro-Lenat [tuxedo.org],
    we see that the uLenat is the everyday unit of bogosity,
    and that it is named for Doug Lenat, whose project Cyc is.

    I tend to agree with Reid, myself.

    ob book: For a literary treatment of a connectionist machine
    that may or may not resemble Cyc,
    see Richard Powers _Galatea_2.2_ [amazon.com]

  • It must have been in the early '90s that I saw the article about Cyc. Lenat's been doing this for a long time.

    Actually, in that article, it had already asked if it was human.

    The Discover article was titled "At Last: A Computer as Smart as a Four-Year Old," possibly without the "At Last:" part.
  • by Louis Savain ( 65843 ) on Saturday June 08, 2002 @02:13PM (#3665432) Homepage
    There is a lot more to knowledge than the classification of namable objects and their relationships. There is a huge amount of knowledge that cannot be formalized with symbols. For example: playing soccer or football, recognizing a subtle fragrance, face, or musical tune, manual dexterity, finding one's way around an unfamiliar neighborhood, etc. -- in other words, the sort of common sense knowledge that can only be acquired through direct sensory experience.

    The interconnectedness of human cognition is so astronomically complex as to be intractable to formal approaches. This realization immediately makes the use of symbolic knowledge representation approaches to creating human-like common sense in a machine look rather silly. That 25 million dollars of taxpayers' money went into this Cyc thing is a testament to the effectiveness of the propaganda machine of the GOFAI community. Bravo!
    • That's all cool and neat, but it is about as unfair as you can be to the project, and still tell the truth.

      Forget the sentimental crap, and concentrate on the core problems you outlined... finding your way around an unfamiliar neighborhood, for instance.

      Why can't we simulate this? We could probably even explain to Cyc that this wasn't real, but only training, and that most of the principles would also apply to a human in the real world. Recognizing a face, and even music, should not be impossible either. Hell, we might even have it watch football or soccer and analyze the players... sure, it is only armchair sports, but then that is all most people do themselves.

      Direct sensory experience isn't as necessary as you suggest, and maybe by the time we finish preparing the thing for the real world, we'd also be able to give it the body it needs for such a journey.

      As for the money spent/wasted... I'm simply not knowledgeable enough to know if it is indeed folly or not. But there is a difference between pursuing a dead end course of research, and defrauding the government.
    • by Anonymous Coward
      There is a lot more to knowledge than the classification of namable objects and their relationships. There is a huge amount of knowledge that cannot be formalized with symbols. For examples, playing soccer or football, recognizing a subtle fragrance, face or musical tune, manual dexterity, finding one's way around an unfamiliar neighborhood, etc

      The problem of recognition of smells, faces, music, etc. is nothing more than the problem of classification of objects. Computers are better at recognizing faces than humans. Dogs are better than humans at recognizing scents (is that intelligence?). As a critic of AI you are going to have to raise the bar a bit higher than it used to be, as critics of AI did when machines first started playing chess well (they decided that chess playing ability wasn't such a good test of intelligence after all). You may as well admit that your definition of intelligence is "that which a machine can't do".

  • Assuming that intelligence comes from interaction, I think it would be interesting if they set up two Cycs, gave them a huge list of data, and let them talk and rate each other's generated inferences. You could even let them build new rules on top of each other's inferences. I think the results might be interesting.
  • We take for granted the enormous amount of data we have by the time we are 5 years old, let alone when we hit our teens. The best way to look at Cyc is as CliffsNotes on reality, plus some code to help enter them.

    Will Cyc ever become intelligent? Unlikely in my view. However, what if we had a human level AI right now? Without the data Cyc has, it would STILL fail the Turing test, simply because it would not be able to discuss day to day things intelligently.

    There's a book called Alternities. The premise is the standard "multiple timelines", except that the timelines in question diverged about 50 years ago. One timeline has access to the others, and sends agents over to get technologies that were developed in the other timelines.

    One agent's cover is blown because all his briefing said about a major cultural event was "A nuclear incident" - the incident in question was a terrorist attack like Sept. 11, only with a nuke. It changed the whole culture, but he didn't know it.

    Like that agent, a machine intellect would be at a loss in our world without some basic information - how would a computer that had never seen water know it was wet otherwise? How would it know skinned knees hurt?

    The only other solution is the Infocom "A Mind Forever Voyaging" approach - create your AI as an infant, and simulate the real world around it as it "grows up".
  • 'Cyc' means 'tit' in Polish. For that matter, CIPA (which stands for the Children's Internet Protection Act, I think) means 'cunt'. It's probably a good idea to make sure your project name passes the laugh test with the major language families before you pour millions into it.

    This was a lesson bitterly learned by the Warsaw weekly 'FART' back in the early 90's. Fart means stroke of luck in that language, but their luck ran out pretty fast.

    Not to mention the marketing team behind the Chevy Nova ['won't go'], Latin American division.
  • by peter303 ( 12292 ) on Saturday June 08, 2002 @03:10PM (#3665600)
    The A.I. software mania of the mid-1980s was a preview of the late-1990s Internet mania. Droves of computer science professors quit to start new A.I. companies. Exaggerated claims were made about the power of A.I. software. There were cries of "losing the computer race" with Japan. Japan had the Fifth Generation Project, an A.I. parallel computer hardwired for Prolog, but it fizzled out too.

    Although little practical progress was made in A.I., there were some decent spinoffs. The first workstations and first personal graphics computers came from A.I. efforts at Xerox, TI, Symbolics, and others. Soon after, Apollo, HP, and Sun followed with more generalized workstations using this technology. And then the Apple Macintosh and the Thieves of Redmond.
    Richard Stallman was left unmolested in the emptied-out MIT AI lab to develop his GNU tool family.

    Cyc was part of the US government-industry A.I. research institute in Austin. Then it was privatized into its own corporation, hobbling along on government and private funding.
  • "Would you like to play a game?" We'll see how things go from there.
  • Here's the unofficial Cyc FAQ [robotwisdom.com] and a collection of Cyc resources [robotwisdom.com]

    Cyc's corporate page [cyc.com] has links to many recent news articles [cyc.com], the OpenCyc project [opencyc.org], and other stuff of potential interest.

  • (* But the program also would determine that a burned-out car had meningitis, because it had no way of knowing that was ridiculous..... Other programs would fail to find anything wrong with a database entry that showed a 25-year-old with 20 years of job experience. *)

    I have encountered human recruiters who want things like 9 years of Java and web development experience.

  • It's a worthwhile exercise. Similar information and sets of rules would have to be gathered and used to teach any true AI in the future anyway in order to bootstrap it into a useful state. The alternative is that each AI starts as a newborn and has to be taught manually.

  • There is an open source version of Cyc called OpenCyc, and it's available right here [opencyc.org].
