AI Open Source Stats Science

IQ Test Pegs ConceptNet 4 AI About As Smart As a 4-Year-Old 121

An anonymous reader writes "Artificial and natural knowledge researchers at the University of Illinois at Chicago have IQ-tested one of the best available artificial intelligence systems to see how intelligent it really is. Turns out, it's about as smart as the average 4-year-old. The team put ConceptNet 4, an artificial intelligence system developed at M.I.T., through the verbal portions of the Wechsler Preschool and Primary Scale of Intelligence Test, a standard IQ assessment for young children. They found ConceptNet 4 has the average IQ of a young child. But unlike most children, the machine's scores were very uneven across different portions of the test." If you'd like to play with the AI system described here, take note of the ConceptNet API documentation and this Ubuntu-centric installation guide.
  • Comment removed (Score:5, Insightful)

    by account_deleted ( 4530225 ) on Thursday July 18, 2013 @09:13AM (#44317231)
    Comment removed based on user account deletion
  • by Anonymous Coward on Thursday July 18, 2013 @09:16AM (#44317261)

    No!

  • by Sparticus789 ( 2625955 ) on Thursday July 18, 2013 @09:19AM (#44317289) Journal

    Does the AI use contractions?

    • I never really understood Data's issue with using contractions. There are pretty well-defined rules for them, and rules are something that can be pretty easily programmed, so he really shouldn't have had an issue with them.
      • He does use them in the series finale, in the alternate future timeline. Really, I think it was more a plot device, both to let the crew distinguish him from Lore and to be brought up on occasion. Like the episode where Riker is taken hostage by an alien on the surface who wants Riker to be his dad, and during a hallucination Data uses a contraction, which serves as the final piece of evidence that the events going on are not real.

        • He should have been able to whistle too.
          • The whistling thing I could understand, since he presumably had no respiration.

          • by Anonymous Coward

            In the TNG episode Birthright Part 1, Dr. Bashir remarks how Data is "breathing" ("...Yes. I do have a functional respiration system... to maintain the thermal control of my internal systems...") and also has a "pulse" ("...My circulatory system not only produces bio-chemical lubricants, but regulates micro-hydraulic power...").

            I don't think they explained why he was unable to whistle (in tune); my guess is it was part of the story arc about his quest to become more human.

      • by RobinH ( 124750 )
        The guy who built him deliberately made him *less* lifelike than Lore because the colonists didn't like how eerily human-like Lore was. His inability to use contractions was one of those things.
        • Thank you. That makes more sense and I remember the episode where that's explained now. I guess it's time to hand in my TNG card.
          • Re: (Score:2, Funny)

            by Anonymous Coward

            Not that it matters, but your XO emoticon means "hug and kiss" in straight culture.

      • I never really understood Data's issue with using contractions.

        I thought it was a deliberate inability programmed in by his creator to make Data less like Lore and to make him feel less threatening to the other colonists.

      • Alexander Hamilton never used the word "whilst" but James Madison never used the word "while." [upenn.edu] This does not imply that either founding father was an ill-programmed android.
        • by Anonymous Coward

          No, no, you're definitely on to something there.

    • No, but at least it doesn't hang when you ask it the value of pi.

      • Re:But (Score:5, Funny)

        by leonardluen ( 211265 ) on Thursday July 18, 2013 @09:30AM (#44317413)

        For a 4-year-old, I am pretty sure the value of pi is "more" and possibly "with ice cream".

        • Re: (Score:3, Funny)

          by Laxori666 ( 748529 )
          No, you see, by pi he meant the mathematical number - 3.1415... - not pie as in "food". Even so, the value of pie wouldn't be "more" and "with ice cream" - that's not the value of something; those are desires and descriptors of something. Value would be more like how many dollars it's worth, or how many good deeds/grades one has to get to receive the pie.
          • Re:But (Score:5, Insightful)

            by flyingfsck ( 986395 ) on Thursday July 18, 2013 @10:27AM (#44318033)
            I'm wondering whether your post deserves a "whoosh" or a "huh?", or whether you are a 4-year-old AI...
          • I guess it was too early to expect that the researchers would have developed a reliable humor-recognition heuristic.
            • by snadrus ( 930168 )

              To be accurate, it would need the same input inaccuracies (pi and pie sounding identical), combined with the order of learned experiences that influences weighting (pie before pi), combined with a 4-year-old's limited capacity for context, combined with need (eat) & desire (sweet foods).

              We have a ways to go.

      • Re:But (Score:4, Funny)

        by Laxori666 ( 748529 ) on Thursday July 18, 2013 @10:14AM (#44317865) Homepage
        That's how we lost ol' Timmy. Asked him to give us a few digits of that good ol' pi and he just durn went and hung himself right in the ol' barn. We stopped teachin' the young'uns maths after that lil' incident.
  • Misleading crap (Score:5, Insightful)

    by Anonymous Coward on Thursday July 18, 2013 @09:20AM (#44317303)

    We are nowhere near getting an AI that can navigate the world at the level of a 4 year old. All the program can do is simple tasks in vocabulary and such with no real understanding of those words. Nothing to see here.

    • by Doug Otto ( 2821601 ) on Thursday July 18, 2013 @09:21AM (#44317321)
      Sounds like my ex-wife.
      • Re: (Score:1, Insightful)

        by Anonymous Coward

        And you married her. How smart does that make you? About as smart as someone linking his FaceBook account to his Slashdot account.

    • A 4 year old is so far ahead of any current AI they shouldn't even think of making this kind of comparison. It would be a Nobel prize winning feat to produce AI that can operate at the level of a 6 month old, let alone something that can walk around, talk, learn, imagine, and play.

      • I had a six-month-old a little more than a year ago. They don't do much at six months other than eat, sleep, and grow; my daughter wasn't even interested in toys until around seven to eight months. All the interesting stuff starts taking place around one year, when they start learning to crawl/walk and mimic sounds. Around 18 months they start listening to simple instructions and saying actual words. Development progress not guaranteed and may vary; side effects may include poop on walls, pink eye,
        • The ability to look around, recognize faces, react to and locate sounds, control the behaviour of parents with crying, and take objects and put them in its mouth as well as a six-month-old does would be Nobel-prize-winning AI.

          • With the proper sensors they can look around, recognize faces, and react to and locate sounds. I'm not sure if giving a machine a mouth so it can shove random objects into it is really Nobel prize worthy...
        • Wait a few more years and she may be scripting Lua in Roblox! My 10-year-old son has been doing this for the last year with no help from me!
          • Yeah, I'm excited. I was programming at seven when my Dad gave me his old Atari 130XE to play with. I can't wait to see what she'll do, since computers are so cheap they're practically throwaway items now.
    • I think the point is that, as in a real child's development, this is a stepping stone; it's something A.I. has to go through in order to mature.

      I never understood why people think an A.I. should learn any faster than a real child could. It's like people think that because it's a computer it automagically knows everything there is ever to know, but in reality A.I. still requires training and positive/negative reinforcement, just like real children do.
      • I agree, but after a program has learned for a while, you can make a copy of it. So if it takes 4 years to teach the program, it doesn't mean it takes 4 years for every copy of the program.

        • I see the issue with copying the A.I.: you end up with the breeding-weakness problem. By developing A.I. and not copying it, you can create a host of experiences for different versions of the A.I., as opposed to making sure all A.I.s have the same experiences. That allows the A.I. to be more human, in that each individual A.I. can contribute something to a collective that others haven't experienced. I know it sounds silly, but one may have more success at recognizing an abstract dog (think cartoon) because of a
          • It makes sense that you would want M versions of the program, but once you have some that work well enough for something you want to do, you can still create copies of those versions.

            Want more diversity? Get real 4 year olds. Don't forget snacks.

          • by narcc ( 412956 )

            You say that like we have anything remotely like the kind of AI implied by the summary.

            We're not even close. Hell, we don't even understand the problem at the simplest level. (Hint: The GP's 6-month-old comment was spot on.)

      • I never understood why people think an A.I. should learn any faster than a real child could.

        I think that's because we're used to computers processing data much more quickly than we do. This probably wouldn't be true of early AIs, but we judge things on our personal experience, and we have no experience with AIs, so we go to the closest related thing, the computer. If we could create a super-intelligent AI, it could scan the Internet to bring itself up to speed pretty quickly, and would probably be re
      • by Kjella ( 173770 )

        I never understood why people think an A.I. should learn any faster than a real child could. It's like people think that because it's a computer it automagically knows everything there is ever to know, but in reality A.I. still requires training and positive/negative reinforcement, just like real children do.

        Because computers are typically much faster than me at doing things: they can read Wikipedia faster than I could read an A4 page, my math speed is measured in seconds per floating-point operation rather than the other way around, and they can query a huge database much faster than I could find the index cards at the library, much less find anything. What they're short on is the ability to comprehend and learn, not process. If they're so slow, it's because they're waiting for humans to give them feedback or tweak thei

    • Re:Misleading crap (Score:5, Insightful)

      by ebno-10db ( 1459097 ) on Thursday July 18, 2013 @09:33AM (#44317439)

      We are nowhere near getting an AI that can navigate the world at the level of a 4 year old. All the program can do is simple tasks in vocabulary and such with no real understanding of those words. Nothing to see here.

      The headline is the usual attention grabbing junk, but the article itself does a decent job of explaining it:

      Sloan said ConceptNet 4 did very well on a test of vocabulary and on a test of its ability to recognize similarities.

      “But ConceptNet 4 did dramatically worse than average on comprehension—the ‘why’ questions,” he said.

      One of the hardest problems in building an artificial intelligence, Sloan said, is devising a computer program that can make sound and prudent judgment based on a simple perception of the situation or facts–the dictionary definition of commonsense.

      Commonsense has eluded AI engineers because it requires both a very large collection of facts and what Sloan calls implicit facts–things so obvious that we don’t know we know them. A computer may know the temperature at which water freezes, but we know that ice is cold.

      “All of us know a huge number of things,” said Sloan. “As babies, we crawled around and yanked on things and learned that things fall. We yanked on other things and learned that dogs and cats don’t appreciate having their tails pulled. Life is a rich learning environment.”

      IQ tests mean little enough for a human being; for AI they're little more than cute. Most 4-year-olds know if someone is mad at them (expression, tone of voice, etc.) and, from past experience, often know why someone is mad at them. They're also clever enough to pretend they don't know why someone is mad at them. Most importantly (and practically), they know to start acting cute before somebody kills them. Let me know when an AI program can do that.

      P.S. This is not to disparage the AI work, just to keep things in perspective.

      • Ultimately the source of all our information comes from sensory input. That's how we know ice is cold, that things fall, some things hurt, others are pleasurable, etc. On top of that sensory data we construct an intelligent (symbolic) representation of the world, in tandem with a language (or several languages) which we share with others and can thus use to exchange ideas.

        What AI researchers seem to be doing is skipping the sensory input part, because it's hard, and just trying to codify the intelligent r
        • by Anonymous Coward

          I've often had thoughts along similar lines: we expect AI systems to be good after a few days or weeks of learning and are surprised they can't match what humans can do after years of learning. There are some projects that do long-term learning like NELL [wikipedia.org] which continually reads text from web pages and uses what it has learned already to better understand the text and therefore learn more from it. Also, maybe it's not quite sensory, but a lot of (most?) AI research these days is in machine learning [wikipedia.org], which is

        • by narcc ( 412956 )

          What AI researchers seem to be doing is skipping the sensory input part, because it's hard

          The sensory input part is easy. It's the subjective experience part that's hard. Well, calling it hard is misleading; it's impossible with purely computational approaches.

      • Most importantly (and practically), they know to start acting cute before somebody kills them.

        Is this true? I agree with most of what you say, but I feel like when people are really angry at kids, the kids just get scared, not cute.

        • I feel like when people are really angry at kids, the kids just get scared, not cute

          The trick is to get cute just before the adults hit that stage. As to whether this approach works, the proof is that my daughter is still alive.

      • by Trepidity ( 597 )

        Most importantly (and practically), they know to start acting cute before somebody kills them. Let me know when an AI program can do that.

        Brb, writing a grant proposal, "KAW-AI-I: Towards technologies for emotionally manipulative artificial agents".

    • A four-year-old can project and read emotions with surprising sophistication (especially the projecting part), understand and communicate with spoken language, climb around and use motor skills for pretty complex tasks, and do about a million other things that no one AI is even close to being able to do. Even the best AIs still struggle with such tasks *individually*, much less as a whole package. My daughter could understand plenty of things at 4 that would give Siri a fit, and Siri isn't even potty trained!

    • by tmosley ( 996283 )
      Yeah, I had a sudden jolt of existential terror when I read the headline. All was calm again after reading the article.

      For those NOT utterly terrified at the concept of a strong AI, I suggest you read some of Yudkowsky's writings on the subject here. [yudkowsky.net]
      • by narcc ( 412956 )

        There's no reason to worry. Computational approaches are all doomed to failure. We can only assume that the researchers are well aware of this well-established fact and that headlines like these are little more than a gag to land some free press.

        As for the singularity nuts, well, they might as well be writing about the pending war between the grays and the lizard men in the hollow earth. It's just as silly.

  • by shadowrat ( 1069614 ) on Thursday July 18, 2013 @09:22AM (#44317325)
    They didn't assess how intelligent this AI is. They assessed the IQ test and found it to be a poor indication of intelligence.
    • by tmosley ( 996283 )
      A poor indication of MACHINE intelligence. Which is natural, as it was not meant to test machine intelligence. It's like testing a car according to a children's physical fitness test. It will excel wildly in some areas, while falling completely flat in others.
  • Let me get started:

    * IQ tests don't measure intelligence
    * IQ tests only measure a certain *type* of intelligence.
    * Your jealous because I have an IQ of -2147483648.
    * I'm too smart for IQ tests.
    * You're book smart but I'm street/code smart
    * random troll at -1

    Have I missed anything?

    • Re: (Score:3, Funny)

      by MugenEJ8 ( 1788490 )
      Yup... an apostrophe.
    • You missed that IQ should be determined purely by video game abilities. Crack a door lock, solve a puzzle, do some math at an NPC vendor, shoot some people in the head in a logical fashion and tada, 150 IQ.
    • You have an IQ of 0xFFFFFFFF80000000?

    • Yes. Vocabulary does not indicate intelligence, merely memory and language exposure, and its continued use as an indicator shows how flawed IQ tests are.

        Yes. Vocabulary does not indicate intelligence, merely memory and language exposure, and its continued use as an indicator shows how flawed IQ tests are.

        Don't feel too bitter, MENSA rejected me, too.

  • The link seems to point to ConceptNet 5 now.
    If they re-run their IQ test, I think they will gleefully find it is now as smart as a 5-year-old.


  • While I am excited about advancements in AI, it makes one wonder what the use of such scores is, beyond some marketing.

    IQ is debunked. It's not a true measure of intelligence. If anything, it measures how much a person is willing to invest (time/effort) in scoring well on said test.

    Compared to other children, its scores vary wildly across the subtests, unlike any normal child's.

    While it's still an achievement to have a sophisticated program worthy of an "AI" label, we are, unfortunately, nowhere near true AI.
  • But unlike most children, the machine's scores were very uneven across different portions of the test.

    That's not a bug, that's the beta version of GPP [wikia.com].

  • by Anonymous Coward

    These tests don't tell us much about the power of an AI, and here is why. If you give a human a test with a million questions, then giving one more question is not going to tell you much more. You could probably remove some of the questions, too, without removing much information about how smart the person is. It turns out some of the questions are much more valuable than others when it comes to figuring out how smart someone is. If you put enough statistics work into that, you'll be able to condense those million questions int

  • by puddingebola ( 2036796 ) on Thursday July 18, 2013 @10:12AM (#44317855) Journal
    Unfortunately the AI also lied that it had completed its arithmetic assignment so that it could go out to recess early. It is also suspected of taking an extra snack at snack time, and caused a disturbance during nap time.
  • Humans are not programmed to be intelligent. Intelligence is just an aspect of how the brain (specifically the cortex) works.

    Even very stupid children or other mammals can learn to do things like catch a ball, walk, remember someone's face or voice or that they had spaghetti for dinner. AI is concentrating on trying to duplicate the wrong types of behavior, starting from the wrong end. It doesn't tell us anything useful about humans or intelligence.

  • by Capt.Albatross ( 1301561 ) on Thursday July 18, 2013 @11:10AM (#44318523)

    "ConceptNet 4 did dramatically worse than average on comprehension—the ‘why’ questions.” - Robert Sloan, lead author of the study.

    This comment strengthens my feeling that current AI is making progress in faking many of the accidental attributes of intelligence, but has not discovered the essence.

    The development of children's mental abilities seems to accelerate over time, as if there is positive feedback, but this does not seem to have emerged in AI yet, especially if we factor out Moore's law. On the contrary, any given exercise in developing AI through machine learning seems to hit a wall of diminishing returns at some point. Is anyone aware of a project that has not experienced this effect?
     

  • Does anyone know where to access technical information about the actual study, or how they conducted the IQ test? ConceptNet is just a database plus a library with some NLP parsing tools and accessors for the database (the concept hypergraph), but I wonder how they actually conducted the test, as that doesn't seem to be a trivial extension of the available tools...
