IBM's Debating AI Just Got a Lot Closer To Being a Useful Tool (technologyreview.com) 24

We make decisions by weighing pros and cons. Artificial intelligence has the potential to help us with that by sifting through ever-increasing mounds of data. But to be truly useful, it needs to reason more like a human. An artificial intelligence technique known as argument mining could help. From a report: IBM has just taken a big step in that direction. The company's Project Debater team has spent several years developing an AI that can build arguments. Last year IBM demonstrated its work-in-progress technology in a live debate against a world-champion human debater, the equivalent of Watson's Jeopardy! showdown. Such stunts are fun, and the event provided a proof of concept. Now IBM is turning its toy into a genuinely useful tool. The version of Project Debater used in the live debates included the seeds of the latest system, such as the capability to search hundreds of millions of news articles. But in the months since, the team has extensively tweaked the neural networks it uses, improving the quality of the evidence the system can unearth. One important addition is BERT, a neural network Google built for natural-language processing, which can answer queries. The work will be presented at the Association for the Advancement of Artificial Intelligence conference in New York next month.

To train their AI, lead researcher Noam Slonim and his colleagues at IBM Research in Haifa, Israel, drew on 400 million documents taken from the LexisNexis database of newspaper and journal articles. This gave them some 10 billion sentences, a natural-language corpus around 50 times larger than Wikipedia. They paired this vast evidence pool with claims about several hundred different topics, such as "Blood donation should be mandatory" or "We should abandon Valentine's Day." They then asked crowd workers on the Figure Eight platform to label sentences according to whether they provided evidence for or against particular claims. The labeled data was fed to a supervised learning algorithm.
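The label-then-train pipeline described above can be sketched in miniature. This is a hypothetical stand-in, not IBM's actual system: the sentences and labels below are invented examples of crowd-labeled evidence, and a simple TF-IDF + logistic-regression classifier fills in for the neural networks (including BERT) the article says IBM really uses.

```python
# Hypothetical sketch of the crowd-labeling + supervised-learning pipeline
# from the summary. Sentences, labels, and the classifier are illustrative
# stand-ins, not IBM's actual data or models.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Crowd workers label whether a sentence is evidence for (1) or against (0)
# a claim such as "Blood donation should be mandatory".
sentences = [
    "Mandatory donation would ensure hospitals never run short of blood.",
    "Regular donors report a sense of civic pride and community.",
    "Forcing people to donate violates bodily autonomy.",
    "Coerced donors are more likely to conceal disqualifying conditions.",
]
labels = [1, 1, 0, 0]  # 1 = supports the claim, 0 = opposes it

# Fit a simple evidence classifier on the labeled sentences.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(sentences, labels)

# Classify a previously unseen candidate sentence.
print(model.predict(["Compulsory schemes guarantee a steady blood supply."]))
```

At IBM's scale the same supervised setup is applied to billions of sentences, with the learned model then used to surface and rank fresh evidence for new claims.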

  • Just another Circle Jerk?

    • It could replace every member of Congress. A whole bunch of AI's might get something done.
      • by Tablizer ( 95088 )

        It could replace every member of Congress. A whole bunch of AI's might get something done.

        Or at least foul up society in a more cost-effective way.

    • Hey, you know what else is a useful tool?

      It's you!

    • I'd give you the funny mod if I ever had a point to give. Still, it's a cheap shot at the computers. This comment is my heavier one:

      An AI would have an enormous advantage over human debaters precisely because it is completely ignorant of reality and has no concern with such trivialities as "truth, justice, and the American way". My evidence is mostly anecdotal, based partly on my own experience as a debater and partly on observing other debaters.

      In my own experience, I was a pretty good debater. I only debated for a couple

  • by DatbeDank ( 4580343 ) on Wednesday January 22, 2020 @04:31PM (#59645438)

    Anyone remember Microsoft's Tay and how trolls trained it to show some uncomfortable truths?
    I wonder how this "debating" AI will handle uncomfortable truths about immigration and criminality.

    • by Shotgun ( 30919 )

      IBM won't let users ask it questions like "Can a man have a baby?" IBM knows that each letter of its name will get moved back one in the alphabet, and we've already learned what happens when you make computers lie.

      I->H
      B->A
      M->L

  • by kubajz ( 964091 ) on Wednesday January 22, 2020 @04:34PM (#59645444)
    From TFA it seems like the AI cannot really explain how it picked those "50 top sentences to support the claim", nor can it logically connect the sentences or follow claims. That might work in a TV "debate" but would not be very useful for the problem of "not really thinking like a human". In other words, if one was hoping for AI to be able to logically explain its conclusions... then this is not the right AI.
    • by ljw1004 ( 764174 )

      From TFA it seems like the AI cannot really explain how it picked those "50 top sentences to support the claim", nor can it logically connect the sentences or follow claims. That might work in a TV "debate" but would not be very useful for the problem of "not really thinking like a human". In other words, if one was hoping for AI to be able to logically explain its conclusions... then this is not the right AI.

      The logical connection that's missing from your sentence is your assumption that humans use logical connections in their thinking and arguments. They don't. Thus, the implicit conclusion of your post is that this AI thinks and argues like a human.

      • by Tablizer ( 95088 ) on Wednesday January 22, 2020 @05:46PM (#59645646) Journal

        The logical connection that's missing from your sentence is your assumption that humans use logical connections in their thinking and arguments. They don't. Thus, the implicit conclusion of your post is that this AI thinks and argues like a human.

        In professional debates, logic matters more. The judges in formal debates should be well-versed in logic and the common fallacies[1].

        "Off the street" humans, on the other hand, are much more susceptible to socially based arguments, such as "those guys are liars, look at their shifty eyes". You know, the kind the orange guy masters. [cnn.com] Most humans are inherently social, but not inherently logical.

        There may be such a thing as "social logic", but I haven't seen it carefully documented anywhere[2]. I wish it were, because being of the Asperger type, I need a refresher. It doesn't necessarily mean I will (mis)use it, but I'd like to at least recognize and better counter it.

        [1] There are "standard" fallacies that are not necessarily based on pure logic. "Slippery slope" is an example. Slippery slopes do happen and are called "positive feedback cycles", but it is still usually considered a fallacy because both sides can claim it, and because the future is hard to predict. One can "safely" use it if one can identify a solid pattern or driving force that's relevant.

        [2] Dale Carnegie's well-known book lays out some general rules, but it doesn't delve into the "negative" aspects much. Some of it is also obsolete in my opinion, such as using others' names often. Many are creeped out by that.

          Some of it is also obsolete in my opinion, such as using others' names often. Many are creeped out by that.

          You will find people who are creeped out by other people who appear to be happy. So what? Most people love to hear their own name, because it establishes that the listener was paying attention when they met.

    • by Tablizer ( 95088 )

      Until the "bot" can create models of reality that humans can readily read, tune, and debug, I suspect mass pattern matching will hit a wall.

      By "model" I don't necessarily mean a physics model (although that may help some). It could be an abstract model such as a network diagram (node and pointer) of social relationships and related meta-info. And/or it could be "logic rules" similar to the Cyc database. If I were one of those big IT co's sitting on piles of cash, I'd buy the Cyc database ASAP. There's nothi

      • > Until the "bot" can create models of reality ... mass pattern matching will hit a wall.

        This. It's often overlooked that humans have the advantage of a complex environment. AI needs to be on similar footing to really 'understand'; it needs embodiment to improve further.
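The human-readable "model of reality" the grandparent comment describes, a node-and-pointer graph of social relationships with attached meta-info, can be sketched with a plain data structure. The entities, relations, and metadata here are all hypothetical:

```python
# Hypothetical sketch of a human-readable node-and-pointer model of social
# relationships. Names, relations, and metadata are made-up illustrations.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    # edges: relation label -> list of (target node name, metadata dict)
    edges: dict = field(default_factory=dict)

    def link(self, relation, target, **meta):
        """Attach a labeled, annotated pointer to another node."""
        self.edges.setdefault(relation, []).append((target, meta))

# Build a tiny model that a human could read, tune, or debug directly.
alice = Node("Alice")
alice.link("trusts", "Bob", since=2015)
alice.link("works_with", "Carol", project="debate-ai")

# Because the model is plain data, every input to an inference is auditable.
for relation, targets in alice.edges.items():
    for target, meta in targets:
        print(f"Alice --{relation}--> {target} {meta}")
```

The point of such a representation, as opposed to opaque network weights, is exactly the readability and debuggability the comment asks for; systems like the Cyc database encode knowledge in a comparably inspectable (though far richer) rule form.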
  • Cue Monty Python argument sketch in 3..2..1..

  • Yes, I'd be impressed if this could identify relevant evidence to support a cogent argument. But that's not the same thing as reasoning or employing logic. They're probably just recognizing statements as relevant (via word2vec) and then classifying them as constructive vs. destructive via sentiment analysis. Then they refine their ranking of candidate evidence based on the credibility of the source. For now, I suspect they prefer positive fare and toss the negative.

    My first challenge to them: Can the system r
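The pipeline the comment above speculates about, relevance ranking plus a pro/con split, can be sketched with toy stand-ins: bag-of-words cosine similarity in place of word2vec, and a tiny hypothetical word list in place of a real sentiment model.

```python
# Toy illustration of the commenter's guess: rank candidate sentences by
# relevance to a claim, then tag them pro/con via crude sentiment. Cosine
# over bags of words stands in for word2vec; the word lists are hypothetical
# placeholders for a trained sentiment analyzer.
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

POSITIVE = {"benefit", "improve", "save", "ensure"}   # placeholder lexicon
NEGATIVE = {"harm", "risk", "violate", "shortage"}    # placeholder lexicon

def rank_evidence(claim, candidates):
    """Return (relevance, stance, sentence) tuples, most relevant first."""
    claim_bow = Counter(claim.lower().split())
    results = []
    for sent in candidates:
        words = sent.lower().split()
        relevance = cosine(claim_bow, Counter(words))
        sentiment = sum(w in POSITIVE for w in words) - \
                    sum(w in NEGATIVE for w in words)
        stance = "pro" if sentiment >= 0 else "con"
        results.append((relevance, stance, sent))
    return sorted(results, reverse=True)

claim = "blood donation should be mandatory"
candidates = [
    "mandatory blood donation would ensure supply",
    "mandatory schemes violate bodily autonomy",
    "the weather is nice today",
]
for relevance, stance, sent in rank_evidence(claim, candidates):
    print(f"{relevance:.2f} {stance}: {sent}")
```

The irrelevant sentence scores near zero and drops to the bottom, which is the behavior the commenter attributes to the real system, just without any of the neural machinery.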

  • by JustAnotherOldGuy ( 4145623 ) on Wednesday January 22, 2020 @05:58PM (#59645678) Journal

    I would pay for this service so that when I argue, errr, I mean "debate" with my wife, it could do that for me whilst I go out back and have a beer or three. Hopefully it'll have an extra big battery for those longer ummm "discussions".

    And really, if it loses the "debate" I'll be no worse off than before.

  • It is impossible to tell from a marketing presentation what they actually did, but it is most unlikely that an artificial neural network was the only or even the main ingredient. There is much more to AI than ANNs, but that is beyond a journalist to understand.

    Just feed some training data into an ANN and magic happens. Not so. (Although ANNs are rather spooky in what they can achieve.)

  • The task at hand seems to be generating a lot of arguments and throwing out the weak ones.
  • Don't give me that, you snotty-faced heap of parrot droppings!
    Shut your festering gob, you tit! Your type makes me puke! You vacuous toffee-nosed malodorous pervert!!!
