It's funny.  Laugh. Science

Russian Chatbot Passes Turing Test (Sort of)

Posted by CmdrTaco
from the would-you-like-to-play-a-game dept.
CurtMonash writes "According to Ina Fried, a chatbot is making the rounds that successfully emulates an easily-laid woman. As such, it dupes lonely Russian males into divulging personal and financial details at a rate of one every three minutes. All jokes aside — and a lot of them come quickly to mind — that sure sounds like the Turing Test to me. Of course, there are caveats. Reports of scary internet security threats are commonly overblown. There are some pretty obvious ways the chatbot could be designed to lessen its AI challenge by seeking to direct the conversation. And finally, while we are told the bot has fooled a few victims, we don't know its overall success rate at fooling the involuntary Turing "judges.""
  • by DaedalusHKX (660194) on Sunday December 09, 2007 @11:37AM (#21631265) Journal
    My point is proven yet again: the vast majority of humanity lacks the simple survival skills that would make us worthy of propagating and passing on our genes, of evolving and surviving, if you will. To me this simply proves that the vast majority, male or female, wholly obedient and completely brainwashed to see ONLY what is in front of them, is truly the greatest curse of mankind: its ready obedience, nay, not obedience but plain WORSHIP, of authority. Authoritarianism has been a curse, and every time its signs show, nobody cares to take note. The masses get what they deserve for not thinking for themselves. In this situation, whoever gets duped gets, IMHO, their JUST DESERTS!!
  • by G4from128k (686170) on Sunday December 09, 2007 @11:54AM (#21631359)
    The debate about this chatbot appears to be part of the ever-rising bar placed against AI. This chatbot has won the Turing test for a segment -- perhaps a gullible/dumb segment -- of the human population. Yet still people argue that it does not really count. This is analogous to the "computers can never beat people at chess" meme formed at the dawn of the computing age. And when the first programs did beat some people, the meme changed to "computers can never beat experts at chess." And when computers got better, the meme changed to "computers can never beat the top-ranked humans at chess." That barrier, too, has been breached.

    Now we have a chatbot that can fool some people some of the time, so the bar has been raised on "true AI" to say that computers can't fool expert, suspicious Turing test judges. This too will fall. Human intelligence is growing very slowly (IQ tests actually have to be renormed every decade or so), but computer intelligence is growing much, much faster.
  • by cbart387 (1192883) on Sunday December 09, 2007 @12:14PM (#21631491)
    It is not a Turing test. A Turing test is one in which the judge is trying to figure out whether the 'chatbot' is a human or an AI program. This story is about people who are under the assumption that they are talking to a human.

    The key part of the Turing test, to me, is that the judge must know they are engaged in the test. The best example of this is ELIZA. To someone critically examining it, it does not pass the Turing test; to someone expecting a therapist, most of its responses do make sense. The point is that if you're not trying to trip up the chatbot, it's not hard for it to fool someone.
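
    ELIZA's trick is little more than keyword pattern matching with canned reply templates. A minimal sketch of the idea in Python (these three rules are invented for illustration and are not Weizenbaum's original DOCTOR script):

```python
import re

# Hypothetical ELIZA-style rules: a keyword regex mapped to a canned
# reply template that reuses the captured text.
RULES = [
    (re.compile(r"\bi am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bi feel (.+)", re.IGNORECASE), "Tell me more about feeling {0}."),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Why does your {0} concern you?"),
]
DEFAULT_REPLY = "Please go on."

def respond(line: str) -> str:
    """Return the first matching canned reply, or a neutral default."""
    for pattern, template in RULES:
        match = pattern.search(line)
        if match:
            return template.format(*match.groups())
    return DEFAULT_REPLY
```

    To a cooperative user, even the fallback "Please go on." reads as attentive listening; only an adversarial judge probes for the seams.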
  • by Haeleth (414428) on Sunday December 09, 2007 @12:20PM (#21631535) Journal
    Why is this post being modded up? It's a lovely example of the straw-man fallacy, but that hardly deserves an Insightful moderation.

    This chatbot has won the Turing test for a segment -- perhaps a gullible/dumb segment -- of the human population.
    No, it hasn't. It has convinced a gullible/dumb and unsuspecting segment of the human population that it is a human, which is not unimpressive in its own right, but that isn't the same as passing the Turing test, which requires that the examiner converse with a human and a computer at the same time, be fully conscious of that fact, and deliberately try to determine which is which.

    Now we have a chatbot that can fool some people some of the time, so the bar has been raised on "true AI" to say that computers can't fool expert, suspicious Turing test judges. This too will fall.
    Um, no. Nobody with a clue has ever claimed that a chatbot that is capable of convincing any human being whatsoever that it is a human represents true AI. The bar has always been set at fooling Turing-test judges, and the Turing test has been fixed in its current form for decades.

    Indeed, it's easy to show that fooling some people some of the time doesn't require anything even approaching AI. Consider a bot that simply repeats a set of ten sentences in a fixed order: if those sentences were chosen well enough, then some people might easily believe that they were having a real conversation. But I really don't think you'd argue that a bot that simply repeated a set of ten sentences in a fixed order displays any sort of intelligence, no matter how many unsuspecting people happen, by random chance, to feed it lines that cause its responses to look relevant.
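
    The fixed-script bot described above takes only a few lines to write; a hypothetical sketch (the class name and canned lines are made up, not taken from any real scam bot):

```python
from itertools import cycle

class FixedScriptBot:
    """Replies from a fixed, looping script, ignoring all input entirely."""

    def __init__(self, script):
        self._replies = cycle(script)

    def reply(self, _incoming: str) -> str:
        # The incoming message is deliberately never examined.
        return next(self._replies)

bot = FixedScriptBot([
    "Hi there! What's your name?",
    "That's a lovely name. Tell me about yourself.",
    "Wow, how interesting! Go on...",
])
```

    Whether the output looks like a conversation depends entirely on what the unsuspecting human happens to type between the canned lines; the bot itself exhibits no intelligence at all.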
  • by wikinerd (809585) on Sunday December 09, 2007 @01:13PM (#21632015) Journal

    successfully emulates an easily-laid woman.

    That's not a good test for AI. Research shows that men go crazy while talking with beautiful women. So, sexuality temporarily shuts down their intelligence. You can't test for AI while employing sexuality.

  • by EmbeddedJanitor (597831) on Sunday December 09, 2007 @01:24PM (#21632115)
    If you've studied Turing's work much, you'd probably conclude that he never seriously proposed the Turing Test as a practical way to test machine intelligence.

    Turing was a mathematician, and that came through in all his thinking, including his devising of the Turing Test. When faced with a question like "can a machine ever be intelligent?", it is virtually impossible to answer directly because, firstly, how do you define intelligence, and secondly, how do you measure it?

    Mathematicians **hate** imprecise questions because they cannot be proven or answered satisfactorily.

    When faced with this problem, Turing used the well-loved mathematical method of reductio ad absurdum: if you cannot tell the difference between a human and a machine, then it is absurd to claim that the human is intelligent but the machine is not. That neatly sidesteps all the impossible-to-answer questions, like the precise definition of intelligence. A typical mathematician's wriggle-out move.

    Is the Turing Test practical? Well, perhaps not. Machine intelligence (whatever that means) can be useful without the machine holding a conversation with you. Annoyingly, the test has soaked up a lot of effort, with people building talkbots instead of getting on with more practical aspects of machine intelligence.

  • by Anonymous Coward on Sunday December 09, 2007 @01:53PM (#21632335)
    What makes you think all the people who talked to your bot were humans?

    It works both ways ;)
  • Re:Correction. (Score:5, Insightful)

    by stranger_to_himself (1132241) on Sunday December 09, 2007 @05:58PM (#21634535) Journal

    I would hate to live in your world. I value compassion as highly as intelligence, and letting somebody suffer and die while you are able to help, on some aloof philosophical or eugenic basis, is simply inhumane. Is there anybody in your life whom you care about? Would you try to help them if they were suffering? Would you care for them less if they'd made a mistake and their suffering was in some way self-inflicted?

    You should value people based on what they are or what they can give, not on what they are not or what they lack.
