
Russian Chatbot Passes Turing Test (Sort of) 236

Posted by CmdrTaco
from the would-you-like-to-play-a-game dept.
CurtMonash writes "According to Ina Fried, a chatbot is making the rounds that successfully emulates an easily-laid woman. As such, it dupes lonely Russian males into divulging personal and financial details at a rate of one every three minutes. All jokes aside — and a lot of them come quickly to mind — that sure sounds like the Turing Test to me. Of course, there are caveats. Reports of scary internet security threats are commonly overblown. There are some pretty obvious ways the chatbot could be designed to lessen its AI challenge by seeking to direct the conversation. And finally, while we are told the bot has fooled a few victims, we don't know its overall success rate at fooling the involuntary Turing "judges.""
This discussion has been archived. No new comments can be posted.

  • by file-exists-p (681756) on Sunday December 09, 2007 @11:58AM (#21631389)

    A decade ago I wrote a Perl script for sirc that had 40 sentences and would reply with one picked uniformly at random every time it got a private message. It took into account neither the message it had just received (a la Eliza) nor what it had said before. It did not even wait before replying, so it would type the response within a tenth of a second.

    It happened several times that people would talk with it for more than an hour. If I remember correctly, the record was 1h45min...

    For the Turing test, the tester has a strong prior that the testee may be a computer. That is not the case here: the prior for this is so low that it's practically impossible for a layman to come up with that explanation. Instead, people attribute inconsistencies in their interlocutor's speech to technical problems (messages sent to the wrong person, lag, the complexity of the program the other person uses, etc.)
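
    The whole script amounts to a few lines of logic. A minimal sketch in Python (the original was Perl for sirc; the reply sentences here are invented stand-ins for the 40 it actually had):

    ```python
    import random

    # Invented stand-ins for the original's 40 canned sentences.
    REPLIES = [
        "haha yeah",
        "really? tell me more",
        "i was just thinking the same thing",
        "ok ok",
    ]

    def reply(incoming: str) -> str:
        """Ignore the incoming message entirely and answer with a
        sentence drawn uniformly at random, as the script above did."""
        return random.choice(REPLIES)
    ```

    Note that `incoming` is never read: the bot carries no state at all, which is exactly why hour-long conversations with it are so striking.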

  • Actually the Communist Party is pretty much the only party in Parliament (not counting the ones who aren't in Parliament, like Yabloko and Kasparov's party) that opposes Putin.
  • Re:Eliza says- (Score:2, Interesting)

    by maxwell demon (590494) on Sunday December 09, 2007 @12:54PM (#21631853) Journal
    That reminds me of a joke I read quite some time ago (it was actually a comic, but the text alone carries the funny part). It's a conversation:

    "Nice weather."
    "Yes, nice weather."
    "It might rain this afternoon."
    "Rain? You think so?"
    "You're elizing again!"
  • by SatanicPuppy (611928) * <Satanicpuppy&gmail,com> on Sunday December 09, 2007 @01:07PM (#21631957) Journal
    Yeah. This really isn't all that Turing-worthy due to the targeting... This is a group of people who really want the person on the other end to be attractive, female, horny, and, above all else, real. Even if it's not perfect, they'll be more willing to believe.

    On top of that, there is the whole chat medium. Anyone who has done a lot of IM/IRC/whatever knows that it's not uncommon to type the wrong thing into the wrong window/channel, so the occasional out-of-nowhere sentence that would never pass in a one-on-one conversation will pass there, because the expected signal-to-noise ratio is lower.

    Still, I'd be interested to see the code, and see how well it deals with non sequiturs.
  • by nukeade (583009) <<moc.liamtoh> <ta> <11tnepres>> on Sunday December 09, 2007 @02:03PM (#21632419) Homepage
    Several years ago, when I was a bored college student, my roommate and I thought it would be funny to write a convincing chat bot and see what misadventures it had. The AI was extremely simple. It kept a database of everything people had said to it, and considered those things 'related' to the last three things said in the conversation. By searching the database for key words from the last three things said in the current conversation, it would pick the response that another human had judged most relevant in past conversations. We seeded him with some of our own conversations.

    To plant him, we simply made a free page on some blog with a few personal details, put his IM screen name up there, and waited to see what happened.

    We eventually shut him down because people were becoming way too personal with him. One girl had an ongoing series of conversations with him about how she had recently been raped. His mouth became rather foul when my roommate decided to have him initiate conversations with screen names linked to hate groups (he had a whitelist of known 'admin' screen names who could order him to say something specific to a specific screen name). Another guy just wanted to convert him to evangelical Christianity. It was way too simple to write a bot that made many, many people think it was real. Some people did figure it out, so if someone ever brought up 'bot' in a conversation they were immediately added to a blacklist so as not to corrupt the conversation database.

    The biggest giveaways? "u type too fast" (we eventually added a delay to solve that issue) and "u only type something when I do" (by this time I had already decided it was time to shut down the bot for good). It was a lot of fun until he started hurting people... if I ever resurrect him he will have a pre-set kill limit. :)

    ~Ben
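
    The retrieval scheme described above, keeping a memory of utterances keyed to the three preceding lines and answering by keyword overlap, can be sketched like this (the class and method names are invented, and the real bot's delays, whitelist, and blacklist are omitted):

    ```python
    from collections import deque

    class RetrievalBot:
        """Sketch of the scheme described above: every utterance is stored
        as 'related' to the last three things said, and replies are chosen
        by key-word overlap with the recent context."""

        def __init__(self):
            self.memory = []                 # (context_words, utterance) pairs
            self.recent = deque(maxlen=3)    # last three things said

        def learn(self, utterance: str):
            """Seed the database, e.g. from logged human conversations."""
            context = {w for line in self.recent for w in line.lower().split()}
            self.memory.append((context, utterance))
            self.recent.append(utterance)

        def respond(self, message: str) -> str:
            self.recent.append(message)
            keys = {w for line in self.recent for w in line.lower().split()}
            # Pick the stored utterance whose context shares the most key words.
            context, reply = max(self.memory, key=lambda m: len(m[0] & keys))
            self.recent.append(reply)
            return reply
    ```

    The giveaways the poster lists fall straight out of this design: the bot answers instantly (no delay in `respond`) and only ever speaks when spoken to, since nothing here initiates a conversation.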
  • by Anonymous Coward on Sunday December 09, 2007 @04:12PM (#21633409)
    "That isn't the same as passing the Turing test, which requires that the examiner converse with a human and a computer at the same time, be fully conscious of this fact, and deliberately try to determine which is which."

    I take it you've never actually been in a chat room. What you just described (and what Turing originally devised) is exactly what happens in every chat room I've ever been in. The room is filled with 40-50 people: some men, some women, and some bots. The goal for the man is to ignore the other men and block the bots. Going into it, every man KNOWS these are the rules of the game: bots posing as women.

    I haven't been to a chat room in a few years now but back in 2003 or 2004, I was fooled by a chat bot for a good 15 minutes or so. I was initially suspicious, then less wary, then convinced I was talking to a human before something gave it away. That's when I realized that chat bots already HAD passed the Turing test, exactly as designed.

    Think of their value: not just in scamming people out of credit cards, but in all the consumer products that will one day have a human-like interface. Chat bots are that proto-AI interface.
  • by Pollardito (781263) on Sunday December 09, 2007 @05:16PM (#21634107)
    That reminds me of a funny story. I wrote a bot several years ago that would send tells to people at random from a set of messages. If anyone replied, it'd send another message... in fact, it'd keep doing this until they stopped replying. The sentences were really nonsense, so most people ignored them from the start, and even those who didn't quickly got the idea when it cycled back to the same message more than once. One time, though, I remember a guy replying to this bot as if it were a real person for almost 2 hours!
  • Intention (Score:4, Interesting)

    by mutube (981006) on Sunday December 09, 2007 @06:18PM (#21634739) Homepage
    I think what this story (and your post) shows is that if people come to a conversation with a particular intention, they are more open to being hoaxed. For example: a girl who wants to talk to someone about what's happened to her, a Christian who wants to convert people, or a Russian who wants to get laid. In each of those cases, people are probably too focused on getting what they want to notice inconsistencies. In other words: distracted people are dumber.
  • by brassman (112558) on Sunday December 09, 2007 @08:21PM (#21635781) Homepage
    The screenshot in TFA pretty clearly indicates the bot is masquerading as a male and is targeting women rather than men.

    Somehow I find that idea even more disturbing.

