Russian Chatbot Passes Turing Test (Sort of)
CurtMonash writes "According to Ina Fried, a chatbot is making the rounds that successfully emulates an easily-laid woman. As such, it dupes lonely Russian males into divulging personal and financial details at a rate of one every three minutes. All jokes aside — and a lot of them come quickly to mind — that sure sounds like the Turing Test to me.
Of course, there are caveats. Reports of scary internet security threats are commonly overblown. There are some pretty obvious ways the chatbot could be designed to lessen its AI challenge by seeking to direct the conversation. And finally, while we are told the bot has fooled a few victims, we don't know its overall success rate at fooling the involuntary Turing "judges.""
And therein lies the fun part. (Score:2, Insightful)
The ever-rising bar on true AI (Score:4, Insightful)
Now we have a chatbot that can fool some people some of the time, so the bar on "true AI" has been raised: now computers must fool expert, suspicious Turing-test judges. This too will fall. Human intelligence is growing very slowly (IQ tests are actually re-normed every decade or so), but computer intelligence is growing much, much faster.
Re:WTF? This is not even a Turing test. (Score:5, Insightful)
The key part of the Turing test, to me, is that the judge must know they are engaged in the test. The best example of this is Eliza (read about it [wikipedia.org]). To someone critically examining it, it does not pass the Turing test. To someone expecting a therapist, most of its responses do make sense. The point is that if you're not actively trying to trip up the chatbot, it's not hard to be fooled.
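The Eliza trick the parent describes is just keyword matching plus pronoun reflection. A minimal sketch of that idea (the patterns and phrasings here are made up for illustration, not Weizenbaum's original script):

```python
import re

# Swap first/second person so the echoed fragment reads naturally.
REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are", "you": "I"}

# (pattern, response template) pairs; illustrative, not the real ELIZA script.
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

FALLBACK = "Please, go on."

def reflect(fragment: str) -> str:
    """Replace each word with its first/second-person counterpart, if any."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(line: str) -> str:
    """Return the first matching rule's reply, or a neutral therapist prompt."""
    for pattern, template in RULES:
        m = pattern.search(line)
        if m:
            return template.format(reflect(m.group(1)))
    return FALLBACK
```

So `respond("I need a break")` echoes back "Why do you need a break?" while anything unmatched gets the fallback prompt. Nothing here understands language; it only looks relevant to a cooperative user, which is exactly the parent's point.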
Re:The ever-rising bar on true AI (Score:5, Insightful)
Indeed, it's easy to show that fooling some people some of the time doesn't require anything even approaching AI. Consider a bot that simply repeats a set of ten sentences in a fixed order: if those sentences were chosen well enough, then some people might easily believe that they were having a real conversation. But I really don't think you'd argue that a bot that simply repeated a set of ten sentences in a fixed order displays any sort of intelligence, no matter how many unsuspecting people happen, by random chance, to feed it lines that cause its responses to look relevant.
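The ten-canned-sentences bot described above fits in a few lines; the script below is invented for illustration, and the point is that the reply never depends on the input at all:

```python
from itertools import cycle

# A fixed script, delivered in order regardless of what the other party says.
SCRIPT = [
    "Hi there! How are you tonight?",
    "That's so interesting, tell me more.",
    "Haha, you're funny :)",
    "I feel like we really connect.",
    "So, where are you from?",
]

def make_bot():
    """Return a 'chatbot' that ignores its input and cycles through SCRIPT."""
    replies = cycle(SCRIPT)
    def bot(_incoming_message: str) -> str:
        # The incoming message is discarded; the next canned line is returned.
        return next(replies)
    return bot
```

A sufficiently chatty victim supplies all the apparent coherence themselves; any "relevance" in the bot's replies is pure chance.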
the problem is the user (Score:3, Insightful)
That's not a good test for AI. Research shows that men go crazy while talking with beautiful women. So, sexuality temporarily shuts down their intelligence. You can't test for AI while employing sexuality.
Turing probably was not serious about this test (Score:5, Insightful)
Turing was a mathematician, and that came through in all his thinking, including devising the Turing Test. When faced with questions like "can a machine ever be intelligent?", it is virtually impossible to answer directly because, firstly, how do you define intelligence; and secondly, how do you measure it?
Mathematicians **hate** imprecise questions because they cannot be proven or answered satisfactorily.
When faced with this problem, Turing used the well-loved mathematical method of reductio ad absurdum: if you cannot tell the difference between a human and a machine, then it is absurd to claim the human is intelligent but the machine is not. That neatly sidesteps all the impossible-to-answer questions, like the precise definition of intelligence. A typical mathematician's wriggle-out move.
Is the Turing Test practical? Well, perhaps not. Machine intelligence (whatever that means) can be useful without the machine holding a conversation with you. Annoyingly, the test has soaked up a lot of effort, with people building talkbots instead of getting on with more practical aspects of machine intelligence.
Re:This test is very easy (Score:2, Insightful)
It works both ways
Re:Correction. (Score:5, Insightful)
I would hate to live in your world. I value compassion as highly as intelligence, and letting somebody suffer and die while you are able to help, on some aloof philosophical or eugenic basis, is simply inhumane. Is there anybody in your life that you care about? Would you try to help them if they were suffering? Would you care for them less if they'd made a mistake and their suffering was in some way self-inflicted?
You should value people based on what they are or what they can give, not on what they are not or what they lack.