Alan Turing's Prediction for the Year 2000
Chernicky writes "In 1950, Alan Turing, the father of computer science and (arguably) artificial intelligence, made a prediction about the year 2000. Turing said that in about fifty years, the answers of a computer would be indistinguishable from those of human beings, when asked questions by a human interrogator. With the year 2000 upon us, Dartmouth College is offering a $100,000 prize to the first programmer that can pass the Turing Test. The deadline for submissions is October 30, 1999."
Not much time...must start...now... (Score:1)
Penrif
The First Programmer... (Score:1)
In all seriousness, have there been any previous projects that have passed Turing tests under conditions dictated by an independent third party?
-W-
I hope I can pass... (Score:4)
to the first programmer that can pass the Turing Test.
Ummm, I sure hope I can pass the Turing test. I know some of you out there might have problems passing it, but I'm pretty confident I can pass.
Of course if he meant 'first program' it might make more sense.
Already? (Score:1)
-----------
"You can't shake the Devil's hand and say you're only kidding."
How long of a turing test ? (Score:1)
Overall I would say that this is a pretty safe bet on their part. A little publicity gained with promises that almost certainly can't be cashed in on.
Your Squire
Squireson
Re:I hope I can pass... (Score:1)
Why am I thinking "Shooting Fish"?
Sorry can't be done. Yet. (Score:1)
I (rather arrogantly) believe that today's computers may be able to fool one person but cannot fool multiple people.
-- Moondog
The Turing Test is no longer a goal of AI (Score:4)
The problem with the Turing Test is that it tries to make a computer human and that's not really what AI is all about - it's more about trying to solve problems using various techniques in order to make programs useful. (Maybe making a computer human is not all that useful.)
The problem is that the program only needs to pass 5 minutes worth of conversation. That's a pretty narrow goal. Technically, it's not really artificial intelligence at this point - it's just a ruse (however, it's still extremely difficult to program natural language capabilities and have "common sense" -- two goals that are themselves not bad ones to do research in).
Douglas R. Hofstadter wrote an interesting article about this - he had a conversation with a program named Nicolai (I think). It was quite amusing - the program spits out some very interesting answers.
Anyway, no one has yet succeeded at this, and if you feel you can get a program to imitate a human for 5 minutes, go right ahead. You'll earn that $100K.
Woz
I think slashdot ate my comment... (Score:2)
Is www.forum2000.org a fake? Or is it an honest-to-deity AI capable of answering questions in a lifelike manner?
Dan
Any good examples available on the Web? (Score:1)
demonstrate this kind of interactivity? Something
like Eliza, but better (I would assume)?
IRC and the humanity factor (Score:1)
Have the applicants join a channel that's used a bit, say #hotjaurez or #3l3tn3ss, and see how it fares in conversation. Then have them, with nick changes, move over to a more constrained channel, like #mindvox or #youngpoetsinheat.
The trick would be not only to pick out the bots, but to pick out the humans as well.
I'm betting the bots would have a better chance of being dubbed Human than many of the genetic slush bags.
just my toonie
Turing test? Nahh... (Score:3)
It was only after I myself had begun setting up a BBS that I came across this BBS door program. I don't remember what it was called, but it pretended to be a chat program. Basically, it responded to specified keywords with a random sentence from a huge flatfile database, and even pretended to have typooo^H^Hs from time to time.
I then realised that I'd been had!
Some sysop must have been laughing his ass off at this young kid who went by the handle "Orion", chatting away with a very crude AI and being suckered into it the whole way.
I look back on those days and wonder how I missed it. But it just goes to show you that, as much as you might be fooled by a computer, we've got a long way to go before we reach anything approximating independent thought. Personally I don't think it'll ever happen - but it might be neat to be proven wrong.
what's the point? (Score:3)
impressed with anyone who could pass the turing test.
however, how much further does this really get us than building a computer which can beat kasparov in a (relatively) high speed chess match. chess seemed like a big thing to teach a computer once, but it has been relegated to the relatively trivial now.
it seems to me that a program which passes the turing test may well fall into the same category. (i am assuming here that the program merely appears to be having a conversation- that it is not a language _understanding_ system.) it would simply become something that people would set loose in chatrooms, or attach to old unwanted e-mail accounts, and watch the fun.
what i'd like to see is someone tackling a truly significant problem. like programming a computer to be able to vacuum your house.
does the goal influence the course?? (Score:1)
similarly, how do RSA's challenges influence the encryption community?
This is just what I've been waiting for! (Score:1)
Hello, I will be your doctor today. (Score:2)
(how\'s it going) (how\'s it going eh) (how goes it)
(whats up) (whats new) (what\'s up) (what\'s new)
(howre you) (how\'re you) (how\'s everything)
(how is everything) (how do you do)
(how\'s it hanging) (que pasa)
(how are you doing) (what do you say)))
(setq qlist
'((what do you think \?)
(i\'ll ask the questions\, if you don\'t mind!)
(i could ask the same thing myself \.)
(($ please) allow me to do the questioning \.)
(i have asked myself that question many times \.)
(($ please) try to answer that question yourself \.)))
(setq foullst
'((($ please) watch your tongue!)
(($ please) avoid such unwholesome thoughts \.)
(($ please) get your mind out of the gutter \.)
(such lewdness is not appreciated \.)))
(setq deathlst
'((this is not a healthy way of thinking \.)
(($ bother) you\, too\, may die someday \?)
(i am worried by your obsession with this topic!)
(did you watch a lot of crime and violence on television as a child \?))
)
(setq sexlst
'((($ areyou) ($ afraidof) sex \?)
(($ describe)($ something) about your sexual history \.)
(($ please)($ describe) your sex life \.\.\.)
(($ describe) your ($ feelings-about) your sexual partner \.)
(($ describe) your most ($ random-adjective) sexual experience \.)
(($ areyou) satisfied with (// lover) \.\.\. \?)))
(setq stallmanlst '(
(($ describe) your ($ feelings-about) him \.)
(($ areyou) a friend of Stallman \?)
(($ bother) Stallman is ($ random-adjective) \?)
(($ ibelieve) you are ($ afraidof) him \.)))
NRO Neural Nets (Score:1)
OK! I've got it! (Score:1)
==================================
neophase
Blade Runner (semi offtopic, I know) (Score:2)
Place the subject in front of an interrogator and try to provoke an emotional response, indicating humanity. Sufficiently advanced replicants are good at fooling the test ("Rachel took nearly 50 questions") but to date all replicants are distinguishable from humans.
Seems pretty allegorical to me. What was the test called in the film? Who was that doctor / scientist? Would he have been eligible for the reward?
Man I want to see that movie again now...
Other thoughts, since I'm on a tangent: how about a program that can seem more real than Zippy the Pinhead? (Shouldn't be too difficult.) Or one that is less boneheaded than the average Slashdot AC poster? (Shouldn't be too difficult.) Sounds like it's time to get coding...
Hmmm... (Score:2)
Re:Blade Runner (semi offtopic, I know) (Score:1)
Re:I think slashdot ate my comment... (Score:2)
Gawd.. this thing is friggin funny!
Just look at the list of "recent questions" or whatever they are..
--
I think we're close (Score:1)
--
Re:Turing test? Nahh... (Score:1)
However, with the Turing Test, the judges know some of the entries are not humans and thus ask questions that would indeed flush out a computer based on the responses. They are looking for the culprit because they are told one is there.
Ignorance really can be bliss, can't it?
Woz
Another thought... (Score:2)
This raises yet another issue with the test -- a human can very easily give responses like a computer, thus fooling the judges. Is that fair? Maybe some humans are like computers with their answers.
In fact, one time a participant was talking about Shakespeare, and was a complete expert on the subject. The human judge was convinced he was a computer because his answers were so exact!
Yet another problem....
Woz
The answer is obvious... (Score:2)
Re:The Turing Test is no longer a goal of AI (Score:1)
Re:The Turing Test is no longer a goal of AI (Score:1)
Computer vision is a decent test, but it has to be under such tight constraints. Other senses aren't worth the time, so we are left with only a few options.
Intelligence: Problem Solving
Have the computer tackle problems that are slight deviations from known ones with known solutions.
Creativity: Problem Solving with a twist
Have the machine solve a problem, and display a logical progression of the solution. The path must be more than just a search in all possible answer space.
Abstract: Problem Solving with no discrete answers
Have the computer tackle the 5 people in a 4 person life boat problem... who stays, who goes.
Gullibility: Give the computer the ability to believe some of what it is told, without question, but also have it try to question and investigate false claims.
Reverse Turing test:
The computer takes the place of the moderator and tries to decide who it is talking to. Some factors would need to be ruled out, such as spelling and punctuation.
Any other ideas for good AI benchmarks?
The benchmarks have to be there to encourage funding and some research, so some test needs to be decided as a standard.
Re:Blade Runner (semi offtopic, I know) (Score:1)
Re:Blade Runner (semi offtopic, I know) (Score:1)
The Turing Test -- real experience. (Score:5)
I must say that I was rather embarrassed at being thought a 'bot, and immediately denied it -- at which point the other player said, "OC: Well, it is really believable -- see how it even denied it was a 'bot? Whoever wrote it was good."
Loebner Prize (Score:3)
From what I read, most people working in AI don't treat them as something worthwhile. It's fairly obvious that programs won't be able to pass the turing test for some time (decades, maybe centuries), and the results of such tests only make it less likely that people working on valid AI projects will be taken seriously.
The Loebner Prize has its own homepage [loebner.net]. Check out the transcripts of the conversations. The most 'clever' programs simply look for keywords, some insist on asking all the questions, some are 'whimsical' and use metaphors while constantly switching topics; none show any comprehension whatsoever.
Re:The Turing Test is no longer a goal of AI (Score:1)
Creativity: Problem Solving with a twist
These are very good goals, and sometimes very difficult to achieve (for instance, the travelling salesman problem).
Abstract: Problem Solving with no discrete answers
This is an example of weighing the pros and the cons, but what are the criteria? Do you look deeper than the fact that they are all people and all (theoretically) deserve to live?
Reverse Turing test
Neat idea. Still need natural language capabilities and ability to adapt as well as creativity. Should the computer be told what kind of questions to ask?
Some of what you are suggesting is good because it tackles "real-world"/practical problems. Solving these first would help lead to the larger goal of a more "intelligent" program or environment.
I would love to have a nice AI helper program for debugging, myself
Woz
Re:what's the point? (Score:1)
They already have made robots that sweep/mop/vacuum/whatever floors as well as people can. The problem is that it's still cheaper to pay a person to do it.
Having a robo-slave would be cool, though. You could have it do yardwork, cook dinner, clean the house, all sorts of stuff. But I think that the advent of robots that can do "people work" will raise all sorts of ethical issues -- for instance, if robots can do manual labor as well as people can, for longer hours and for less money, then think about all the people that could be put out of work.
We won't have to worry about that for a while, though.
Take care,
Steve
I Nominate Clippy! (Score:1)
Re:Already? (Score:2)
People give Eliza way too much credit. I mean, it's a fun little program and was mostly cool for a while, but it's not even close to being able to pass the Turing Test. Since Eliza replies in questions to avoid contradicting herself, the only way she could pass a Turing Test is if you were talking to a New Yorker.
-
User: Are you smart?
Pesci: What'dya mean, "Are you smart?" You think I ain't smart? I'm just an idiot? Huh? Is that what I am? An idiot?
User: No, that's not what I meant...
Pesci: Oh, so now you're calling me a liar? I'm stupid and a liar?
User: Uhh, are you okay?
Pesci: Yeah, I'm fine. Why? You think there's something wrong with me? You think I'm crazy? Huh? I'm crazy? I'll show you fucking crazy!
User: I think this program is bugged.
Pesci: Oh, so now I'm broken? Huh? You think I wasn't programmed right? You think I'm supposed to sit around and take your shit? Huh? I'm supposed to be nice to you? Huh? I'm just a program? Is that it? Just a program to amuse you? Is that what I am? Is it?
User: No, I didn't mean it like that...no, please, don't...nooooooooo!
*BLAM*BLAM*BLAM*
Programmer: Oh, shit, Joey, you didn't need to do that...Oh, shit. I knew there was something wrong when the beta testers disappeared...oh, shit, oh, oh, oh, shit...
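Eliza's dodge mentioned above, answering with questions instead of committing to an answer, fits in a few lines. This is a toy illustration, not the real Eliza (the word list and response template are made up for the sketch):

```python
# Minimal sketch of the Eliza trick: reflect the user's statement
# (swapping first- and second-person words) and bounce it back as a question.
REFLECT = {"i": "you", "me": "you", "my": "your", "am": "are",
           "you": "I", "your": "my", "are": "am"}

def eliza(statement):
    words = statement.strip(".!?").lower().split()
    reflected = " ".join(REFLECT.get(w, w) for w in words)
    return "Why do you say " + reflected + "?"

print(eliza("I am smart"))  # Why do you say you are smart?
```

Since it never asserts anything, it can never be caught contradicting itself, which is exactly why it also never demonstrates any comprehension.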
Re:The Turing Test is no longer a goal of AI (Score:2)
Actually it does not even encourage us to make a computer human. It encourages us to make a computer program that produces sentences that sound human. Compare the size of the cortical areas devoted to speech processing to the total area of the brain. This is my estimate of the Turing test's relevance.
The general problem with the Turing test, as with most of the rest of the classical AI genre, is that it assumes that all relevant information processing should be symbolic. More likely only a small fraction of the information processing ought to be symbolic, the rest sub-symbolic (ANNs, fuzzy logic etc).
Look at ants, rabbits, dogs etc -- They cannot do symbolic information processing (cannot speak) but the feat they accomplish is still pretty impressive!
On October 31, a program passes... (Score:1)
Re:The Turing Test is no longer a goal of AI (Score:1)
The benchmarks have to be there to encourage funding and some research, so some test needs to be decided as a standard.
How about RoboCup [robocup.org]?
Computer vision is a decent test, but it has to be under such tight constraints. Other senses aren't worth the time, so we are left with only a few options.
This remark leaves me clueless. What's wrong with the vision problem? If you have a computer system that can see, wouldn't that be useful?
What are those constraints? If you are referring to the great need for processing power, I'd say that's more of a challenge (if the algorithms require too much processing power, then maybe they need rethinking)!
Yes, sort of (Score:1)
Re:what's the point? (Score:1)
You're not fooling anyone you know (Score:4)
Re:Turing test? Nahh... (Score:1)
Re:Turing test? Nahh... (Score:2)
A couple of weeks after I fell for the door, I found a copy of it and proceeded to check it out. It's actually quite interesting.
First of all, typos were simply repeated characters. This worked well in practice once or twice, but it gets obvious after a couple more times. A more convincing typo mechanism would be to sometimes type the wrong letter (e.g. lwtter) or double-strike (lewtter) keys directly adjacent to the intended key.
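The adjacent-key mechanism described above can be sketched as follows (the keyboard rows and the 5% typo rate are assumptions for the illustration):

```python
import random

# Sketch: occasionally substitute, or double-strike, a key adjacent
# on the same QWERTY row, instead of just repeating characters.
QWERTY_ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]

def neighbors(ch):
    """Return the characters left and right of ch on its keyboard row."""
    for row in QWERTY_ROWS:
        i = row.find(ch)
        if i != -1:
            return row[max(0, i - 1):i] + row[i + 1:i + 2]
    return ""

def fumble(text, rate=0.05, rng=random):
    out = []
    for ch in text:
        near = neighbors(ch)
        if near and rng.random() < rate:
            wrong = rng.choice(near)
            # half the time substitute, half the time double-strike
            out.append(wrong if rng.random() < 0.5 else ch + wrong)
        else:
            out.append(ch)
    return "".join(out)

print(fumble("letter", rate=1.0, rng=random.Random(1)))
```

A real bot would presumably also type the corrections (the ^H^H backspaces) with a human-looking delay.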
The way the actual AI engine worked was that it parsed the text the user entered and compared it to a list of keywords in a data file. When the first match was found, one of the responses was selected and used. Used responses seemed to be logged, and if the computer ran out of responses for a particular keyword, the door aborted. The same would happen if the user entered a blank line twice. There was a "*" keyword at the end for words not covered, with responses like "Would you repeat what you said?" and such.
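The engine as described boils down to a few lines. This is a sketch from the description above, not the door's actual code (the class name and sample script are made up):

```python
# Sketch of the door's engine: first keyword match wins, responses for a
# keyword are consumed without repetition, and a "*" entry catches the rest.
class ChatDoor:
    def __init__(self, script):
        # script: keyword -> ordered list of canned responses
        self.script = {k: list(v) for k, v in script.items()}

    def reply(self, line):
        words = line.lower().split()
        for keyword, responses in self.script.items():
            if keyword == "*" or keyword in words:
                if responses:
                    return responses.pop(0)  # consume ("log") the response
                return None                  # out of responses: abort
        return None

door = ChatDoor({
    "hello": ["Hi there! What brings you to the board?"],
    "*": ["Would you repeat what you said?", "Hmm, interesting. Go on."],
})
print(door.reply("Hello sysop"))  # keyword hit
print(door.reply("blah blah"))    # falls through to "*"
```

The no-repeat bookkeeping is the clever part: canned responses only give the game away when you see the same one twice.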
Overall, it didn't work too badly. During Christmas season one year, the author hacked the door to create a "Chat with Santa" door. One of the highlights of that one was when you said 'shit', the AI would say "If shit is what you want for Christmas, shit is what you'll get."
:)
Ahh, the memories......
-Ed
Re:I think slashdot ate my comment... (Score:1)
But it's a funny site. I recommend it!
Re:Turing test? Nahh... (Score:1)
The nice thing about it was that it had a certain
context in which it operated -- the reason a sysop
would run something like that was from being tired
of the same exact questions 60 times a day -- so it was fairly easy to seed with keywords and tailor the responses a bit to add to the illusion.
Of course the limited context makes it a bit more
like a magic trick (think "card force") than AI.
I'm afraid in the Turing test, the person with
the computer program is not the one choosing the questions.
M=(current_state, current_symbol, new_state, new_symbol, left/right)
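That 5-tuple is exactly a Turing machine transition rule, and a minimal simulator for it fits in a few lines (the bit-flipping machine at the end is a made-up example, not anything from the thread):

```python
# Minimal Turing machine sketch: transitions are keyed by
# (current_state, current_symbol) and yield (new_state, new_symbol, move).
def run_tm(transitions, tape, state="q0", blank="_", max_steps=1000):
    cells = dict(enumerate(tape))  # sparse tape
    head = 0
    for _ in range(max_steps):
        symbol = cells.get(head, blank)
        if (state, symbol) not in transitions:  # no rule: halt
            break
        state, new_symbol, move = transitions[(state, symbol)]
        cells[head] = new_symbol
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# Example machine: flip every bit, moving right until the blank.
flip = {
    ("q0", "0"): ("q0", "1", "R"),
    ("q0", "1"): ("q0", "0", "R"),
}
print(run_tm(flip, "1011"))  # -> 0100
```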
Re:what's the point? (Score:1)
(President, Skynet Historical Recreationist Society)
Re:I think slashdot ate my comment... (Score:1)
I've never seen something more funny in my life than this [forum2000.org]. I literally fell out of my chair.
Re:Blade Runner (semi offtopic, I know) (Score:1)
Applicable blade runner quotes [geocities.com]
Computer conversations can be funny (Score:1)
i win. :) (Score:1)
#include <stdio.h>

int main(void)
{
    char question[2048];
    scanf("%2047s", question);
    printf("I honestly don't know...\n");
    return 0;
}
i'll take cash, please.
Re:I think slashdot ate my comment... (Score:1)
All I can say is, if it is AI, AI are better comedians than any human could possibly be. If human, the people behind this are the funniest people alive.
Even reading poll comments on slashdot isn't this funny.
Re:The Turing Test is no longer a goal of AI (Score:1)
What you describe is the viewpoint of those who have given up on developing a general intelligence because the problem has proved to be too difficult. "Solving problems using various techniques in order to make programs useful" isn't AI. It's standard algorithm development -- or at best AI in its most limited sense, where highly specific and limited intelligence is applied to specific problems.
AI more generally is about developing an artificial intelligence i.e. a computer that can convince us that it's conscious. It's a higher goal than a purely utilitarian one (but will no doubt prove to be far more useful in the long run). This means that the AI has to have human characteristics, and a test along the lines of the Turing test is the only way to measure this.
Re:The Turing Test is no longer a goal of AI (Score:1)
Re:what's the point? (Score:1)
Douglas Hofstadter argued that for a computer to be able to pass the turing test, it would need to have such a detailed understanding of the world in which the language is based that it would in fact be intelligent.
I passed the turing test already! (Score:2)
I did this with AOL Instant Messenger. I saved a bunch of my gaim conversations and then read over them and customized Eliza to make it sound as much like me as I could. Then some perl magic to make it work with Toc and I left for a party and then a movie.
I got back at around 2:30 in the morning and saw a friend talking to it. He had been chatting since 11:00 pm!!! He didn't even dimly suspect that it might be a computer, but he was getting pretty pissed off - it was saying pretty stupid stuff that usually didn't make sense, and it repeated itself every 5 or 10 minutes.
I laughed over that one for a looooooong time. It might not work anymore, tho... anyone know if AOL pulled the plug on Toc?
--
grappler
Re:The Turing Test is no longer a goal of AI (Score:1)
The problem is that computers don't see. They look, but they don't really see. They can be told that what's in front of them is Cassandra, but they aren't good at distinguishing what most everyday items in the world are just by sight.
Beauty is in the AI of the beholder... (Score:2)
One particular episode that comes to mind is The Saga of Roter Hutmann [nothingisreal.com] , available at http://www.nothingisreal.com/saga/. This is the story of a computer science major who spent hours every day talking with Julia, a Turing test program, even going so far as to ask it out on a date, before he finally voiced to me his suspicions that she was "not human". Ironically, he then proceeded to call her a poorly-written program... Julia used to be accessible via telnet (fuzine.mt.cs.cm.edu, user "julia") but, alas, is there no more...
Anyway, check out the Saga if you've got a few minutes to spare as people keep telling me it's the funniest thing they've read for a long time...
Regards,
Re:The Turing Test -- real experience. (Score:1)
We were discussing you --not me.
Re:I hope I can pass... (Score:1)
I participated in a turing test once, as the human on the other side of a terminal.
More than half of the judges failed me (i.e., thought I was an AI).
Half of me felt that that was so cool, but the other half started wondering if I've been playing with computers too much... The first half musta won, cause I haven't cut back one bit.
Re:The First Programmer... (Score:1)
No.
Tom
Re:Yes, sort of (Score:1)
Nothing has even come close to passing it yet.
Tom
I'm a programmer and I'm okay! (Score:1)
I'm a programmer. I can mostly keep up a conversation for five minutes. Where do I apply to get the money?
Re:The Turing Test is no longer a goal of AI (Score:1)
(I hadn't spotted the 5 minute rule, is it Turing's or a bolt-on?)
Tom
Re:Hmmm... (Score:1)
Tom
The beauty of the Turing test... (Score:2)
This isn't easy at all -- imagine asking a computer program to not only suggest a move in a chess game, but to write a poem about a subject of your choosing, compare and contrast two public figures, and so on.
I don't think any of this can be done without a *deep* understanding of language and human culture.
Of course there are *many* very useful things for AI to achieve which fall short of passing the Turing test -- in fact I think by the time we can pass the Turing test we'll probably have achieved everything else, except super-human intelligence, but perhaps that's just a matter of cranking up the clock speed.
Re:Blade Runner (semi offtopic, I know) (Score:1)
I WIN!!! (Score:1)
It's a thought experiment (Score:1)
The Turing test does not produce false positives. It states that IF a computer passes it THEN it is conscious. The implication is not reversible.
Despite many researchers devoting their time to actually building machines to pass a constrained version of the test, I would say that the main merit of it is exactly that it is very hard. Constrained Turing tests, such as computers that can talk about a certain subject, only produce clever programming gimmicks that do not scale.
However, the complexity that is inevitably needed to actually produce intelligent speech is the key feature here: from complex interactions of simple components intelligence emerges. Both Daniel Dennett and Douglas Hofstadter have written some insightful stuff about this. In "Consciousness Explained" Dennett describes a conversation between a Turing-test-proof computer and an interrogator: the computer tells the interrogator a joke and explains it. It also comments that it doesn't really like the joke because it is about racial prejudice. Reading this conversation makes you realize how immensely difficult this task is.
In short, I don't agree that passing the Turing Test is no longer a goal of AI. Any system that would pass the real-deal test should be considered intelligent. However most programs written today are just gimmicks, that can only pass very short or very constrained tests. We are very, very far away from passing the real test.
Re:I think slashdot ate my comment... (Score:1)
This has got to be real people.
Re:I think slashdot ate my comment... (Score:1)
Remember for it to be a proper turing test.. (Score:1)
Person A does the question asking to the computer program and Person B. It's Person A's responsibility to guess which one is the machine and which is the human. The computer program wins only if Person A recognises it as being human over the real human.
Re:The Turing Test is no longer a goal of AI (Score:3)
That's not the goal of AI, that's the goal of programming.
There was a time when expert systems were AI. Before that, anything DWIMish was AI. Now it's all just programming.
Why? Because the only definition that consistently fits AI is clever stuff we don't really know how to program yet.
Someone (I forget who, maybe Dave Touretzky?) said once, ``AI is like a magic trick. The first time you see it, you think wow, that's amazing! That must be magic! Then you want to know how it's done, and someone tells you, and you think, wow, that's really clever! Then later, after you understand it and it's no longer novel, when you see it again you think, well it's just sleight-of-hand, duh.''
Once something doesn't feel like magic any more, because it has become common-place, it is no longer AI. At that point, it's just programming.
(And some pointy-headed loser will probably even refer to it as a ``design pattern.'')
I like that Hofstadter dialogue you mentioned, but calling something that passes the Turing test a ruse kind of misses Hofstadter's point entirely. Just because you understand why a program passes the Turing test doesn't mean it didn't pass, and it doesn't mean you are excused from treating the program as a human. Because if you hypothetically understood all the chemical and electrical processes that made us work, it wouldn't excuse you from treating your fellow humans decently. ``Oh, it's just a meat-machine pretending to be clever'' isn't an excuse.
To bastardize the Arthur C. Clarke quote, any sufficiently understood magic is indistinguishable from technology.
Re:The Turing Test is no longer a goal of AI (Score:1)
The problem with the Turing Test is that it tries to make a computer human and that's not really what AI is all about - it's more about trying to solve problems using various techniques in order to make programs useful.
Is that what they tell you these days? How is that distinct from any other kind of programming? The real fact of the matter is that AI is based on an incorrect (but intuitively very attractive) idea that human minds and computers are similar. Since it's become increasingly obvious that this is not the case, the people who staff the AI departments of universities have backed off further and further from this. By the time I was there, they had realised the brain's "hardware" was radically different from a computer's and had backed off into a kind of dualism based on the C-T hypothesis, claiming that consciousness (as if we had any idea what that is) is a program that can run on widely differing hardware platforms. Obviously they've backed off even further now.
Re:I think slashdot ate my comment... (Score:2)
The Turing test doesn't prove intelligence (Score:2)
Although the Turing test is widely regarded as a tool to test intelligence, this statement is very questionable. The Turing test only tests how well a program can simulate a human.
For example: if a friend of yours can multiply two large numbers, you'll say he's smart; however no-one ever called a calculator 'smart'.
If a person memorizes all countries with their capitals, you'll also consider this person intelligent; computers are far better at memorizing things.
The point is that this program (trying to pass the Turing test) will not only have to fake intelligence, but also stupidity. If the interrogator asks it to factor 4553536663, it will have to lie and say it doesn't know or it will lose credibility. The question here is: is it favorable for a computer (or any other device) to deny its capabilities, just because our definition of intelligence might be a little off?
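To see why the program would have to feign ignorance: even naive trial division (a sketch, not an optimized factoring algorithm) handles a number of that size in a fraction of a second, which no human interrogatee could match:

```python
import math

# Naive trial division: good enough to factor a ~10-digit number instantly.
def factorize(n):
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)  # whatever remains is prime
    return factors

factors = factorize(4553536663)
print(factors, math.prod(factors) == 4553536663)
```

Answering correctly in milliseconds would be a dead giveaway, so a Turing-test entrant has to throw the question.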
Re:The Turing Test is no longer a goal of AI (Score:1)
No, exactly. And to make a computer really see, is what's known as the vision problem. My question (as you will see if you read it again) was: Why isn't this a suitable problem for AI?
Re:The Turing Test -- real experience. (Score:1)
You'd probably fall for it.
;-)
But to win the Loebner Prize... (Score:2)
They have this contest every year. Some years, the contestants do well, others, not so well. When taken as an abstract, i.e. "A computer that can always fool any human for any length of time into thinking that he is talking to another human", the Turing Test is valid -- but untestable. Once you put constraints on it ("these 10 people for 15 minutes...") it's no longer valid because each constraint is a weakness (maybe the people were stupid. Maybe if they just had time to ask another question they would have been able to tell the difference...)
I think something important that's forgotten frequently in dealing with natural language technology is that right now, in almost all cases, you don't want to have a conversation with your computer! You want to tell it to turn on the lights, and to ask it how much money you have in the bank, and to find cool new warez and MP3s, dood. The sentence structure for queries and commands is far different from (and far simpler than) trying to parse out conversations, in which context almost always becomes the downfall of comprehension.
Someday, yes, people will want to have a conversation with the machines that control their houses. I envision a machine that can tell by my sentence structure what mood I'm in, and put on some appropriate music, set the lights, and so on. But those things will all happen *after* we get the basics down, like differentiating "Lights on" from "Could you turn on the lights please, computer?" and having them both do the same thing. Nobody would call the former true natural language. It's when we can do the latter, and have "noise suppression" be so seamless that you can say what you mean in almost any conceivable way, that people will take it seriously as an interface.
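The "noise suppression" idea can be sketched as keyword spotting: strip the polite filler and map both terse and conversational phrasings to the same action. The command table and noise-word list here are made-up illustrations, not a real home-automation API:

```python
# Sketch: both "Lights on" and "Could you turn on the lights please,
# computer?" should reduce to the same command.
COMMANDS = {
    frozenset(["lights", "on"]): "LIGHTS_ON",
    frozenset(["lights", "off"]): "LIGHTS_OFF",
}
NOISE = {"could", "you", "turn", "the", "please", "computer", "would"}

def parse(utterance):
    words = {w.strip("?,.!").lower() for w in utterance.split()}
    content = words - NOISE  # suppress the conversational noise
    for required, action in COMMANDS.items():
        if required <= content:
            return action
    return None

print(parse("Lights on"))                                       # LIGHTS_ON
print(parse("Could you turn on the lights please, computer?"))  # LIGHTS_ON
```

Of course this is the "magic trick" version: it only works because the command vocabulary is tiny and fixed, which is exactly the gap between command parsing and real conversation.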
Re:I think we're close (Score:1)
(Bad influences and all that)
--
Re:The Turing Test is no longer a goal of AI (Score:1)
The thing is, AI really isn't that different from regular programming. It's just that, because it plays a game or solves puzzles instead of replacing characters, we call it "intelligence". AI is more about trying to find a way to program those little tricks and shortcuts that we take in our own mind.
What really constitutes the "intelligence" we are trying to make artificially? We have robots programmed to react to sensory input. So the fact that it reacts makes it...? Would an intelligent program be one that helped me find problems while I'm debugging?
Many AI problems relate to one common trait: How do I eliminate a lot of the useless paths I could follow to achieve this goal?
Damn, I have to go to class. Anyway, these are things I am thinking about....
Woz
turing test comic strip (Score:1)
turing test comic here [smallgrey.com]
Define Intelligence! (Score:2)
One claim is that the Turing test is the only way we've got to determine whether a computer program is intelligent or not. This is derived from the notion that we think we can recognize intelligence when we see it. But the test says nothing of common error probability (many humans have actually failed the test, being judged an AI), or of the capabilities of the judges.
If you read some of the transcripts from the past official Turing tests you'll be horrified at how quick some judges are to judge, and what simple questions they ask. Many of them appear to be bored with it all. This also applies to the human candidates. Some of these faults in the past can be blamed on poorly written programs that couldn't compete in any way. The past Turing tests actually had limited discussion topics, so that the programs could be programmed for a specific discussion topic. But think of a super-program (that is not super by today's standards) among those. It could actually pass in the tired and disappointed atmosphere four years ago. To quote from "Thomas Covenant the Unbeliever": Any test is just as good as the tester himself.
About Humans. In our arrogance we say that we are intelligent, and everything else is not. We are amazed and dazzled by pets that perform instant rescue operations in fires and drowning accidents. For how can animals be intelligent? We don't measure intelligence; we blatantly state that the things around us that aren't human are not intelligent, unconsciously applying our own version of the Turing test to everything around us. Of course, many of us do regard animals as intelligent to a lesser degree, but most humans think of intelligence as a binary state.
About Intelligence. But it can be measured. It's not an ON/OFF switch whose state is ours to decide. Heck, we don't really have a clear-cut definition of intelligence even today! Other than that faulty "it's not human-like" negativity test, and IQ tests, which only serve to separate "dumb" people from the rest.
And there isn't just One Kind of Intelligence (to Rule them all). You have social, technical, linguistic, mathematical, logical, motor, coordination and many, many more intelligences. There exists no test that tests them all, and no test is very accurate. Many people who are considered "dumb" really excel in other areas.
My definition of an intelligent system is an open-minded and positive test. Whether I can measure it or not, a system is intelligent to a certain degree if it contains information and processes this information within itself. It MAY receive input data, and it MAY emit output data, but that is only essential to my perspective of knowledge (not beliefs). The type of data-storage medium is not essential. Neither is the medium processing the data. What is essential is that information is being altered inside the system, and fed back in a feedback loop. Thus, the system has a way of "viewing itself" (the definition of a reflective system).
The internal processes can involve operations like copy, addition, inverse, etc. These would be atomic functions, while multiplication, subtraction, division and exchange would only be optimizations, since they can always be expressed as a set of atomic operations. But the data doesn't have to be numbers, and the atomic functions would be different for neural networks, images, symbols or even colours, for instance.
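The composition idea above can be sketched in a few lines: a minimal illustration (not taken from any actual AI system) of how multiplication reduces to the "atomic" operations of copy, addition and inverse.

```python
def multiply(a, b):
    """Multiplication expressed purely through the 'atomic' operations
    the comment above names: copy, addition, and inverse (negation)."""
    negative = b < 0
    if negative:
        b = -b            # inverse: reduce to the non-negative case
    result = 0            # copy: start from a fresh accumulator
    for _ in range(b):
        result = result + a   # repeated addition stands in for multiply
    return -result if negative else result
```

The point is only that the "optimized" operation adds no new expressive power, just speed.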
To complicate things even more, processes could run in parallel internally in the system. In real life, the neural networks in our brains all process in parallel to a certain degree. (I.e., I'm sure there are semi-synchronisation methods between parts of the brain, even though they might be complex or chaotic.)
In information theory, you can express any information as binary numbers (00101011). This simplifies things, but you'll need an unambiguous specification to convert the data both ways. Some types of data (e.g. linked lists, images, Chinese symbols) could perhaps be processed more effectively in their own form than as raw strings of binary data, simplified into complex structures of binary strings.
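A tiny sketch of that "unambiguous specification to convert data both ways", using the standard 8-bits-per-byte UTF-8 convention (the function names are my own):

```python
def to_bits(s):
    # Encode a text string as a binary string via its UTF-8 bytes.
    return ''.join(f'{byte:08b}' for byte in s.encode('utf-8'))

def from_bits(bits):
    # Decode: the fixed 8-bits-per-byte convention is what makes
    # the round trip unambiguous.
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode('utf-8')
```

Any information can be flattened this way, at the cost of losing the structure that made it easy to process.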
Input and output data in a feedback loop would permit the system to develop with its surroundings. To what extent is unknown. I.e., how much intelligence and knowledge would the two systems contribute to each other? Limitations would be imposed by information storage sizes, lack of atomic functions, dead-end loops, etc. Especially a lack of creativity (a random function) would be a dramatic limitation on the extent of intelligence and knowledge that could be learned and taught. Read-only areas in the system's data or process storage would be another severe limitation.
Systems lacking a trait that exists in another system could interface with that other system in a symbiosis, to use the resources found there. In the extreme case this is the basic principle of an artificial neural network, where everything is shared holographically in the structure of the neurons' connections (and each connection's weights).
On the difference between intelligence, knowledge and their respective levels: the usual pitfall is failing to distinguish intelligence from knowledge. I prefer to define level of knowledge as the amount of non-redundant information a system can internally access within a given time/number of cycles, and level of intelligence as the complexity of a given task that can be solved within a given time/number of cycles.
These levels are next to impossible to measure very accurately in real life, though of course there are imperfect methods. Just don't count on them for anything more than what they are. One type of method is to measure intelligence from the output of the system, in light of the input data or not. You can also test intelligence by scanning the actual code and data the system consists of, if you are able to "X-ray" it. You would have to be able to determine how intelligent the algorithm is. Of course, in real life, the observation will always affect the state of a running system. (Real life is ALWAYS on, darn it!)
These definitions leave one thing hanging if you're calculating in real time: processing cycles per time unit (e.g. 450 MHz). I don't consider a system processing large amounts of data (a supercomputer) to be more intelligent, by the definition above and by "common" reason. But you would have to multiply this speed by the intelligence level to get the total intelligence effect (i.e. some of what Turing and IQ tests are really testing).
I know this is all hard and difficult to think over. The definition is very impractical too. But it's a much better place to start than just saying "I don't see the intelligence in this" when you haven't even decided for yourself what intelligence really is! That simply shows a lot of ignorance. Besides, it's the modern way to go. Most AI programmers building neural networks live by it. (Sadly, I'm not.)
The definition doesn't deny anything physical the right to be intelligent. We human beings consist of trillions of living cells. They in turn consist of billions of atoms and molecules, which again turn out to consist of even smaller "particles" of a less physical nature (see the religion of modern science [not a book, it's for real!]).
I think this ALSO applies in cases where we are not able to detect the output data, or the non-human intelligence in it. Science is too eager to test for negativity and simplify things; thus many creative theories are crushed by the latest dogmas. (Scientific people think they know better than everybody else just because they use fancy language to make themselves misunderstood.)
Now if you've grasped the ideas I've expressed here, you'll know that the Turing test is a bogus test. Both in the computer lab as well as in real life.
- Steeltoe (really tired of hearing those people say Turing test is all we got)
PS: Gee, this edit-window is tiny!
Re:i win. :) (Score:1)
$2000 For Best of Year (Score:1)
First, Hugh Loebner is the one supplying the $100,000 for the Grand Prize, not Dartmouth. Dartmouth is just hosting the contest this year.
The link to Hugh's Loebner Prize page has already been posted in one of the other comments, but it should be added to the list of related links in the /. box.
Secondly, even if you don't win the $100,000 Grand Prize, Hugh presents $2000 every year to the "most human program". Entries are being accepted until Oct. 31st and there is no entrance fee. So go read the rules and try to win yourself a few thousand dollars.
I hope no one passes (Score:1)
Re:$2000 For Best of Year (Score:1)
That's a relief!
Re:voigt kamf (Score:2)
This is an important distinction. The replicants were already Turing compliant, but they were not human. Dick believed that empathy was the defining aspect of being human. Dick's replicants would have been able to pass any Turing test with ease. In fact, they passed the most difficult Turing test of all: they were able to live in human society, hold jobs, sing Opera, make love; but they weren't human.
How to beat the Turing Test (Score:1)
IE: "Hey, wussup, just wonderin if ya caught that NIN "pinion" vid on MTV yet? If not, check dat shit out cuz its PHAT!!"
I'd like to see what an intelligent program's response to that would be...
siskel (Score:1)
Re:I think slashdot ate my comment... (Score:1)
I dont think it can be done (Score:1)
Artificial intelligence's purpose isn't to mimic the way humans think and react, but to devise solutions to problems without needing specific programming for each problem: to learn and adapt to new situations rather than be constrained by a single original procedure. This test might measure that to an extent, since the computer would have to answer a question accurately no matter what form the question is presented in, and if it doesn't have the answer on hand, search the internet for it. However, the answer would still be very easily distinguishable from a human's answer.
There are a number of ways in which one could "trick" the computer, or "cheat" on the test, from either the interrogator's end or the end of the person being compared to the computer. One easy way to cheat would be to simply look for human error. A computer has no element of human error, except what is programmed into it. An instant giveaway might be a typo ("teh"). Another giveaway would be when the person does not know the answer to the question: any AI program built to answer questions accurately would quickly locate and produce the correct answer, while no person knows everything. Ask someone "what's the atomic mass of bromine" and they would be like "what the hell kind of question is that?", while a computer spits out the number. Which brings me to how the answerer could cheat: slang and dialect. People generally don't speak proper English, and it would be easy to distinguish between a computer program and certain dialects or slang used by people. Of course, you could attempt to tackle this and the typo problem by making the program purposefully make typos, or speak with slang ("gangsta_turing_AI": 'you best step off 'for I bust a cap in you a$$ motha*****'), but seriously, is this what AI is about? I didn't think so, and even if we went to such lengths, I don't believe it could be done 100% convincingly, at least not by the end of the month.
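The typo giveaway described above amounts to a trivial filter. A toy sketch (the word list is purely illustrative, not from any real detector):

```python
# Hypothetical "human error" check: flag a transcript that contains
# common human typos, which a naive program would never produce.
COMMON_TYPOS = {"teh", "adn", "recieve", "definately"}

def looks_human(transcript):
    # Strip trailing punctuation before checking each word.
    words = transcript.lower().split()
    return any(word.strip('.,!?') in COMMON_TYPOS for word in words)
```

Of course, as the comment notes, a program could defeat this by injecting typos on purpose.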
Bah! (Score:1)
It sounds really cool, but . . . (Score:1)
Turing Test--what robots can and cannot do (Score:1)
I'm not going to reiterate all of it, but Selmer Bringsjord [rpi.edu] has written and collected a lot of interesting information about robots, Turing Tests, and the state of the art.
Hehe (Score:2)
It says more about the friend. It took a LOT of customization to get Eliza to do that. Nothing spiffy, just a lot of words for it to watch for and a variety of responses. When I was done with it, it didn't sound anything like the original psychiatrist version.
This particular friend was actually an annoying guy from my CS class who got my AIM name from somebody and kept bothering me. Instead of putting him on my blocklist, I gave him the Turing Treatment (TM)
Now he's been bothering me more - he's fascinated by my customized Eliza and he thinks I am really on to something big in the field of AI. Sheesh...
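The "lots of words to watch for, variety of responses" customization described above boils down to keyword rules. A minimal Eliza-style sketch (the rules and names here are hypothetical, not the poster's actual script):

```python
import random
import re

# Each rule: a regex to watch for, and a pool of canned responses.
# {0} is filled with the first captured group, Eliza-style.
RULES = [
    (r'\bi am (.*)', ["Why do you say you are {0}?",
                      "How long have you been {0}?"]),
    (r'\bai\b',      ["Everyone thinks they're on to something big in AI."]),
    (r'\bclass\b',   ["CS class again? Tell me more."]),
]
DEFAULTS = ["Go on.", "Interesting. What else?", "Why do you say that?"]

def respond(line):
    text = line.lower()
    for pattern, responses in RULES:
        match = re.search(pattern, text)
        if match:
            return random.choice(responses).format(*match.groups())
    return random.choice(DEFAULTS)
```

Nothing spiffy, as the poster says: enough keyword rules and varied responses can keep a chatty acquaintance fooled indefinitely.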
--
grappler
no...not yet, im afraid (Score:1)