Human vs Computer Intelligence 421
DrLudicrous writes "The NYTimes is running an article regarding tests devised to differentiate between human and computer intelligence. One example is captchas, which can consist of a picture of words, angled and superimposed. A human will be able to read past the superposition, while a computer will not, and thus will fail the test. It also goes a bit into some of Turing's predictions of what computers would be like by the year 2000."
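For the curious, here's a rough sketch of how such an angled, superimposed word image might be generated with Pillow. This is not CMU's actual generator; the word list, font, and distortion parameters are invented for illustration.

# Rough sketch of a Gimpy-style image: a few words drawn at random positions
# and angles, then superimposed by keeping the darker pixel at each point.
# Word list, font, and parameters are placeholders, not CMU's real values.
import random
from PIL import Image, ImageChops, ImageDraw, ImageFont

WORDS = ["moon", "parma", "shell", "turtle", "bicycle"]          # placeholder list
FONT = ImageFont.truetype("DejaVuSans.ttf", 48)                  # any TTF font on your system

def gimpy_like(n_words=3, size=(400, 200)):
    canvas = Image.new("L", size, color=255)                     # white canvas
    answer = random.sample(WORDS, n_words)
    for word in answer:
        layer = Image.new("L", size, color=255)
        ImageDraw.Draw(layer).text(
            (random.randint(10, 150), random.randint(10, 120)), word,
            font=FONT, fill=0)
        layer = layer.rotate(random.uniform(-30, 30), fillcolor=255)  # fillcolor needs Pillow >= 5.2
        canvas = ImageChops.darker(canvas, layer)                 # superimpose the word
    return canvas, answer                                         # image plus the expected words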
Non-issue. (Score:3, Funny)
Anyone that has seen Star Trek:TNG knows that Data is a pretty smart fella.
Re:Non-issue. (Score:5, Funny)
Re:Non-issue. (Score:5, Funny)
He also got laid, unlike 97% of the slashdot population.
Re: The Chineese Room (Score:5, Interesting)
The question of whether computers use intelligence the same way humans do has long been addressed through the 'Chinese Room'.
The point of John Searle's Chinese Room is to ask whether 'understanding' is involved in the process of computation. If you can 'process' the symbols on the cards without understanding them (since you're using a rulebook and a programme to do it), then by putting yourself in the place of the computer you can ask yourself whether you required understanding to do it:
Minds, Brains, and Programs (the original Chinese Room): [bbsonline.org]
http://www.bbsonline.org/documents/a/00/00/04/8
The complementary question, 'Is the human brain a digital computer?', is answered by the same author:
Is the Human Brain a Digital Computer (John Searle): [soton.ac.uk]
http://www.ecs.soton.ac.uk/~harnad/Papers/Py104
Summary of the Argument:
1. On the standard textbook definition, computation is defined syntactically in terms of symbol manipulation.
2. But syntax and symbols are not defined in terms of physics. Though symbol tokens are always physical tokens, "symbol" and "same symbol" are not defined in terms of physical features. Syntax, in short, is not intrinsic to physics.
3. This has the consequence that computation is not discovered in the physics, it is assigned to it. Certain physical phenomena are assigned or used or programmed or interpreted syntactically. Syntax and symbols are observer relative.
4. It follows that you could not discover that the brain or anything else was intrinsically a digital computer, although you could assign a computational interpretation to it as you could to anything else. The point is not that the claim "The brain is a digital computer" is false. Rather it does not get up to the level of falsehood. It does not have a clear sense. You will have misunderstood my account if you think that I am arguing that it is simply false that the brain is a digital computer. The question "Is the brain a digital computer?" is as ill defined as the questions "Is it an abacus?", "Is it a book?", or "Is it a set of symbols?", "Is it a set of mathematical formulae?"
5. Some physical systems facilitate the computational use much better than others. That is why we build, program, and use them. In such cases we are the homunculus in the system interpreting the physics in both syntactical and semantic terms.
6. But the causal explanations we then give do not cite causal properties different from the physics of the implementation and the intentionality of the homunculus.
7. The standard, though tacit, way out of this is to commit the homunculus fallacy. The homunculus fallacy is endemic to computational models of cognition and cannot be removed by the standard recursive decomposition arguments. They are addressed to a different question.
8. We cannot avoid the foregoing results by supposing that the brain is doing "information processing". The brain, as far as its intrinsic operations are concerned, does no information processing. It is a specific biological organ and its specific neurobiological processes cause specific forms of intentionality. In the brain, intrinsically, there are neurobiological processes and sometimes they cause consciousness. But that is the end of the story.
--
best regards,
john [earthlink.net]
Re:Non-issue. (Score:4, Informative)
That depends on whether you count level 1 literacy (that's roughly equivalent to being able to recognize street signs) as being able to read.
Re:Non-issue. (Score:3, Interesting)
That depends on how you define "read". Maybe 97% or more of Americans can read at a basic level, but quite a few of them get lost if you start using words that are moderately unusual, words with more than about two syllables, or sentences with more than two clauses, or if you require a reading speed that approaches the speed at which people normally talk. I could easily believe 23% can't read in a natural and easy fashion or read more advanced stuff. I'd be guessing at the figure, but that sounds pretty close to me. It's worse in some areas than others, of course. Galion is probably about 20%. The inner cities tend to be worse.
Also, the percentage who can write coherently is way lower than the percentage who can read; I would hesitate to call anywhere near 97% of the population literate if the ability to construct a sentence and put it to paper is part of the expectation.
Of course, computers write even worse than they read. (If they're making it up as they go, that is. If they have prefab stuff they can do pretty well, but that's different.)
Re:Non-issue. (Score:3, Interesting)
DennyK
I failed! (Score:4, Interesting)
I did the gimpy [captcha.net] test.
Results: It switched pictures on me! Honest!!
Re:I failed! (Score:2, Funny)
Re:I failed! (Score:2, Insightful)
Re:I failed! (Score:2)
Good test (Score:2)
Better test (Score:2)
Difference = Taunting (Score:5, Funny)
Re:Difference = Taunting (Score:3, Funny)
Smug Mode
Re:Windows, anyone? (Score:4, Funny)
Abort, Retry, Fail? I know how to fix it, but do you, human?
Re:Windows, anyone? (Score:3, Funny)
Back during high school, I wrote dozens of .bat files called "what" or "how" or "go" and so forth, and I basically had them parse themselves so they could keep up a semi-decent conversation. Kind of like a shell-based Alice. (Well, if you knew what to say, since if you didn't put a recognized word first on the line, it would just say "Bad command or file name.")
My favorite was when I came back from a two-year stay in Brazil, and my friend and I were at the computer. We had both totally forgotten about those little batch files, and his playing with the computer went something like this:
C:\>dri
Bad command or file name
C:\>Huh?
Bad command or file name
C:\>What was that?
Bad command or file name
C:\>Could you repeat that please?
Bad command or file name
C:\>Thank you.
You're mighty welcome, sir!!
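For what it's worth, a minimal Python sketch of the same trick -- a keyword-driven responder in the spirit of those .bat files (the keywords and canned replies below are invented, not the originals):

# Minimal keyword-driven "conversation": if the first word on the line is
# recognized, print a canned reply; otherwise fall back to the classic error.
REPLIES = {                      # invented keywords and responses
    "what": "What was what? Be specific.",
    "how": "How should I know? I'm a batch file.",
    "thank": "You're mighty welcome, sir!!",
}

def respond(line):
    words = line.strip().split()
    first = words[0].lower().rstrip("?.!,") if words else ""
    return REPLIES.get(first, "Bad command or file name")

if __name__ == "__main__":
    while True:
        try:
            print(respond(input("C:\\>")))
        except EOFError:
            break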
In case it's slashdotted: (Score:4, Funny)
For full access to our site, please complete this simple registration form.
As a member, you'll enjoy:
In-depth coverage and analysis of news events from The New York Times FREE
Up-to-the-minute breaking news and developing stories FREE
Exclusive Web-only features, classifieds, tools, multimedia and much, much more FREE
Is this a joke? (Score:4, Informative)
Computers are not good at complex pattern recognition. Wow.
For the record, computers can recognize words like this, just not very easily. With a big enough dictionary and a lot of patience, you'd be surprised at what they can do. While still an undergrad I was able to write a rather simple program that would recognize images of the cardinal numerals, even if they were highly mangled, and I worked with a grad student on building something that could pick out certain features of a rotated image and, by comparing them with some sample features, rotate the image back correctly.
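Neither program is shown in the comment, but a naive version of both ideas might look like this sketch (images are assumed to be same-sized grayscale NumPy arrays, and SciPy is assumed to be available for the rotation):

# classify(): nearest-template matching of a (possibly mangled) digit image
# against one clean template per numeral -- smallest pixel-wise error wins.
# best_rotation(): brute-force the angle that best lines a rotated image up
# with a reference, echoing the "compare features, then rotate back" approach.
import numpy as np
from scipy.ndimage import rotate

def classify(image, templates):
    # templates: {digit: template_array}
    return min(templates, key=lambda d: np.sum((image - templates[d]) ** 2))

def best_rotation(image, reference, angles=range(0, 360, 5)):
    return min(angles, key=lambda a: np.sum(
        (rotate(image, a, reshape=False) - reference) ** 2))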
Re:Is this a joke? (Score:2)
The semicoherent and coherent articles that talk about the capabilities of algorithms rather than the machines they run on are all in the research journals.
Re:Is this a joke? (Score:2)
Neither are you or anyone else.
Re:Is this a joke? (Score:5, Interesting)
The counter-argument is that formal systems (such as modern computers) have logical limitations that are not evident in human cognition. Therefore, machines must either make the same leap in complexity such that their actual thought processes can no longer be mapped directly to the underlying formal system, or else remain forever inferior to natural intelligences.
It's also interesting to wonder whether Nietzsche knew about (or even could have known about) the discovery that nothing is deterministic at the subatomic level. Would he have persisted in his belief that intelligence was deterministic, or would he have theorized that it was probabilistic?
Re:Is this a joke? (Score:2)
Human intelligence (Score:5, Interesting)
We are never going to have a machine that is truly "human". Let me explain.
That doesn't mean we won't have intelligent machines that can do just about anything intellectually that a human can do. A human being is more than just a smart computer. Our behavior is governed not only by the higher logic of our brain, but also by millions of years of bizarre -- often obsolete -- instincts. If you yanked a brain out of a body and hooked it to a computer, it would no longer be truly human because of the lack of hormonal responses that come from every part of the body.
It's simply going to be too hard/impractical and, frankly, useless to make an intelligent machine that mimics every hormonal reaction and instinctual mechanism.
We will have intelligent machines, but we will never have human machines.
And why bother (Score:4, Insightful)
It's the differences between computers and humans that make computers so damn useful. Tell a human to add up a list of 200 numbers and he'll likely take a long time and get the wrong answer, because humans suck at repetitive, boring tasks beyond the limit of their attention spans.
Re:And why bother (Score:3, Funny)
Re:Human intelligence (Score:5, Funny)
RTFA... that applies to moderators too.
Re:Human intelligence (Score:5, Interesting)
No computer will have hormones, or millions of years of evolution, or bad hair days, or dendrites, or lots of things we have. But that's all beneath the surface, as it were. Turing's point is that whatever intelligence is beneath the surface, ultimately all we see are the phenomena of intelligence, its outward manifestations. If I decide whether or not you are an intelligent human (as opposed to a computer or a coffee table or a CD playing your voice), I don't see the gears turning inside your head, or really care if you've got actual gears or not. I just interact with you, and get an impression.
The idea here is that to pass Turing's test, you create a machine with the outward appearance of all of those things, by abstracting the phenomena from the underlying causes.
What your argument gets closer to is a slightly different point. Why would we want to create a computer that is indistinguishable from people? People make mistakes in their addition. People lie. People get depression and schizophrenia. People can be bastards. People don't want you to turn them off, and will fight like hell to stop you from doing it. If we really create an accurate simulation of human intelligence, one that acts like a person with neurons and hormones and everything else, you get all this baggage with it.
I'd really like intelligent agents to search the web for me, to remind me about things I didn't tell them to remind me about, whatever. But I don't see the practical need to create a Turing-testable machine, unless it is really an interim step by the AI gurus to get to the programs I want. Now, there may be a theoretical need, a human drive to create Turing's definition of AI because the gauntlet has been thrown down, but that's a different animal, ironically enough.
Re:Human intelligence (Score:3, Interesting)
Re:Human intelligence (Score:4, Interesting)
Even if it turned out that we were able to produce what we'd now count as a "human machine," I think that we would then deny that it was human. That is, I suspect that it's a conceptual claim that there will never be any such thing as a human machine.
No matter how human or intelligent a machine is, it'll never count as human (or even fully possessed of human intelligence, whatever that is), since the bar will be raised. (Consider that at one point, people thought the hallmark of being human was being rational and that the characteristic activity of rational beings was doing math...)
When we've got a machine that passes all of the existing tests, someone'll ask "but why doesn't it cry during 'Sleepless in Seattle'?" or "why doesn't it hate Jar Jar?" or "does it get easily embarrassed?"
Sort of (Score:3, Interesting)
Saying "foo cannot be done" frequently results in someone being utterly wrong. Just a few decades ago, the idea of atomic power would have been laughable -- the ability to wipe an *entire city* away? How about having a person walk around on the moon? Unthinkable.
So, at the moment it seems to be an insurmountably difficult problem. But, a few years ago, the same thing would have been said about problems that we're now starting to think of as doable via quantum computers -- the face of computer science literally changed.
Re:Human intelligence (Score:2)
Funny. Similar things were said about creating GUIs.
-Bill
Re:Human intelligence (Score:3, Insightful)
Please elaborate on what you mean by instinct. How does this differ from any other algorithm? Certainly it was created by evolutionary processes, but we can also conceive of an algorithm that is itself compartmentalized and acted upon by a Genetic Algorithm, thus simulating evolution. We may not expect the resulting algorithm to be very useful due to the complexity/nuances of selection, yet it should certainly do something.
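A toy sketch of that "algorithm acted upon by a Genetic Algorithm" idea, with the candidate algorithms reduced to plain parameter vectors and a stand-in fitness function (a real evolved program would need a richer representation):

# Toy genetic algorithm: keep the fitter half of the population, mutate it,
# repeat. The fitness function here is an invented placeholder objective.
import random

def fitness(params):
    # Stand-in objective; replace with whatever measures the evolved behaviour.
    return -sum((p - 0.5) ** 2 for p in params)

def evolve(pop_size=50, genes=8, generations=100, mut_rate=0.1):
    pop = [[random.random() for _ in range(genes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]                        # selection
        children = [[g + random.gauss(0, 0.1) if random.random() < mut_rate else g
                     for g in parent]                           # mutation
                    for parent in survivors]
        pop = survivors + children
    return max(pop, key=fitness)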
If you yanked a brain out of a body and hooked it to a computer, it would no longer be truly human because of the lack of hormonal responses that come from every part of the body.
A couple of points:
1) What is human? You have not defined what it is to be human; therefore, it becomes impossible to say unequivocally what it is NOT to be human.
2) Hormonal responses can be looked at in a couple of ways: a) such responses are, in fact, simply another stimulus, and we would expect any intelligent machine to react differently under a different set of stimuli; b) the endocrine system is also part of the machine that is "human intelligence", and by removing a part of the machine we, in effect, cripple it.
As a final point, we are not interested in human machines per se, simply in machines that are human-like: primarily intelligent in a manner that lets us communicate with them and share a semblance of understanding.
I feel a disturbance in the force... (Score:2, Funny)
I'd like to see AI figure THAT one out! I call it Automatic Slashdot Slowdown Effect Detection, or ASSED for short.
Coolest Job Title: C.A.O. (Score:5, Funny)
Chief Algorithms Officer!!! I don't know about the rest of you nerds, but I'd sell my last Keuffel & Esser [att.com] to get a crack at a job like that.
Even better title (Score:3, Informative)
Visit the homepage of the and scroll down or search for the entry for Eric Jacobsen. Proof that not everybody at Intel is a soulless corporate whore.
The New Turing Test? (Score:5, Funny)
Here's a test (Score:4, Funny)
First Man [falls over]: "AAAAAHH!"
Me: "Human."
[Kicks second man in balls]
Second Man [falls over]: "Gffffff-!"
Me: "Human."
[Kicks third man in balls]
Third Man [falls over]: "..."
Me: "He's the robot! Get 'im!!!"
Couldn't a computer do the name, address parts (Score:2)
Computers are good for repetitive tasks, middle school kids are easily bribed.
Accessibility issues? (Score:3, Insightful)
Or, alternatively, are they perhaps working on, say, an audio version? I wonder how that would work.
Re:Accessibility issues? (Score:4, Interesting)
I suppose it could generate a spoken list of words in a sound file that is linked to from the image. The alt tag could then read "Please click to listen to a series of words. Enter the words to signify you are a human, not a registration bot."
Braille terminals (Score:3, Interesting)
I suppose it could generate a spoken list of words in a sound file that is linked to from the image.
The CAPTCHA web site has such a test, but of the sites that use image-based bot tests, only PayPal offers an audio alternative.
Another problem is that sites often present the tests in proprietary formats with expensive implementation royalties, such as .gif and .mp3.
But even providing both the image in a free image format (.png) and the audio in a free audio format (.ogg) won't help blind users behind a Braille terminal without a speaker, such as blind-deaf users.
Re:Braille terminals (Score:2)
Re:Braille terminals (Score:2, Insightful)
I guess blind-deaf users need day-to-day-help anyway
So what about Braille terminal users who aren't also deaf? Should Section 508 compliance (required for USA government web sites) allow a web site to require all blind users to have sound cards?
Re:Accessibility issues? (Score:3, Informative)
Sounds can be thought of as a sound version of Gimpy. The program picks a word or a sequence of numbers at random, renders the word or the numbers into a sound clip and distorts the clip. It then presents the distorted sound clip to its user and asks the user to type in the contents of the sound clip.
This would probably be similar to the visual techniques, most likely employing some audio filters so it's hard for a computer to decipher (our ears are pretty good at separating noise from actual voices/useful sounds, so it shouldn't be a problem for us).
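A minimal sketch of the distortion step (rendering the word to speech in the first place is left to whatever TTS is available, so that part is assumed; the clip is assumed to be 16-bit PCM WAV):

# Add noise and crude time-warping to a spoken-word clip before serving it.
import random
import wave
import numpy as np

def distort(in_path, out_path, noise_level=0.15):
    with wave.open(in_path, "rb") as w:
        params = w.getparams()
        samples = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)
    signal = samples.astype(np.float32)
    noisy = signal + noise_level * np.max(np.abs(signal)) * np.random.randn(len(signal))
    keep = np.ones(len(noisy), dtype=bool)
    for _ in range(5):                       # drop a few short segments to vary timing
        start = random.randrange(0, len(noisy) - 400)
        keep[start:start + 400] = False
    out = np.clip(noisy[keep], -32768, 32767).astype(np.int16)
    with wave.open(out_path, "wb") as w:
        w.setparams(params)                  # header frame count is patched on close
        w.writeframes(out.tobytes())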
Philosophy 101 (Score:4, Informative)
Re:Philosophy 101 (Score:3, Insightful)
Too lazy. Find the links yourself.
A good step towards AI... (Score:2)
It says I'm not human (Score:4, Funny)
You entered: noses
Possible responses: nose
Result: FAIL.
Wohoo! I'm a robot! This test proves it! Vegas here I come!
Why does this test make me feel like I just had a run-in with John Ashcroft?
Re:It says I'm not human (Score:3, Funny)
You entered: televisions
Possible responses: television tv
Result: FAIL.
So next time...
You entered: bike
Possible responses: bicycle bicycles
Result: FAIL.
And again...
You entered: toothbrushes
Possible responses: toothbrush
Result: FAIL.
AAAAAAAAAARGH!!! I hate stupid word guessing programs that don't consistently account for common abbreviations and plurals!
Re:It says I'm not human (Score:3, Insightful)
You entered: toothbrushes
Possible responses: toothbrush
Result: FAIL.
AAAAAAAAAARGH!!! I hate stupid word guessing programs that don't consistently account for common abbreviations and plurals!
Ahh, delightful irony. That would be the point, then, wouldn't it?
In other words, you have to be smarter than the tools you use, so it's pretty stupid to put a computer that is *not* intelligent in charge of deciding the intelligence of others.
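For what it's worth, a more forgiving scorer isn't hard to sketch. The synonym table and plural handling below are invented, not what the actual test uses:

# Accept an answer if any normalised form of it overlaps any normalised form
# of an expected word (lowercasing, naive plural handling, a small synonym map).
SYNONYMS = {"tv": "television", "bike": "bicycle"}    # illustrative only

def forms(word):
    w = SYNONYMS.get(word.lower().strip(), word.lower().strip())
    out = {w, w + "s", w + "es"}
    if w.endswith("es"):
        out.add(w[:-2])
    if w.endswith("s"):
        out.add(w[:-1])
    return out

def accept(answer, expected):
    return any(forms(answer) & forms(e) for e in expected)

# accept("toothbrushes", ["toothbrush"])        -> True
# accept("bike", ["bicycle", "bicycles"])       -> True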
Another Area Not Talked About Much - Vicarious Exp (Score:4, Interesting)
For example, the computer's tactile interface has to touch the oven and say 110 deg C, as opposed to taking as fact "I heard a human mention that Unit 5 already did that and it was 110 deg C, so I accept it as fact that it is 110 deg C".
I know I'll get modded down for this, but I wonder what the limits of questioning the computer/human participants were? (The article said they quizzed participants to see if they could tell who was human and who was a machine.) Like, could they ask "What number am I thinking of?" The machine would blank out and the human would stupidly blurt out "69 dude!"
Wanna bet? (Score:5, Informative)
Think Cash (Score:2)
While I mention some ways to achieve this, I thought more about the problem and the qualities a solution would need than about the solution itself.
If interested, more can be found here [half-empty.org].
they found me out (Score:3, Funny)
acid/head
acid/head
acid/great
acid/angry
b
In the year 2000... (Score:2)
African or European? (Score:3, Informative)
Really, comparing human intelligence to computer intelligence doesn't seem like a good idea unless we're going to define what kind of computer intelligence it is.
Neural computing really screws the comparison up - the kinds of computing that normal computers are good for are quite different from the kinds of computing that neural nets are well suited to. Furthermore, different neural net architectures make for different capabilities: the tasks a feedforward network is best suited to are very different from the tasks a Bayesian network is best suited to.
Take a look at this page [aist.go.jp] for a good run-through of the different kinds of nets.
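To make the feedforward half of that comparison concrete, here's a bare two-layer forward pass. The weights are random and untrained, so it computes nothing useful; the point is only the shape of the computation, which a Bayesian network would organise completely differently:

# Bare feedforward network: 4 inputs -> 16 hidden units -> 3 outputs,
# sigmoid activations, random untrained weights.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 4)), np.zeros(16)
W2, b2 = rng.normal(size=(3, 16)), np.zeros(3)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x):
    hidden = sigmoid(W1 @ x + b1)
    return sigmoid(W2 @ hidden + b2)

print(forward(np.array([0.2, 0.5, 0.1, 0.9])))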
Re:African or European? (Score:3, Funny)
Which, by the way, gives me a great idea. I'm going to adapt that annoying psychoanalyst algorithm to create Slashdot accounts and randomly respond to posts in high volume. Not only will it be fun for all ages, but it will actually increase the infamous Signal to Noise ratio for Slashdot!
Re:Expensive? Experimental? (Score:2)
A much more accurate test... (Score:5, Funny)
Re:A much more accurate test... (Score:2)
Can't really mimic human intelligence (Score:3, Insightful)
The major roadblock is that a computer can only respond in ways that it has been programmed to. While you can code incredibly complex AI algorithms and simulate an incredibly complex level of intelligence, the fact remains that a computer invariably operates along rigid pathways.
It can be argued that human thought is nothing more than a complex series of chemical reactions, but there is far less rigid logic involved in human thinking. Indeed, we're still not entirely sure just HOW we think.
Never say never, but I don't think we'll be seeing a truly human AI within our lifetimes.
The /. AI test (Score:3, Funny)
1. Make a "first post" posting 15 minutes after the article goes up.
2. Be the fourth person to enter a "In Soviet Russia
3. Be labeled a karma whore.
4. Whine about the masiv tipe ohs in artaculs.
5. Hate M$, Sony, MPAA because thats one of the three laws right?
One More Cool Item... (Score:4, Interesting)
CAPTCHAs have several applications for practical security, including (but not limited to): Cool, eh?
Re:One More Cool Item... (Score:5, Funny)
A related story: I once saw on Boston.com that one of their editors was getting a haircut and they had posted an online poll for users to choose a style. Remembering CMU's adventures in Slashdot polling, I posted to that same message board a plea for students to help the poor guy out.
4000 robo-votes later, he had a mohawk. Then they showed pictures of him going home for Mother's Day, and his dad's embarrassed look. The best part was the quote from the editor at the end of the story -- "I had fun with this and I hope all those hackers out there did too."
So, see, geeks? You too can make a difference.
Maybe.. (Score:4, Interesting)
What I mean is, I don't think an intelligent being would be capable of creating something that is more intelligent than himself.
The machines need to be programmed by humans, who are limited by their own intelligence.
Can God make a rock so big that he can't carry it himself?
Re:Maybe.. (Score:5, Funny)
What I mean is, I don't think an intelligent being would be capable of creating something that is more intelligent than himself.
My dad was :).
Re:Maybe.. (Score:4, Insightful)
Or how about the example of the AI chess players, who can play vastly better than the people who programmed them?
Re:Maybe.. (Score:3, Interesting)
The first of these is the nature of hardware. Obviously, electronic hardware is much different from human hardware. Human hardware has a tendency to gradually improve between the ages of 0 and, say, 30 years, or, if you subscribe to a different theory of learning, from 0 to, on average, 80 years (i.e., death). Factoring in evolution, there's some further gradual enhancement over the course of a million years. Computer hardware, on the other hand, has a tendency to improve at an impressive rate that depends on how much effort humans put into it. The end result is that for certain tasks, computers can vastly exceed humans. The reverse, that humans can vastly exceed computers, is also true, but as time goes on, this will probably be less and less the case. And, as anyone who's worked in teams on a technical project knows, it's difficult to make cumulative human effort scale upwards. This can also be difficult with computers, but generally less so. The point of all this is that there are simply fantastic computational levels that computers as a whole are able to achieve, to be applied to tasks of "intelligence", for better or worse, in a way that humans can't compete with.
The second point has already been touched upon: humans die, computers don't. You can make the claim that computer parts fail, but the fact remains that data and algorithms are passed from one generation to the next (hopefully) unchanged. The base of innovation built for computers really just expands. Humans, on the other hand, build their own innovation, but must then spend time teaching successive generations how to do things, and for exceptionally bright individuals, the successors may not even reach their amassed abilities. No need to launch into arguments like "but software needs to be recompiled for different platforms!"; that kind of talk is counterproductive.
I suspect that anyone familiar with Linux has a certain appreciation for having complete control over what's on their system, but the fact is that increasing complexity will result in more and more layers of abstraction, to the point where everything is built upon layers that are further built upon layers. The advantages (and problems) associated with this are (painfully?) evident now, and computers are still relatively new; imagine things 50 years down the road! Once methods of software engineering are designed that lower the occurrence of bugs and make things more fault-tolerant, it's just going to be commonplace, if it isn't already.
So what I'm saying is, it's an interesting academic question, but in a lot of ways the potential clearly exists for computers to outpace anything that humans can do. Not unlike how a teacher can instruct a brilliant and eager student to the point where the teacher actually becomes the student.
Taco Test (Score:5, Funny)
I ask the "suspected bot" if they like tacos. If they give me an intelligent answer, they are not a bot. If they give me an answer like "Wanna see my hot pics go to http://192.168.1.112/hotbabezzzz.pl?2345" Then they are a bot.
This test also works on telemarketers in a slightly different fashion. I tell them to "STOP... I'll only buy your product if you send me a taco with it. If not, no deal." Since there are big logistical problems with sending me a taco, they are thwarted every time. I'm sure this test would work equally well with any obscure food item.
Article -1 redundant (Score:2, Interesting)
It won't work... (Score:5, Insightful)
That's what Garry Kasparov was complaining about when he played against Deep Blue the first time... there was a whole team of IBM programmers modifying the code during the game to specifically counter Kasparov's playing style. It wasn't a reflection of machine intelligence; it was an example of human adaptation imposed upon Deep Blue.
Re:It won't work... (Score:4, Funny)
The computer is not at all limited. Any physical process can be computed by a Turing machine, which means by extension that any modern PC can compute anything. It is simply a question of time required to compute it. The brain is a physical system, and is thus Turing computable.
If there exists more to humanity than the physical, then computational theory does not claim that Turing machines can compute it. But the brain at least, and all of its adaptability to new situations and new problems, are computable.
For more information, look into programming "neural networks" and "genetic algorithms".
tech econ boost? (Score:2, Insightful)
The first true AI machine might be a spam catcher. Spamminator 2000!
Test is of no real use (Score:5, Interesting)
Once you devise a test system, someone can write non-AI software that can fake it and pretend to be human by knowing what it needs to for the test. Only a real human can tell human and machine intelligence apart, not a systematic test. That's why Bladerunners had to manually test the androids, instead of just letting a machine do it. Real-time human insight is key to testing machine intelligence.
Re:Test is of no real use (Score:3, Insightful)
Poll Stuffing on Slashdot (Score:5, Insightful)
This ain't intelligence (Score:5, Insightful)
In other words, can the computer detect the information in the same form that the human can? Can a human read a grocery store bar-code as easily as a computer? No. Can a human read one of those bit-boxes on the FedEx shipping label as easily as a computer? No. Can a human read the Tivo-data sent on the Discovery channel as easily as the computer? No. But none of those failures means the computer is more intelligent, just more capable of recognizing the information that is there.
Both the computer and the human can recognize "moon/parma", but intelligence comes into play when the human starts thinking of Drew Carey and humming the theme music. Intelligence is not just collecting information, it is doing something useful with that information.
Training a computer to fool Stumpy (Score:5, Insightful)
Looks like their system is hosed right now because it showed me 4 pictures of horses, 1 of a cowboy, and one of a turtle.
When it asked:
What are these pictures of?
I answered "things"
Apparently it didn't like my answer.
Funny thing though: the images are being pulled by image number from the Getty Images database. You could write a piece of software to look up the images at Getty, pull the keyword list (that Getty assigns to all photos), and cross-reference the lists to get the answer.
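The cross-referencing step is trivial once you have the keyword lists; how you'd actually fetch them from Getty is left out here (no particular API is assumed). A sketch:

# Given the keyword set for each displayed image, guess the answer as the
# keyword shared by the most images.
from collections import Counter

def guess_answer(keywords_by_image):
    counts = Counter()
    for kws in keywords_by_image.values():
        counts.update(k.lower() for k in kws)
    return counts.most_common(1)[0][0]

# guess_answer({"img1": {"horse", "field"}, "img2": {"horse", "rider"},
#               "img3": {"horse", "race"}})  ->  "horse"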
--
Then this got me thinking about the whole thing in general. My answer WAS correct. Reminds me of the Cheers episode where Cliff is on Jeopardy and answers the final Jeopardy question:
"Who are three people who have never been in my kitchen."
Not the answer they were looking for, but is it wrong?
I was being a smartass the other day while watching Sesame Street with my daughter. They had pictures of 4 animals and asked which one didn't belong.
kangaroo
rabbit
grasshopper
fish
they, of course, were looking for 'fish' - because the other three live on land or travel by hopping.
I popped up that the answer could be the kangaroo - because the other three are native to North America. Or it could be the grasshopper, as the only one with an exoskeleton.
My wife reminded me that it was a kid's show.
It comes down to the fact that if a strict mechanism is used to judge the answers (like a computer), it may not be able to handle legitimate answers from humans.
--
Seems both the questioner and questionee need to be intelligent to participate.
Re:Training a computer to fool Stumpy (Score:2)
Re:Training a computer to fool Stumpy (Score:2)
Edsger Dijkstra said it best... (Score:5, Insightful)
What about the impaired? (Score:5, Interesting)
This knocks out computers, which lack the intelligence/programming (so far) to differentiate between conflicting objects to make out a letter/numbers.
It also may knock out humans with vision problems, though, especially those with colour-vision issues. For those with hearing problems, the sound test isn't good either.
It seems that right now, computers trying to translate these puzzles probably perform about on par with old folks. This also might mean that quite a few seniors may have issues getting a Yahoo account.
pix (Score:2)
I FAILED, so i am a computer ..?? (Score:2, Interesting)
http://www.captcha.net/cgi-bin/pix
I saw turtles. Turtles, some of which were swimming. So I typed "turtles".
And I FAILED.
"Result of the Test: FAIL
You entered the following word:
turtles
The possible words were:
seashell shell shells seashells"
So, I notice this test does not take into consideration the limits of a second (or, generally, non-native) language. English is not my first language, and I had seen nowhere that turtles and shells were supposed to be different. I saw turtles, and some turtles that were in the sea. Turtles.
Uh, yeah. I proudly accept failing this computer-or-human test!!! Woohoo!!
an odd tangent... (Score:3, Interesting)
Recently, I've been working with developmentally disabled people as a job coach -- making sure they're able to do the job they're supposed to do, and helping them understand anything that needs to happen.
Part of this is working at a local fast food restaurant. The girl I'm working with can do math fairly well, but she has problems with logic and pattern matching.
And a few times I started thinking about her as a computer -- she can do math fine, and if I specifically tell her how to match a pattern, she can do it for a short time, but she can't do it in situations like when people order a combo. Say they order a #1 with onion rings and a small drink, a #6 large combo, and a kids' meal: she won't be able to recognize them as "combos" (she'll read the whole thing back to them item by item). This brings me to a whole other tangent about user interface design (why the normal methods suck, mostly), but that'll be saved for a proper time.
This has been a difficulty with her position as a cashier, but I find it interesting that I'm more or less programming her as I talk to her and reaffirm the patterns she needs to match.
I wonder if certain disabled humans would fail any "turing" test that were given to them, because they don't have normal pattern matching ability. Furthermore, isn't it possible that instead of trying for fully developed Artificial Intelligence, we should look at perhaps emulating those with disabilities? After all, wouldn't this creation process be easier than a "fully aware", fully pattern realizing person?
AI has always interested me, but I don't know nearly enough about it. The thing that made me notice this is I keep talking to her like I would program a computer ("If this, then that, otherwise this other thing" and "While there is someone in line, take their order").
Maybe I'm off base, or this is already an accepted practice. Can anyone correct me?
Screw-ups are good (Score:4, Interesting)
For example, a common human mistake is to send friends 20 meg bitmap photos through email. There is intelligence displayed in this act, in that they actively choose to digitize photographs and send them to someone without express request from the friend. They later learn that the size and type of file makes a difference, and future pictures are sent as jpegs. This mistake is made because the person does not understand how these files are represented in a computer, they only see the picture on the monitor and are satisfied that all is well.
I would expect an intelligent system to make loads of mistakes like this, simply because it is not familiar with how things are handled with respect to humans. I would expect a computer that has intelligence to recognize that in most cases a jpeg file and a bmp file are interchangeable as far as humans are concerned. Based on this, I would expect it to prefer jpegs because of storage issues. I would expect it to infer that perhaps other datatypes can be compressed in this way. I would expect it to make the mistake that it is ok to compress a precise data file using some kind of lossful compression.
This would show intelligence, because it was drawing conclusions from what it already knew. Since computers do not get meaning from the contents of, say, PDF files, they might be likely to screw them up using, say, JPEG compression, because there are no apparent consequences for reducing the file size.
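A quick illustration of the size gap behind that 20-meg-bitmap mistake (Pillow assumed; a synthetic image stands in for the photo, so the exact numbers will differ):

# Save the same image as BMP, JPEG, and PNG and compare on-disk sizes.
import os
from PIL import Image

img = Image.new("RGB", (2400, 1600), color=(120, 160, 200))  # stand-in "photo"
img.save("photo.bmp")                        # uncompressed: roughly 11 MB
img.save("photo.jpg", quality=85)            # lossy, fine for photos
img.save("photo.png")                        # lossless, fine for precise data

for path in ("photo.bmp", "photo.jpg", "photo.png"):
    print(path, os.path.getsize(path) // 1024, "KB")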
Instead of whining... (Score:4, Informative)
Yes, it's a nytimes.com link, but it's without the registration.
Re:much like with RadioShack (Score:3, Funny)
Gee, I thought you were going to say something like, "Much like with RadioShack, where you have to perform a test to see if you're talking with an intelligent being."
Re:in the year 2000 i predict! (Score:3, Interesting)
Pretty much any prediction that Turing could make about computers nearly 50 years after his death - and before the advent of transistors - would be pure speculation. The fact that Turing's prediction that AIs would be indistinguishable from people in the "Turing test" was wrong, and that other projects based on sheer informational density (such as CYC) have been dismal failures, indicates that it is the purely scripted/explicit-logical-constraint strategy of solving this problem that is faulty. Unfortunately, the 30 years after that prediction focused pretty much entirely on scripting and logical constraints, and other methods of artificial/computational intelligence didn't see the light of day until the 80s and 90s.
Be sure to watch further developments in modeling of neurological processes, as there is still hope along this avenue of research
Modelling != understanding. (Score:3, Interesting)
At least when we are simulating/modelling weather we can start with base points and do comparisons.
Whereas with stuff like consciousness, I suspect even if the model is broken you might not be able to tell till much later.
If we really wanted intelligent entities which we didn't understand (how they work), there are always humans and other creatures.
The GM bunch may even concoct a few more.
Re:weird.. (Score:2)
Of course, as with other types of computer intelligence, once it becomes commonplace, AI is redefined to include everything but that.
Re:weird.. (Score:3, Funny)
Google Link (Score:2)
Turing test (Score:5, Interesting)
I don't believe that Turing proposed the Turing Test as the test to use, but rather as a "mathematical proof" that you could construct such a test.
Basically, he said that if you could not tell the difference between a computer and a person, then you would have to say the computer was intelligent; i.e., this is a way of establishing an upper-bound test, not necessarily the best test.
Unfortunately, IMHO, the AI community and others latched onto this test and put effort into fulfilling the Turing Test rather than into more practical and useful goals.
If you asked "Did you sleep well last night?" (or some question about another biological function) and the computer said "Me not sleep, me computer.", then you could probably tell the difference between a human and a computer. This need not, however, preclude machine intelligence.
Re:NYTimes (Score:3, Funny)
They don't even work.. :-P (Score:2)
404 File Not Found
The requested URL (search?hl=en&lr=&ie=UTF-8&oe=UTF-8&q=related:www
If you feel like it, mail the url, and where ya came from to pater@slashdot.org.
I thought that URL looked funny..
Re:I Found A Great Deal of Resources on AI (Score:4, Funny)
If that's the case, (Score:2)
Re:Captcha's is a word? (Score:2)
(and I do have mod points today, I just feel better actually ranting about this one)