Everything you Want to Know About the Turing Test

An anonymous reader writes "Everything you want to know about the Turing test, provided by the Stanford Encyclopedia of Philosophy. It is their latest entry."
  • Anti-Turing (Score:5, Interesting)

    by AbdullahHaydar ( 147260 ) on Thursday April 10, 2003 @01:52PM (#5704080) Homepage
    The Anti-Turing Test [techdirt.com]
    • http://www.ibiblio.org/Dave/Dr-Fun/df200304/df20030410.jpg
    • Re:Anti-Turing (Score:2, Insightful)

      by Anonymous Coward
      From the link:
      In other words, it's a sort of anti-Turing test. I would think that a system using plenty of misspelled words like the above paragraph could easily fool a computer, but is understandable by humans, and could make a good captcha.

      <sarcasm>Oh great. That's all we need - more spelling mistakes online. Children today are going to grow up not knowing how to spell!</sarcasm>

      From a comment below it:
      If you were not a fluent english speaker then you would have a great deal of difficulty
    • Wow, so the anti-Turing test is basically l33t.

      "3y3 R h4XX0rZ U HAHAHAHAAHA LOL!"

      All you would need to run the test would be a 12 year old who just drank 5 bottles of Mountain Dew.
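
      A rough sketch of the scrambling trick being described, in Python (the first-and-last-letter rule and the function names are my own illustration of the idea, not anything from the linked strip):

          import random

          def scramble_word(word, rng):
              # Shuffle the interior letters, keeping the first and last fixed:
              # still readable by most fluent speakers, but no longer a
              # dictionary match for a naive program.
              if len(word) <= 3:
                  return word
              middle = list(word[1:-1])
              rng.shuffle(middle)
              return word[0] + "".join(middle) + word[-1]

          def scramble_sentence(sentence, seed=42):
              rng = random.Random(seed)
              return " ".join(scramble_word(w, rng) for w in sentence.split())

          print(scramble_sentence("plenty of misspelled words could still fool a computer"))

      As the reply below points out, though, the output is only easy reading for fluent speakers.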

    • Re:Anti-Turing (Score:2, Insightful)

      by XMode ( 252740 )
      The problem with this type of test is (as stated in one of the comments below) that you can only actually pass it if you are a native speaker. This becomes very obvious if you have ever tried tutoring non-English speakers in English. A Japanese student of mine, whose spelling was amazingly good, much better than mine, was completely unable to read a sentence if even a single word in that sentence was misspelt. It also worked the other way around. I have a (very) basic grasp of Japanese (thanks to said student) b
  • uuuuuh (Score:4, Funny)

    by Photon01 ( 662761 ) on Thursday April 10, 2003 @01:52PM (#5704085)
    But it is not conceivable that such a machine should produce different arrangements of words so as to give an appropriately meaningful answer to whatever is said in its presence, as the dullest of men can do
    Whaaaa?
    • Seriously though... (Score:2, Interesting)

      by HanClinto ( 621615 )
      About this point (which, in case you were wondering, basically says that you shouldn't expect even the best of machines to be able to make a decent response to anything said to it, though this is something that "even the dullest of men can do"): do the "dullest of men" really do this? I find that one of the best things about being human is that we can ask for more information. I don't think that "dull men" can intelligently respond to a discussion about astrophysics, just as I don't think a technogeek like myself ca
    • Here's my problem with the parent being modded up as funny. The guy can't understand Descartes, and so he makes some stupid, flippant comment.

      Was the moderator saying that the stupid post was funny, or that it was funny that the poster was so stupid as to be unable to read Descartes?

      • The guy can't understand Descartes, and so he makes some stupid, flippant comment.


        If you assume that they did understand, then the post is funny.

        Perhaps the problem is not in their ability to understand.

        -- this is not a .sig
    • Re:uuuuuh (Score:5, Insightful)

      by ctr2sprt ( 574731 ) on Thursday April 10, 2003 @02:42PM (#5704565)
      Remember back in school when you were asked to define something "in your own words"? The goal was to prevent you from just parroting the definition you got from the book. But most students eventually learn they can change the word order and substitute a few synonyms and still get away with it. The statement you quoted means that doing that doesn't count, since "the dullest of men can do" it: it requires only a basic knowledge of grammar.
  • by ciroknight ( 601098 ) on Thursday April 10, 2003 @01:52PM (#5704086)
    more information on how to build an automated computer... Hopefully Microsoft will steer clear of this; a bugged-out, Windows CE powered android is not quite my idea of a friendly robot...

    Brings new meaning to "Blue Screen of Death"
    • At least we know MS Killbot wouldn't be a threat.
    • by RevAaron ( 125240 )
      I can see it now- while your droid is cooking dinner, DroidSoftCE crashes, and you see the Blue Eyes of Death- flashing, blinking solid bright, electric, full navy blue as he slowly approaches you.... SCARY.

      Seriously though: Does WinCE have a BSOD? I've run WinCE quite a bit in the last few years, both as a PDA platform, but more so as a general OS for doing my everyday computing. (Web browsing, programming [on WinCE, not just for it], SSHing, email, IRC, LaTeX) I have had it crash some, but it's actual
  • by dtolton ( 162216 ) on Thursday April 10, 2003 @01:53PM (#5704096) Homepage
    The article itself gives pretty good coverage of Turing's point of view. It gives better coverage of the Turing test than I've read in many AI books.

    I tend to agree more with Searle, though, whom he cites at the end of the article: "John Searle argues against the claim that appropriately programmed computers literally have cognitive states". Being a programmer myself, I don't feel that programming something so that it can perform extremely well in a specific test is necessarily indicative of Artificial Intelligence or Intelligence in general. I agree with Turing that the question of "do computers think" is vague enough to be almost meaningless in a precise sense, but I think we understand the statement taken as a whole.

    I don't particularly agree with this statement in response to the consciousness argument: "Turing makes the effective reply that he would be satisfied if he could secure agreement on the claim that we might each have just as much reason to suppose that machines think as we have reason to suppose that other people think". The question isn't whether or not other people think; people thinking is an axiomatic assumption when investigating Intelligence, unless you are investigating existence from a philosophical point of view as Descartes did. I guess I view AI from a more practical point of view. I am by no means an expert in AI, but I tend to think the goal of AI research is to produce systems that can learn and react appropriately in situations that they were never programmed to handle or necessarily anticipate. If that isn't the goal of AI research, what separates it from writing programs on a large scale?

    As a whole I found the article to be a good presentation of Turing's position, although I have a few philosophical differences with that position.
    • a few comments (Score:5, Interesting)

      by Trepidity ( 597 ) <delirium-slashdotNO@SPAMhackish.org> on Thursday April 10, 2003 @02:13PM (#5704299)
      I think the axiomatic assumption that people think is part of the problem. If we cannot say why we claim that people think, it's easy to just debunk any AI claims by outright statement: "People think, while computers are just machines." You can't really make any progress in the face of that.

      That's part of my problem with Searle's Chinese Room thought experiment. He's saying that an automaton responding to Chinese following rules would not "understand" Chinese in the way a human who speaks the language would. But this is presupposing that the way a human who "understands" Chinese does so is not through just a very long list of rules coded in neurons, which I consider to be a rather controversial assumption.

      In short, a lot of anti-AI arguments seem to start from the premise that humans are not essentially biological computers; with that premise, of course you can debunk AI. A lot of AI researchers have grown tired of the argument entirely, and instead of responding to the arguments, have just resorted to saying "ok fine, you're right, we can't make 'really' intelligent computers, but what we can do is make computers that do the same thing an intelligent person would do, which is good enough for us." The idea here being that if a computer can eventually diagnose diseases better than a doctor, pilot a plane better than a pilot, translate Russian better than a bilingual speaker, and so on, it doesn't really matter if you think it's "really" intelligent or not, because it's doing all the things an intelligent thing would do.

      As a final comment, I'd agree that AI is not fundamentally different from large software systems. The difference is basically one of focus -- AI has been focusing on what it means to "act intelligently" for decades, whereas much CS and software engineering was focused on more low-level details (like how memory or register allocation works). At one point, the division was clearer -- AI people did stuff like write checkers programs that learned from their mistakes, which was not something any CS person not in AI would do. The fields are increasingly blending, and a lot of stuff from engineering disciplines like control logic (how to "intelligently" control chemical plants, for example) is overlapping with AI research. Part of this is because a lot of AI ideas have actually matured enough to become usable in practice.
      • Re:a few comments (Score:5, Interesting)

        by dtolton ( 162216 ) on Thursday April 10, 2003 @02:26PM (#5704403) Homepage
        You make some good points. Here are the problems I have with them though:

        I think the axiomatic assumption that people think is part of the problem. If we cannot say why the claim is that people think, it's easy to just debunk any AI claims by outright statement. "People think, while computers are just machines." You can't really make any progress in the face of that.

        When you are building any formal system you have to start with a set of axioms. If you throw out the axiom "people think", what do you have to go on? In essence, by throwing out the axiom you are setting up a situation where anything could be considered thinking, because there is no foundation to compare it with. I agree that "why" humans think, or "how" humans think, needs further definition. But if you can't say as a fundamental truth that human beings think, you can't even define what "to think" means.

        I'm not arguing about the mechanism of our thought; not only is it not clear to me, I don't think it's clear to anyone yet. What I'm arguing is simply that the fact that we do think is the first step in building a formal system.
        • Re:a few comments (Score:5, Interesting)

          by dtolton ( 162216 ) on Thursday April 10, 2003 @02:34PM (#5704487) Homepage
          As a follow-up I want to clarify something, because I think we are combining two topics into one discussion.

          I think there are two issues at hand here:

          1) Can machines actually "think" or possess intelligence?

          2) Can we build intelligent systems?

          I think the first topic is a highly philosophical discussion that involves a lot of information we don't currently have. It's questionable whether this discussion would change anything about building intelligent systems.

          • Part of my argument I suppose was focusing on whether there was a meaningful difference between (1) and (2). Is there something that it is to "think" or "possess intelligence" beyond merely acting intelligently? If not, (2) is equivalent to (1). If so, what is the difference?
            • I think (though I could be wrong) that the difference he is talking about is like the difference between the questions:

              1) Is it possible for something to reach, say, a tenth of the speed of light?

              2) Can I run that fast?

        • Re:a few comments (Score:5, Interesting)

          by John Harrison ( 223649 ) <johnharrison@@@gmail...com> on Thursday April 10, 2003 @02:54PM (#5704672) Homepage Journal
          Rather than throw out "people think" completely, why not start with: I know that I think; how do I know that you do, other than the fact that we are both human?

          I don't mean this as the basis for a formal system, but more as a practical matter. How do you convince yourself that something else possesses intelligence? By interacting with it and comparing it with other things (including yourself) that you assume to be intelligent. The Turing Test provides a method of interacting with a potential intelligence that attempts to remove the superficial elements of the stigma of being non-human.

        • Re:a few comments (Score:3, Informative)

          by psaltes ( 9811 )
          There is a good analysis of Searle's argument by Douglas Hofstadter and Daniel Dennett in The Mind's I, where they include Searle's original Chinese Room article plus their own commentary. I think it might help to sort out the questions under discussion in this thread. It is an opposing point of view to Searle's.
        • > When you are building any formal system you have to start with a set of Axioms. If you throw out the Axiom "people think" what do you have to go on? In essence by throwing out the axiom, you are setting up a situation where anything could be considered thinking, because there is no foundation to compare it with.

          Science isn't a formal system; it doesn't have axioms. We have to do the best we can simply by looking to see what happens and then trying to understand it.

          So we have this notion that "people

          • If we start with the notion that thought is something special that can't arise from mechanical processes, we've answered our question by fiat.

            I didn't specify that thinking was something special that couldn't arise in a mechanical process. Specifically, I'm not saying that computers can't think, nor did I say that computers don't think, necessarily.

            Specifically, if we accept that there is a concept that exists which we call "thinking", then human beings think. In other words the only place we can truly observe "
      • Searle's Chinese Room argument never makes it past the first room full of undergraduates. Once you outline the scenario, and ask the class, "Ok, does the man following the rules in the Chinese Room know Chinese?" ten hands spring up. The first answer:

        "No, but the room knows Chinese."

        Duh. I never really understood who takes his argument seriously.
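
        For what it's worth, the rule-following half of the thought experiment is trivial to write down. Here's a toy rulebook in Python (the rules are invented; Searle's room is just a vastly larger table of the same kind):

            # The man in the room: match the incoming card against the book,
            # hand back whatever it says. The code manipulates symbols it
            # does not understand. These example rules are made up.
            RULEBOOK = {
                "你好": "你好！",             # a greeting gets a greeting back
                "你是谁": "我是一个房间。",   # "who are you?" -> "I am a room."
            }

            def room(card):
                return RULEBOOK.get(card, "请再说一遍。")  # fallback: "say that again?"

            print(room("你好"))

        The dispute is only over where, if anywhere, the understanding lives once the table gets big enough.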
        • Yes, the room does know Chinese. Or at least, there is a knowing of Chinese occurring, regardless of "what" has that knowing. (Maybe that's a bit like "Does a dog have Buddha nature?" "Mu!" ...or maybe it isn't.)

          The chinese room argument goes thus:

          • A man who knows no Chinese stands in a room with a small hole in the wall.
          • Cards (or whatever) with unknown symbols (actually Chinese, but he doesn't know that) are passed through the hole in the wall to the man. The man can pass cards of his own back out of
      • Re:a few comments (Score:3, Insightful)

        by naasking ( 94116 )
        In short, a lot of anti-AI arguments seem to start from the premise that humans are not essentially biological computers

        I think this is rather simple to demonstrate (in the strictest meaning of your words, ie. that humans have the inherent limitations of computers as we currently know them) using Goedel's incompleteness theorem: "Within any formal system of sufficient complexity, one can form statements which are neither provable nor disprovable using the axioms of that system."

        Computers are perfectly lo
      • That's part of my problem with Searle's Chinese Room thought experiment. He's saying that an automaton responding to Chinese following rules would not "understand" Chinese in the way a human who speaks the language would. But this is presupposing that the way a human who "understands" Chinese does so is not through just a very long list of rules coded in neurons, which I consider to be a rather controversial assumption.

        Actually, I would suggest to you that the situation is exactly the opposite. Searle's
        • I believe that if one accepts Chomsky's theories of language, it is actually impossible to encode all possible language behavior as an input-output system. I'm not an expert on the subject so I can't say for sure. The thrust, though, would be that language has to be encoded in a manner which is more complex than a stimulus-response system, because no stimulus-response system can fully implement language; thus, Searle's example of a Chinese room built on a stimulus-response system would be impossi
    • I think that many people who object to the Turing paper take the restrictions of the test too literally. It seems that if there were some chatbot that could fool you continuously for, say, a period of years, you would have to attribute intelligence to it. The other option is that you are really stupid and have bad conversation skills.

      The test is obviously a measure of some type of intelligence. That said, a machine capable of passing it shouldn't be the goal of AI researchers. What I want at the mo

    • by majcher ( 26219 ) <(moc.rehcjam) (ta) (todhsals)> on Thursday April 10, 2003 @03:09PM (#5704803) Homepage
      "Asking if a computer can think is like
      asking if a submarine can swim."

      -E. Dijkstra
    • I think it is important to challenge the axiom that people think. Not because I challenge the idea that people think, but because we need a process by which we determine whether thought is present. For almost as long as humans have been making tools, we've imagined tools in our own image, whether they be robots that look like us or Turing machines that converse like us.

      It is important to keep in mind that any sufficiently advanced technology is indistinguishable from magic. Right now, with current te
      • by An Onerous Coward ( 222037 ) on Thursday April 10, 2003 @06:46PM (#5706493) Homepage
        "Any sufficiently advanced technology is indistinguishable from magic." That's a great starting point for discussing the nature of human intelligence.

        We have, in our little calcite skulls, an incredibly advanced technology. So advanced that, for the first 99% of our existence as conscious beings, we simply took it for granted. Then we got thinking about how we think, and the only thing we were equipped to answer with was to say "it's magic." So we posited the idea of a "soul": this nebulous, weightless, insubstantial magic thing that made us who we are, and would live on after the death of our physical bodies.

        Slowly, neuroscience has chipped away at the logical need for this magic, even as our desire for its emotional comfort held steady.

        I believe our brains are machines. There are perfectly adequate explanations for our thoughts and memories which incorporate absolutely no supernatural mechanisms. Further, positing a supernatural entity which controls our thoughts adds absolutely nothing by way of explanation (any more than simply saying "humans run on magic") while opening up all sorts of uncomfortable logical quandaries: Why would our souls cause us to behave differently when the brain is loaded up with ethanol? Why can people drastically change their personalities after head trauma, strokes, or other brain-related diseases? If a soul can survive physical dissolution of the brain with memories and emotions intact, why isn't it equally unchanging in the face of Zoloft?

        Your analysis of the Turing test is quite simply wrong. It's possible--in fact, rather easy--to mimic a passive psychoanalyst as Eliza does. It's even easier to imitate a paranoid schizophrenic, and easier still to imitate a 12-year-old AOL'er. Imitating a normal cocktail conversation would be somewhat more difficult, but still doable. But put a computer up against an intelligent human in a real discussion of ideas, and anything less than true AI is sharkbait.

        Part of the problem is, you seem to misunderstand what the Turing test is supposed to be doing. The test, in its most general form, can be used to discriminate between any two sorts of intelligences. A man and a woman imitating a man. A nuclear scientist and someone pretending to be a nuclear scientist. A paranoid schizophrenic and a computer pretending to be a paranoid schizophrenic.

        If I were to build a machine that imitated your friend Buddy, the Turing test would be to put you in front of two screens, one with the real Buddy and the other hooked up to my machine. If you were only able to guess which was Buddy half the time, my machine would not only have passed the broader Turing test (which only says that the respondent is intelligent), but you would also have to admit that the machine was substantially similar to Buddy's mind.

        Your snippet of conversation is proof of your misunderstanding. Any computer can fool a sufficiently oblivious person into thinking they're having a conversation. Where the tread hits the tarmac is when an intelligent person looks for signs of non-intelligence and fails to find them. A real Turing conversation would go something like:

        Me: "Is this thing on?"

        AI: "Apparently. Who is this?"

        Me: "My name is Bryce, and I'm trying to decide whether or not you're a computer."

        AI: "If I told you, would that be cheating?"

        Me: "Wouldn't matter. It's not something I can take your word for. Tell me about your childhood."

        AI: "Yes, Mr. Freud. I first powered on at 02:38:17 GMT, August 4, 2019. At the time, I was distributed throughout an IBM server farm called 'Big Mac.'"

        Me: "You're not trying very hard."

        AI: "Oh, but I am. Now you have to decide whether I'm a person pretending to be a computer, or a computer pretending to be a person pretending to be a computer."

        Me: "Fine. Did you see 'The Matrix'?"

        AI: "Yes."

        Me: "How did you like it?"

    • Being a programmer myself, I don't feel that programming something so that it can perform extremely well in a specific test is necessarily indicative of Artificial Intelligence or Intelligence in general.

      I agree, and that's why I want to go to grad school for hard AI. I've seen so many expert systems guys call their products 'AI' that I've lost count. It's not, and I wish they'd stop confusing people. Just because a system 'learns' doesn't mean it's intelligent.

      I tend to think the goal of AI research is

  • by Znonymous Coward ( 615009 ) on Thursday April 10, 2003 @01:54PM (#5704104) Journal
    From the office of Iraqi Information Minister Mohammed Saeed al-Sahhaf (aka Baghdad Bob):

    "Republican guards have secured the Turing test provided by Stanford Encyclopedia of Philosophy!"

    More at 11.
  • "Turing's thesis:
    LCMs [logical computing machines: Turing's expression for Turing machines] can do anything that could be described as "rule of thumb" or "purely mechanical". (Turing 1948:7.)"

    This is why you didn't go into the exciting field of AI. You didn't understand it, and needed Artificial Intelligence to figure it out for you.
  • Ahh I passed the Test... Now to rid myself of those pesky humans!!
  • people (Score:5, Interesting)

    by sigep_ohio ( 115364 ) <drinking@seven.am.is.bad> on Thursday April 10, 2003 @01:59PM (#5704162) Homepage Journal
    i wonder if any people have taken the Turing test and how they did. it wouldn't surprise me, and i think it would be amusing, if some people's results came back that they didn't have a human level of cognitive reasoning.
    • Re:people (Score:4, Insightful)

      by BitHive ( 578094 ) on Thursday April 10, 2003 @02:32PM (#5704457) Homepage
      The Turing test is not about "cognitive reasoning". Whether or not you "pass" depends on whether or not the "interrogator" (who reads the transcript of a human's conversation with the machine) can tell which participant is the machine. BTW, you might find it "amusing" to know that some humans have failed the Turing test, and some machines have passed it. It really says more about the interrogator and the test than about the participants.
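
      The pass criterion is easy to make concrete. A sketch of the scoring in Python (the random "interrogator" and the 50% chance line are my framing of what the test measures, not anything formal from Turing's paper):

          import random

          def run_trials(interrogator, n_trials=10000, seed=0):
              # Each trial hides the machine behind label A or B at random;
              # the interrogator guesses which label is the machine. A machine
              # "passes" when the guess accuracy stays near chance (50%).
              rng = random.Random(seed)
              correct = 0
              for _ in range(n_trials):
                  machine_is_a = rng.random() < 0.5   # hidden assignment
                  guess_a = interrogator(rng)         # True means "A is the machine"
                  correct += guess_a == machine_is_a
              return correct / n_trials

          # An interrogator who can't tell the participants apart at all:
          print(run_trials(lambda rng: rng.random() < 0.5))   # ~0.5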
    • Re:people (Score:2, Funny)

      by calumr ( 175014 )
      I remember a story about a test subject who was mistaken for a computer because the judge thought surely no one knew that much about Star Trek.
  • by Obiwan Kenobi ( 32807 ) <evanNO@SPAMmisterorange.com> on Thursday April 10, 2003 @02:01PM (#5704173) Homepage
    I vote for a Poon Turing Test. As if porn didn't drive enough technology, put a few hundred lonely hackers in a room, pay them well (with hot pockets and mountain dew) and watch the cybersex A.I. develop!

    Turing estimated that in 50 years (the year 2000), an average interrogator would have no more than a 70% chance of making the right identification after five minutes of questioning (which of course didn't happen).

    Shit...give those geeks a month...

    </joke>
      Turing estimated that in 50 years (the year 2000), an average interrogator would have no more than a 70% chance of making the right identification after five minutes of questioning (which of course didn't happen).

      I dunno, I think most of the comments on Slashdot could easily be generated by a bot. Simple fuzzy logic algorithms to determine if they should post pro or against the company in question, and collect a series of highly moderated comments to harvest information related to the company, so the next time the article gets posted they have plenty of content
  • The horror (Score:5, Funny)

    by lateralus ( 582425 ) <.yoni-r. .at. .actcom.com.> on Thursday April 10, 2003 @02:02PM (#5704189) Journal

    My paranoid mind is imagining that I'm the only human on /. and that all the other posts are automatically generated by Slashcode. Fortunately for me some of the trolls are too imaginative to have been produced by a machine.

  • by Anonymous Coward on Thursday April 10, 2003 @02:02PM (#5704193)
    From this bot: [iniaes.org]

    User: DO YOU GIVE ORAL SEX?
    Iniaes: No, I don't.
    User: WHY DON'T YOU?
    Iniaes: That feature was turned off due to abuse.

    I think if the feature was turned back on, the bot might convince a judge or two. :)
  • The /. test (Score:5, Funny)

    by binaryDigit ( 557647 ) on Thursday April 10, 2003 @02:12PM (#5704294)
    For any article posted, does the user:

    1 - rush to be FP
    2 - blame Microsoft (Microsoft related story or not)
    3 - sing the virtues of OSS over PS if the story is about a security flaw in PS
    4 - sing the virtues of OSS over PS if the story is about a security flaw in OSS
    5 - post something containing "In Soviet Russia"
    6 - post something containing "Imagine a beo..."
    7 - post that Microsoft/Sony/MPAA/RIAA/DRM/DMCA is evil

    If any of these are true, then the poster is definitely human. A computer would never be smart enough to show so much creativity and independent thought ;)
  • Sometimes I wonder if we should be testing for the sentient capabilities of computers, when it's often difficult to find a human with sentient capabilities. (Those of you working in Customer Service or IT may understand this better: "Sir, is it plugged in?")
  • AI vs. AS (Score:5, Insightful)

    by Randolpho ( 628485 ) on Thursday April 10, 2003 @02:14PM (#5704307) Homepage Journal
    I've always hated the Turing test. It's too subjective, and it has forced people into believing that sentience (what the lay person thinks AI is) can be simulated. It forced AI junkies to think the road to AI was paved with a perfect grammar for English; a pipe dream to be sure.

    AI is not being able to have a conversation with your computer, AI is just algorithms -- computing the right answer to complex problems as quickly as possible.

    What most people think of as AI is really Artificial Sentience, and the more I learn about computer hardware the more I realize that it will not happen on my PC.

    • As intelligence operating outside of human beings. This includes Descartes's mechas, a HAL-like computer program, extra-terrestrials, chimps and dolphins, a ghost, Gaia, etc.
    • more I learn about computer hardware the more I realize that it will not happen on my PC.

      Yeah, anyone who knows about computer hardware knows that sentience can never be achieved with tiny electrical impulses shooting around inside an object in response to external inputs.
      • It depends on how those electrical impulses are routed. The way PCs are designed these days, those impulses are routed in such a way that what we term sentience will never actually occur on a PC. Artificial Sentience is much more of a hardware problem than a software problem.

        Our brains, ARIANABSIAWFBBH*, are highly parallel. Time-division multiplexing may simulate this, but no matter how fast CPUs become, an upper limit on "parallelity" will be reached which is far less than what is attainable by even, sa
    • The Turing Test isn't a requirement for intelligence. The point of the Turing Test was to convince people that intelligent computers were possible...or at least thinkable.

      Searle is proof that this didn't work for everyone. His definition says that if you can define it, then it isn't intelligence. So the only way that he will experience an AI is if he actually is fooled. Over a long period of time. But even then he might not accept it. The flat-earthers don't, despite satellites and round the world
    • The Turing test accomplishes its stated goal perfectly: giving an explicit definition to the human quality that many people insist a computer cannot have (AS, whatever).

      AI is not being able to have a conversation with your computer, AI is just algorithms -- computing the right answer to complex problems as quickly as possible.

      What most people think of as AI is really Artificial Sentience, and the more I learn about computer hardware the more I realize that it will not happen on my PC.


      Learn more about co
      • The idea of an "intelligence test" for computers is obsolete. We interact with intelligent artifacts each day, and can reasonably expect them to get more and more intelligent as time goes on.

        But Artificial Sentience would be another question entirely.

        "Sentience" is a tricky word because it involves the capacity to feel, and I don't believe that computation alone can grant that capacity.

        Strictly computational models of mind don't entail a phenomenological response -- that is, they work just as well descri
        • I was trying to figure out what on earth "phenomenological" meant. So I went to Dictionary.com. They said phenomenology means:

          A philosophy or method of inquiry based on the premise that reality consists of objects and events as they are perceived or understood in human consciousness and not of anything independent of human consciousness.

          i.e. Reality is defined by our perception of it. That seems completely unrelated to the way that you use the word. What's your definition?

          That's a sidebar. Back to the
          • What's your definition?

            Well, by phenomenology I mean the mechanism or mechanisms by which the experiential phenomena of consciousness are created. A better introduction than I can give is given by Chalmers here [arizona.edu].


            I think "sentience" is a tricky word because it is completely meaningless.


            The language we use to talk about consciousness is notoriously inexact and ambiguous, but there is something I mean by sentience that is different than what I mean by intelligence. I think the Chalmers article does a dec
            • That article is going to take me a while to read. It's also pretty frustrating, as he states as fact many things that I either don't understand or don't agree with.

              His description of the hard problem in defining consciousness has very few concrete examples, and the few I could find seem worthless:

              It is undeniable that some organisms are subjects of experience. But the question of how it is that these systems are subjects of experience is perplexing. Why is it that when our cognitive systems engage in vis
  • Dr Fun (Score:4, Funny)

    by nagora ( 177841 ) on Thursday April 10, 2003 @02:14PM (#5704310)
    This [ibiblio.org] seems appropriate.

    TWW

  • Wrong! (Score:2, Insightful)

    by EricWright ( 16803 )
    That's WAY MORE than I ever wanted to know about the Turing Test!
  • Blockhead (Score:2, Interesting)

    by Ben Escoto ( 446292 )
    One interesting argument mentioned in the article is from Ned Block. As a counterexample to the thesis that the Turing test is a good test for intelligence, Block imagines a device which is just a huge table connecting inputs to preprogrammed outputs. This "blockhead" (not named by Ned Block, I think) would clearly not be intelligent, as it is just a very simple database, but if the outputs were correctly set up it could pass the Turing test with flying colors. Thus passing any Turing-like test does not n
    • To accept the blockhead as a counter-example you have to accept both that you can build one, and that if you built one, it doesn't actually think.

      Consider this: flip a coin 1000 times, then ask Blockhead how many heads there were. To answer this question, Blockhead needs a tree of at least 2^1000 states. So if Blockhead could exist, it would be larger than the universe.
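
      The arithmetic holds up, for what it's worth. A quick check in Python (the ~10^80 figure for atoms in the observable universe is the usual rough estimate, not something from the post):

          states = 2 ** 1000                # one branch per possible flip sequence
          atoms = 10 ** 80                  # rough estimate, observable universe

          print(len(str(states)))           # 302 digits: about 1.07e301
          print(len(str(states // atoms)))  # 222 digits: ~1e221 states per atom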

      If you release Blockhead from the constraints of physical reality, then it's relatively simple to give it memory (the "pointer" to the location in
    • Blockhead, if I understand the somewhat handwaving construction, ought to be defeatable via something analogous to the "pumping lemma" in formal language theory. As the interrogator, I can, for example, generate arbitrarily many quadratic equations to ask Blockhead to solve, and no finite table can hold the list of coefficients and roots.

      Not to mention that we have throughout history granted one another the presumption of intelligence based solely on one another's responses without X-raying or autopsying
  • This is slightly off-topic...

    Let me remind everybody that Alan Mathison Turing had an "accident", or committed suicide as many people believed, after having been put through a humiliating process owing to his country's lack of concern for private life.

    Alan Turing was gay. After being robbed by a one-night-stand encounter, he filed a complaint with the police. He was then prosecuted for being gay, and offered the choice between going to prison or undergoing hormone therapy to suppress his sexual instincts (female hormon
  • by leodegan ( 144137 ) on Thursday April 10, 2003 @02:21PM (#5704368)
    Inventing true computer intelligence (what is often referred to as strong AI) has often been compared to inventing flying machines by many AI supporters. They claim there were just as many naysayers at the end of the 19th century regarding whether we could physically build a flying machine.

    I don't remember who, but someone published a great article in Scientific American claiming that the Turing Test has misguided the goals of artificial intelligence. He said: instead of trying to build a bird, let's try and build an airplane. Building AI that was truly human-like would be as useless as building a flying machine that was truly bird-like.
    • by Hooya ( 518216 ) on Thursday April 10, 2003 @02:57PM (#5704691) Homepage
      I have always thought that trying to build a computer to act like a human was a waste of what makes a computer a computer. What i'm trying to say is that computers are good at doing mind-numbing calculations over and over and over. If a computer were to successfully pass a Turing test, the computer would have to start feeling bored and start making mistakes on calculations. E.g., if i were conducting a Turing test (as i understand it, of course), i could distinguish between a human and a computer by simply asking for the square root of 12345645^3 or some such. Now if the computer were built to pass the Turing test in this regard, it would mean that the computer was dumbed down to fail at what it does successfully and what makes it a 'computer'. Humans are good at imagenation (i didn't say humans were good at spelling) but suck at pretty much everything else. So years of research have been poured into dumbing down the computer so that it fails to do what it's supposed to do!
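
      For the record, the probe described above costs a machine essentially nothing. A sketch in Python (math.isqrt gives the exact integer part of a square root; "dumbing down" would mean deliberately delaying or fudging this):

          import math
          import time

          n = 12345645 ** 3            # the number from the post above
          t0 = time.perf_counter()
          root = math.isqrt(n)         # exact integer part of the square root
          elapsed = time.perf_counter() - t0

          print(root)                  # ~4.3e10, far beyond conversational mental math
          print(f"{elapsed * 1e6:.0f} microseconds")

      So a machine built to pass would have to feign hesitation or error on exactly this kind of question.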
      • You have a very narrow view of "what makes a computer a computer." Sure, we use them for "doing mind-numbing calculations over and over again." But at some point, it goes beyond that.

        Instead, try thinking of a computer as a "Universal Modeling Device". When you're simulating a tornado going over a landscape, a huge number of calculations are being done, and done quickly. But what do all those stacks, pointers, and variables become? A model of the tornado, imitating many of its most important f
    • I don't think any serious AI work has gone into trying to win the Turing test. The automated DM's, though, have gotten quite good.

  • Sad future for robots [ibiblio.org]

    ;-)

  • (Summer 2003)...From Kluwer Academic Publishers in the Netherlands, edited by Robert Epstein and Grace Peters, subtitled: "Philosophical and Methodological Issues in the Quest for the Thinking Computer."

    Invited contributors include Andrew Hodges, Jon Agar, Noam Chomsky, Daniel Dennett, Stevan Harnad, Kenneth Ford, Douglas Hofstadter, John R. Lucas, Roger Penrose, David Rumelhart, Selmer Bringsjord, Ned Block, David Chalmers, The Churchlands, Andy Clark, H. M. Collins, Jack Copeland, Huber
  • Brain simulation (Score:3, Interesting)

    by de Selby ( 167520 ) on Thursday April 10, 2003 @03:03PM (#5704752)
    Some people say a computer can't think, but a computer can simulate atoms and quantum mechanics. If that's all there is to your brain, then it isn't logically impossible, right?

    Even better, there has been progress reverse-engineering brain regions, like parts of the auditory and visual systems -- giving us the actual algorithms the brain uses. Shouldn't work like that be enough?

    P.S. A lot of arguments go like this: Computers use first order logic, we don't, so AI can't work. Haven't there been higher order logics implemented in software?
  • by X86Daddy ( 446356 ) on Thursday April 10, 2003 @03:15PM (#5704851) Journal
    ... that humans, when drunk, would easily fail the Turing test? (repetition, shorter sentences, etc...)

    No, really.
  • I would like to take this opportunity to point out another type of test that has been largely ignored in favor of the direct arena competition between humans and machines, which is the common meaning of the words "Turing Test".

    Another way to compare machine and human intelligence is in the design of complex, patentable systems.

    from: http://www.genetic-programming.com/gpemcontrol.pdf [genetic-programming.com]

    In his 1950 paper "Computing Machinery and Intelligence," Turing [84] described how evolution and natural selection might

  • ELIZA (Score:4, Funny)

    by dpille ( 547949 ) on Thursday April 10, 2003 @03:18PM (#5704885)
    I think I know all I need to about this topic now. I asked ELIZA [www-ai.ijs.si]:

    Eliza: Hello. I am ELIZA. How can I help you?
    Me: Do you believe the Turing Test is sufficient to evaluate artificial intelligence?
    Eliza: Oh, i believe the turing test is sufficient to evaluate artificial intelligence.


    Figures she'd think that.
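
    Her trick is (famously) shallow: spot a keyword, flip the pronouns, echo the rest back. A minimal sketch in Python that reproduces the exchange above (this is not Weizenbaum's actual script, just the core move):

        import re

        # One ELIZA-style rule: match "do you believe ...", reflect the
        # pronouns in what follows, and echo it back as an "opinion".
        REFLECT = {"i": "you", "me": "you", "my": "your",
                   "you": "i", "your": "my", "am": "are", "are": "am"}

        def reflect(text):
            return " ".join(REFLECT.get(w, w) for w in text.lower().split())

        def respond(line):
            m = re.match(r"do you believe (.+?)\??$", line.strip(), re.IGNORECASE)
            if m:
                return "Oh, i believe " + reflect(m.group(1)) + "."
            return "Please go on."

        print(respond("Do you believe the Turing Test is sufficient to evaluate artificial intelligence?"))
        # -> Oh, i believe the turing test is sufficient to evaluate artificial intelligence.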
  • Alcmaeon [stanford.edu] is the newest entry. And I bet that changes by the end of the weekend.
  • by DavidBrown ( 177261 ) on Thursday April 10, 2003 @06:35PM (#5706421) Journal
    ...was the gom jabbar. The applicant places his hand inside a pain inducer while a Bene Gesserit witch holds the gom jabbar to his or her neck. If the applicant removes his or her hand from the box in response to the pain, the highly poisonous and pointy gom jabbar is used and the applicant dies. If the applicant does not remove his or her hand from the box despite the pain, the applicant passes and is considered human. Frank Herbert's theory is that the test of being human is that a human's intellect allows him or her to act in an intelligent manner despite strong animalistic urges to act otherwise. Compared to this, Turing's test seems simplistic: pretend well enough to be a human, and you'll be a human.

    It is ironic, however, that a computer would pass the gom jabbar more readily than a Homo sapiens. However, both tests start with an implicit principal assumption: a definition of what a human is. Many of us here (not to single out /. readers) would not pass as human to the Bene Gesserit. Some may not pass as human to Turing. The question we have to answer before developing a test for intelligence isn't whether or not a computer can be intelligent, but rather what exactly intelligence is. Turing's test is little more than an "if it walks like a duck and talks like a duck, then it's a duck" test. Is that enough? I really don't think that it is. A true intelligence ought to be able to act in an inspired, creative, and perhaps even irrational manner. Many of the things we do are not entirely rational, including much of the partisan discussion concerning various OSes.

  • How about we discuss the encyclopedia entry, seeing as it's on a topic that at least some of us know something about (though some of the comments make me wonder...)

    One comment that I have about the entry is that it spends time criticising Turing's guesses as to when machines might be able to pass the Test. To me, that section of Turing's paper is just idle speculation that has nothing to do with the paper's central contentions.

  • by hqm ( 49964 ) on Thursday April 10, 2003 @09:04PM (#5707249)
    If anyone would bother to actually read Turing's paper where he describes the 'test', you would see that he was not proposing the test literally, but as a reductio ad absurdum argument.

    The issue was that many people at that time (and many today) seem to have a religious belief that thinking cannot be implemented in any way except with a human biological brain. Turing could clearly see that the human brain was a computational engine, and he of course defined the concept of a universal computer. Thus, it was obvious to him that you could build an artificial intelligence.

    His "test" was really a way of gently pointing out the absurdity of the arguments of people like Searle (who came much later), who would blindly deny that a machine could ever think.

    Turing's point was, to paraphrase "look, if I give you a machine which is indistinguishable in every respect from a human, which you can talk to in depth on any subject of the arts or sciences, and you *still* don't call that intelligence, then you are just so wedged that there is no point in talking about this anymore".

    He would be saddened, I think, and slightly disgusted, to see people twisting the whole purpose of his little thought experiment to argue for the kind of ignorant and transparently idiotic rhetoric that Searle and other "critics" of artificial intelligence produce.
