Everything You Want to Know About the Turing Test
An anonymous reader writes "Everything you want to know about
the Turing test provided by
Stanford Encyclopedia of Philosophy. It is their
latest entry."
Anti-Turing (Score:5, Interesting)
Cramming for your Turing test... (Score:4, Funny)
How about a real link... (Score:2)
Re:Anti-Turing (Score:2, Insightful)
In other words, it's a sort of anti-Turing test. I would think that a system using plenty of misspelled words, like the above paragraph, could easily fool a computer but still be understandable by humans, and could make a good captcha.
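For what it's worth, a quick sketch of how such a misspelling captcha might be generated. This is purely my own illustration (the function names and the swap-adjacent-interior-letters rule are invented for the example, not any real captcha scheme):

```python
import random

def misspell(word, rng):
    # Swap two adjacent interior letters, keeping the first and last
    # letters in place; humans read such words easily, while a naive
    # dictionary-based parser stumbles.
    if len(word) < 4:
        return word
    i = rng.randrange(1, len(word) - 2)
    letters = list(word)
    letters[i], letters[i + 1] = letters[i + 1], letters[i]
    return "".join(letters)

def make_captcha(sentence, rng=None):
    rng = rng or random.Random()
    return " ".join(misspell(w, rng) for w in sentence.split())

print(make_captcha("please retype this scrambled sentence", random.Random(42)))
```

A real deployment would of course need far more than this; the point is only that the transformation is trivial to produce and (per the parent's argument) cheap for humans to undo.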
<sarcasm>Oh great. That's all we need - more spelling mistakes online. Children today are going to grow up not knowing how to spell!</sarcasm>
From a comment below it:
If you were not a fluent English speaker then you would have a great deal of difficulty
Re:Anti-Turing (Score:2, Funny)
Wow, so the anti-Turing test is basically l33t.
"3y3 R h4XX0rZ U HAHAHAHAAHA LOL!"
All you would need to run the test would be a 12-year-old who just drank 5 bottles of Mountain Dew.
Re:Anti-Turing (Score:2, Insightful)
uuuuuh (Score:4, Funny)
Seriously though... (Score:2, Interesting)
Re:uuuuuh (Score:2)
Was the moderator saying that the stupid post was funny, or that it was funny that the poster was so stupid as to be unable to read Descartes?
Re:uuuuuh (Score:2)
If you assume that they did understand, then the post is funny.
Perhaps the problem is not in their ability to understand.
-- this is not a
Re:uuuuuh (Score:5, Insightful)
just what we need (Score:4, Funny)
Brings new meaning to "Blue Screen of Death"
Re:just what we need (Score:2, Funny)
Re:just what we need (Score:3, Insightful)
Seriously though: Does WinCE have a BSOD? I've run WinCE quite a bit in the last few years, both as a PDA platform and, more so, as a general OS for doing my everyday computing (web browsing, programming [on WinCE, not just for it], SSHing, email, IRC, LaTeX). I have had it crash some, but it's actual
Good Summary of Turing's Position (Score:5, Interesting)
of view. It gives better coverage of the Turing test than I've
read in many AI books.
I tend to agree more with Searle, though, who is cited at the
end of the article: "John Searle argues against the claim that
appropriately programmed computers literally have cognitive
states." Being a programmer myself, I don't feel that
programming something so that it can perform extremely well in a
specific test is necessarily indicative of Artificial
Intelligence or Intelligence in general. I agree with Turing
that the question of "do computers think" is vague enough to be
almost meaningless in a precise sense, but I think we understand
the statement taken as a whole.
I don't particularly agree with this statement in response
to the consciousness argument: "Turing makes
the effective reply that he would be satisfied if he could
secure agreement on the claim that we might each have just as
much reason to suppose that machines think as we have reason to
suppose that other people think." The question isn't whether or
not other people think; that people think is an axiomatic
assumption when investigating intelligence, unless you are
investigating existence from a philosophical point of view as
Descartes did. I guess I view AI from a more practical point of
view. I am by no means an expert in AI, but I tend to think the
goal of AI research is to produce systems that can learn and
react appropriately in situations they were never programmed to
handle or necessarily anticipate. If that isn't the goal of AI
research, what separates it from writing programs on a large
scale?
As a whole I found the article to be a good presentation of
Turing's position, although I have a few philosophical
differences with that position.
a few comments (Score:5, Interesting)
That's part of my problem with Searle's Chinese Room thought experiment. He's saying that an automaton responding to Chinese following rules would not "understand" Chinese in the way a human who speaks the language would. But this is presupposing that the way a human who "understands" Chinese does so is not through just a very long list of rules coded in neurons, which I consider to be a rather controversial assumption.
In short, a lot of anti-AI arguments seem to start from the premise that humans are not essentially biological computers; with that premise, of course you can debunk AI. A lot of AI researchers have grown tired of the argument entirely, and instead of responding to the arguments, have just resorted to saying "ok fine, you're right, we can't make 'really' intelligent computers, but what we can do is make computers that do the same thing an intelligent person would do, which is good enough for us." The idea here being that if a computer can eventually diagnose diseases better than a doctor, pilot a plane better than a pilot, translate Russian better than a bilingual speaker, and so on, it doesn't really matter if you think it's "really" intelligent or not, because it's doing all the things an intelligent thing would do.
As a final comment, I'd agree that AI is not that fundamentally different from large software systems. The difference is basically one of focus -- AI has been focusing on what it means to "act intelligently" for decades, whereas much CS and software engineering was focused on lower-level details (like how memory or register allocation works). At one point, the division was clearer -- AI people did stuff like write checkers programs that learned from their mistakes, which was not something any CS person outside AI would do. The fields are increasingly blending, and a lot of stuff from engineering disciplines like control logic (how to "intelligently" control chemical plants, for example) is overlapping with AI research. Part of this is because a lot of AI ideas have actually matured enough to become usable in practice.
Re:a few comments (Score:5, Interesting)
I think the axiomatic assumption that people think is part of the problem. If we cannot say why we claim that people think, it's easy to debunk any AI claim by outright statement: "People think, while computers are just machines." You can't really make any progress in the face of that.
When you are building any formal system you have to start with a set of axioms. If you throw out the axiom "people think," what do you have to go on? In essence, by throwing out the axiom you are setting up a situation where anything could be considered thinking, because there is no foundation to compare it with. I agree that "why" humans think, or "how" humans think, needs further definition. But if you can't say as a fundamental truth that human beings "think," you can't even define what "to think" means.
I'm not arguing about the mechanism of our thought; not only is it not clear to me, I don't think it's clear to anyone yet. What I'm arguing is simply that the fact that we do think is the first step in building a formal system.
Re:a few comments (Score:5, Interesting)
I think there are two issues at hand here:
1) Can machines actually "think" or possess intelligence?
2) Can we build intelligent systems?
I think the first topic is a highly philosophical discussion that involves a lot of information that we don't currently have. It's questionable if this discussion would change anything about building intelligent systems.
is there a difference? (Score:2)
Re:is there a difference? (Score:2)
1) Is it possible for something to reach, say, a tenth of the speed of light?
2) Can I run that fast?
Re:a few comments (Score:5, Interesting)
I don't mean this as the basis for a formal system, but more as a practical matter. How do you convince yourself that something else possesses intelligence? By interacting with it and comparing it with other things (including yourself) that you assume to be intelligent. The Turing Test provides a method of interacting with a potential intelligence that attempts to remove the superficial elements of the stigma of being non-human.
Re:a few comments (Score:3, Informative)
Re: a few comments (Score:3, Insightful)
> When you are building any formal system you have to start with a set of Axioms. If you throw out the Axiom "people think" what do you have to go on? In essence by throwing out the axiom, you are setting up a situation where anything could be considered thinking, because there is no foundation to compare it with.
Science isn't a formal system; it doesn't have axioms. We have to do as best we can simply by looking to see what happens and then trying to understand it.
So we have this notion that "people
Re: a few comments (Score:2)
I didn't claim that thinking was something special that couldn't arise in a mechanical process. Specifically, I'm not saying that computers can't think, nor did I say that computers necessarily don't think.
Specifically, to accept that there is a concept which we call "thinking" is to accept that human beings think. In other words the only place we can truly observe "
Re:a few comments (Score:3, Insightful)
"No, but the room knows Chinese."
Duh. I never really understood who takes his argument seriously.
The real problem with the Chinese Room (Score:3, Interesting)
The chinese room argument goes thus:
Re:a few comments (Score:3, Insightful)
I think this is rather simple to demonstrate (in the strictest meaning of your words, ie. that humans have the inherent limitations of computers as we currently know them) using Goedel's incompleteness theorem: "Within any formal system of sufficient complexity, one can form statements which are neither provable nor disprovable using the axioms of that system."
Computers are perfectly lo
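For reference, the theorem being invoked is usually stated along the following lines (my paraphrase of the standard first incompleteness theorem, not a quote from the parent's source):

```latex
\textbf{Theorem (G\"odel, 1931).} Let $T$ be a consistent, recursively
axiomatizable theory that interprets elementary arithmetic. Then there
is a sentence $G_T$ in the language of $T$ such that
\[
  T \nvdash G_T \qquad \text{and} \qquad T \nvdash \neg G_T .
\]
```

Whether this constrains machines any more than it constrains humans is, of course, precisely the point under dispute.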
Re:a few comments (Score:2)
Actually, I would suggest to you that the situation is exactly the opposite. Searle's
the Chinese room may be impossible then (Score:2)
one problem with the Turing test though (Score:2)
Re:Good Summary of Turing's Position (Score:2)
The test is obviously a measure of some type of intelligence. That said, a machine capable of passing it shouldn't be the goal of AI researchers. What I want at the mo
Re:Good Summary of Turing's Position (Score:5, Interesting)
asking if a submarine can swim."
-E. Dijkstra
Re:Good Summary of Turing's Position (Score:3, Interesting)
It is important to keep in mind any sufficiently advanced technology is indistinguishable from magic. Right now, with current te
Re:Good Summary of Turing's Position (Score:5, Insightful)
We have, in our little calcite skulls, an incredibly advanced technology. So advanced that, for the first 99% of our existence as conscious beings, we simply took it for granted. Then we got thinking about how we think, and the only thing we were equipped to answer with was to say "it's magic." So we posited the idea of a "soul": this nebulous, weightless, insubstantial magic thing that made us who we are, and would live on after the death of our physical bodies.
Slowly, neuroscience has chipped away at the logical need for this magic, even as our desire for its emotional comfort held steady.
I believe our brains are machines. There are perfectly adequate explanations for our thoughts and memories which incorporate absolutely no supernatural mechanisms. Further, positing a supernatural entity which controls our thoughts adds absolutely nothing by way of explanation (any more than simply saying "humans run on magic") while opening up all sorts of uncomfortable logical quandaries: Why would our souls cause us to behave differently when the brain is loaded up with ethanol? Why can people drastically change their personalities after head trauma, strokes, or other brain-related diseases? If a soul can survive physical dissolution of the brain with memories and emotions intact, why isn't it equally unchanging in the face of Zoloft?
Your analysis of the Turing test is quite simply wrong. It's possible--in fact, rather easy--to mimic a passive psychoanalyst as Eliza does. It's even easier to imitate a paranoid schizophrenic, and easier still to imitate a 12-year-old AOL'er. Imitating a normal cocktail conversation would be somewhat more difficult, but still doable. But put a computer up against an intelligent human in a real discussion of ideas, and anything less than true AI is sharkbait.
Part of the problem is, you seem to misunderstand what the Turing test is supposed to be doing. The test, in its most general form, can be used to discriminate between any two sorts of intelligences. A man and a woman imitating a man. A nuclear scientist and someone pretending to be a nuclear scientist. A paranoid schizophrenic and a computer pretending to be a paranoid schizophrenic.
If I were to build a machine that imitated your friend Buddy, the Turing test would be to put you in front of two screens, one with the real Buddy and the other hooked up to my machine. If you were only able to guess which was Buddy half the time, my machine would not only have passed the broader Turing test (which only says that the respondent is intelligent), but you would also have to admit that the machine was substantially similar to Buddy's mind.
Your snippet of conversation is proof of your misunderstanding. Any computer can fool a sufficiently oblivious person into thinking they're having a conversation. Where the tread hits the tarmac is when an intelligent person looks for signs of non-intelligence and fails to find them. A real Turing conversation would go something like:
Re:Good Summary of Turing's Position (Score:3, Insightful)
I agree, and that's why I want to go to grad school for hard AI. I've seen so many expert systems guys call their products 'AI' that I've lost count. It's not, and I wish they'd stop confusing people. Just because a system 'learns' doesn't mean it's intelligent.
I tend to think the goal of AI research is
This just in... (Score:5, Funny)
"Republican guards have secured the Turing test provided by Stanford Encyclopedia of Philosophy!"
More at 11.
Re:This just in... (Score:4, Funny)
Re:This just in... (Score:2, Funny)
"Death to the Infidels"
You know, I'm really going to miss Baghdad Bob's enthusiasm and nightly broadcasts of how Iraqi forces were kicking our coalition asses. I was totally amused with this guy. It was kinda like waiting to see what laughs David Letterman's Top Ten List was going to bring.
Re:This just in... (Score:2)
Need a reminder why you didn't go into AI? (Score:2)
"LCMs [logical computing machines: Turing's expression for Turing machines] can do anything that could be described as 'rule of thumb' or 'purely mechanical'." (Turing 1948:7)
This is why you didn't go into the exciting field of AI. You didn't understand it, and needed Artificial Intelligence to figure it out for you.
Passed the test (Score:2, Funny)
people (Score:5, Interesting)
Re:people (Score:4, Insightful)
Re:people (Score:2, Funny)
Poon Turing Test (Score:5, Funny)
Turing estimated that in 50 years (the year 2000), 70% of people wouldn't have been able to tell they were talking to a computer (which of course didn't happen).
Shit...give those geeks a month...
</joke>
Re:Poon Turing Test (Score:2)
I dunno, I think most of the comments on Slashdot could easily be generated by a bot: simple fuzzy-logic algorithms to determine whether to post for or against the company in question, plus a collection of highly moderated comments to harvest information related to the company, so the next time the article gets posted they have plenty of content
The horror (Score:5, Funny)
My paranoid mind is imagining that I'm the only human on /. and that all the other posts are automatically generated by Slashcode. Fortunately for me some of the trolls are too imaginative to have been produced by a machine.
There are other ways to convince a judge... (Score:5, Funny)
User: DO YOU GIVE ORAL SEX?
Iniaes: No, I don't.
User: WHY DON'T YOU?
Iniaes: That feature was turned off due to abuse.
I think if the feature was turned back on, the bot might convince a judge or two.
The /. test (Score:5, Funny)
1 - rushes to be FP
2 - blames Microsoft (Microsoft related story or not)
3 - sings the virtues of OSS over PS if the story is about a security flaw in PS.
4 - sings the virtues of OSS over PS if the story is about a security flaw in OSS.
5 - post contains "In Soviet Russia"
6 - post contains "Imagine a beo..."
7 - post contains Microsoft/Sony/MPAA/RIAA/DRM/DMCA is evil.
If any of these are true, then the poster is definitely human. A computer would never be smart enough to show so much creativity and independent thought.
Mind in computers? (Score:2)
AI vs. AS (Score:5, Insightful)
AI is not being able to have a conversation with your computer, AI is just algorithms -- computing the right answer to complex problems as quickly as possible.
What most people think of as AI is really Artificial Sentience, and the more I learn about computer hardware the more I realize that it will not happen on my PC.
AI = Alternative Intelligence (Score:2)
Re:AI vs. AS (Score:2)
Yeah, anyone who knows about computer hardware knows that sentience can never be achieved with tiny electrical impulses shooting around inside an object in response to external inputs.
Re:AI vs. AS (Score:2)
Our brains, ARIANABSIAWFBBH*, are highly parallel. Time-division multiplexing may simulate this, but no matter how fast CPUs become, an upper limit on "parallelity" will be reached which is far less than what is attainable by even, sa
Re:AI vs. AS (Score:2)
Searle is proof that this didn't work for everyone. His definition says that if you can define it, then it isn't intelligence. So the only way that he will experience an AI is if he actually is fooled, over a long period of time. But even then he might not accept it. The flat-earthers don't, despite satellites and round-the-world
Re:AI vs. AS (Score:2)
AI is not being able to have a conversation with your computer, AI is just algorithms -- computing the right answer to complex problems as quickly as possible.
What most people think of as AI is really Artificial Sentience, and the more I learn about computer hardware the more I realize that it will not happen on my PC.
Learn more about co
Re:AI vs. AS (Score:2)
But Artificial Sentience would be another question entirely.
"Sentience" is a tricky word because it involves the capacity to feel, and I don't believe that computation alone can grant that capacity.
Strictly computational models of mind don't entail a phenomenological response -- that is, they work just as well descri
Re:AI vs. AS (Score:2)
A philosophy or method of inquiry based on the premise that reality consists of objects and events as they are perceived or understood in human consciousness and not of anything independent of human consciousness.
i.e. Reality is defined by our perception of it. That seems completely unrelated to the way that you use the word. What's your definition?
That's a sidebar. Back to the
Re:AI vs. AS (Score:2)
Well, by phenomenology I mean the mechanism or mechanisms by which the experiential phenomena of consciousness are created. A better introduction than I can give is given by Chalmers here [arizona.edu].
I think "sentience" is a tricky word because it is completely meaningless.
The language we use to talk about consciousness is notoriously inexact and ambiguous, but there is something I mean by sentience that is different than what I mean by intelligence. I think the Chalmers article does a dec
Re:AI vs. AS (Score:2)
His description of the hard problem in defining consciousness has very few concrete examples, and the few I could find seem worthless:
It is undeniable that some organisms are subjects of experience. But the question of how it is that these systems are subjects of experience is perplexing. Why is it that when our cognitive systems engage in vis
Dr Fun (Score:4, Funny)
TWW
Wrong! (Score:2, Insightful)
Blockhead (Score:2, Interesting)
Re:Blockhead (Score:2)
and that if you built one, it doesn't actually think.
Consider this:
Flip a coin 1000 times, then ask Blockhead how many Heads there were.
To answer this question, Blockhead needs a tree of at least 2^1000 states.
So if Blockhead could exist, it would be larger than the universe.
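The arithmetic behind that claim is easy to check; Python's arbitrary-precision integers make it a two-liner (the 10^80 atom count is just the usual order-of-magnitude estimate for the observable universe):

```python
# Blockhead as a pure lookup tree over 1000 coin flips needs one
# leaf for every possible flip sequence: 2**1000 of them.
leaves = 2 ** 1000
atoms_in_universe = 10 ** 80  # common order-of-magnitude estimate

print(leaves > atoms_in_universe)  # True
print(len(str(leaves)))            # 302 decimal digits in 2**1000
```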
If you release Blockhead from constraints of physical reality,
then it's relatively simple to give it memory.
(the "pointer" to the location in
Re:Blockhead (Score:2)
Ah, but it does matter. Approach it from the other side;
The contrapositive of the Turing test says that a machine that doesn't think can't pass.
Block says "imagine a device that doesn't think, but passes the Turing test anyway."
If such a device could exist, it would certainly invalidate the Turing test.
But if the device can't exist, it proves nothing.
My 1000 coin flips example was a simp
Re:Blockhead (Score:2)
Not to mention that we have throughout history granted one another the presumption of intelligence based solely on one another's responses without X-raying or autopsying
the death of Alan Turing, and intolerance (Score:2, Informative)
Let me remind everybody that Alan Mathison Turing had an "accident", or committed suicide as many people believed, after having been put through a humiliating process owing to his country's lack of concern for private life.
Alan Turing was gay. After being robbed by a one-night stand, he filed a complaint with the police. He was then prosecuted for being gay, and offered the choice between prison or undergoing hormone therapy to suppress his sexual instincts (female hormon
Birds and Airplanes (Score:3, Insightful)
I don't remember who, but someone published a great article in Scientific American claiming that the Turing Test has misguided the goals of artificial intelligence. He said, instead of trying to build a bird, let's try to build an airplane. Building AI that was truly human-like would be as useless as building a flying machine that was truly bird-like.
Re:Birds and Airplanes (Score:4, Insightful)
Re:Birds and Airplanes (Score:2)
Instead, try thinking of a computer as a "Universal Modeling Device". When you're simulating a tornado going over a landscape, a huge number of calculations are being done, and done quickly. But what do all those stacks, pointers, and variables become? A model of the tornado, imitating many of its most important f
Re:Birds and Airplanes (Score:2)
Re:Birds and Airplanes (Score:2)
Eliza was designed to show that the Turing test was not valid, because it was too easy. (It passed the informal version during its first year, when someone got so angry at the computer they attempted to fire it.) But Eliza, et seq., don't even approach dealing with the real Turing Test.
Much work went into many problems that will eventually be incorporated into whatever program tries seriously to challenge the Turing Test, but
Turing SAT (Score:2)
;-)
Forthcoming Book: The Turing Test Source Book (Score:2)
Invited contributors include Andrew Hodges, Jon Agar, Noam Chomsky, Daniel Dennett, Stevan Harnad, Kenneth Ford, Douglas Hofstadter, John R. Lucas, Roger Penrose, David Rumelhart, Selmer Bringsjord, Ned Block, David Chalmers, the Churchlands, Andy Clark, H. M. Collins, Jack Copeland, Huber
Brain simulation (Score:3, Interesting)
Even better, there has been progress reverse-engineering brain regions, such as parts of the auditory and visual systems, giving us the actual algorithms the brain uses. Shouldn't work like that be enough?
P.S. A lot of arguments go like this: computers use first-order logic, we don't, so AI can't work. Haven't there been higher-order logics implemented in software?
Did you ever notice... (Score:3, Insightful)
No, really.
Turing and Patentable Inventions (Score:2)
Another way to compare machine and human intelligence is in the design of complex, patentable systems.
from: http://www.genetic-programming.com/gpemcontrol.pdf [genetic-programming.com]
ELIZA (Score:4, Funny)
Eliza: Hello. I am ELIZA. How can I help you?
Me: Do you believe the Turing Test is sufficient to evaluate artificial intelligence?
Eliza: Oh, i believe the turing test is sufficient to evaluate artificial intelligence.
Figures she'd think that.
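The trick behind that exchange is simple pattern reflection: match the user's question, swap first- and second-person words, and echo it back. A minimal sketch of the idea (my own toy reduction, not Weizenbaum's actual script or keyword list):

```python
import re

# Swap first- and second-person words so the echoed sentence reads
# naturally from the bot's point of view.
REFLECTIONS = {"you": "I", "your": "my", "i": "you", "my": "your", "me": "you"}

def reflect(text):
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in text.split())

def respond(line):
    # "Do you X?" is echoed back as the assertion "Oh, I X."
    m = re.match(r"do you (.*)\?$", line.strip(), re.IGNORECASE)
    if m:
        return "Oh, I " + reflect(m.group(1)) + "."
    return "Please go on."

print(respond("Do you believe the Turing Test is sufficient?"))
# → Oh, I believe the Turing Test is sufficient.
```

The real ELIZA ranks keywords and keeps a small memory of earlier inputs, but the core move, reflect and echo, is exactly what the joke above exposes.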
This is what you need... (Score:2)
Um, no it's not. (Score:2)
Frank Herbert's Turing Test... (Score:3, Insightful)
It is ironic that a computer would pass the gom jabbar more readily than a homo sapiens. Both tests, however, start with an implicit principal assumption: a definition of what a human is. Many of us here (not to single out
And now for something relevant... (Score:2)
One comment that I have about the entry is that it spends time criticising Turing's guesses as to when machines might be able to pass the Test. To me, that section of Turing's paper is just idle speculation that has nothing to do with the paper's central contentions.
Look, Turing proposed the 'test' as a joke (Score:3, Informative)
The issue was that many people at that time (and many today) seem to have a religious belief that thinking cannot be implemented in any way except with a human biological brain. Turing could clearly see that the human brain was a computational engine, and he of course defined the concept of a universal computer. Thus, it was obvious to him that you could build an artificial intelligence.
His "test" was really a way of gently pointing out the absurdity of the arguments of people like Searle (who came much later), who would blindly deny that a machine could ever think.
Turing's point was, to paraphrase "look, if I give you a machine which is indistinguishable in every respect from a human, which you can talk to in depth on any subject of the arts or sciences, and you *still* don't call that intelligence, then you are just so wedged that there is no point in talking about this anymore".
He would be saddened, I think, and slightly disgusted, to see people twisting the whole purpose of his little thought experiment to argue for the kind of ignorant, transparently idiotic rhetoric that Searle and other "critics" of artificial intelligence engage in.
Re:Why? (Score:2)
Re:Why? (Score:5, Funny)
You: I think this is the wrong road to go down.
Eliza: Can you elaborate on that?
You: We should make computers better for people, but they should be made to actually think and reason.
Eliza: Why do you mention computers?
You: We have enough trouble doing this ourselves...
Eliza: Please go on.
You: if we have machines doing it they will surely out think us, and then what?
Eliza: What answer would please you the most?
Re:Why? (Score:4, Informative)
Along the same lines, the bots in the recent Chatterbox Challenge [chatterboxchallenge.com] show some improvements in the whole chatbot world, but some are just like the ol' Eliza
Re:Why? (Score:2)
> They say we only use 2% of our mental capacity,
oh man, did you set yourself up there.
Re:Why? (Score:3, Informative)
Perhaps "they" and you only use 2 percent (it must be shrinking! The *usual* wrong-assed estimate is 10%), but the rest [washington.edu] of us use all of our brains, just like any remotely reasonable organism. Now, if you had said that, on average, only 10% of our neurons are firing at any *one time*, it might have been a bit less ludicrous. But such would probably be true of any complex cognitive system, i
Re:Why? (Score:2, Informative)
Although I didn't RTFA, I can say that the Turing test is pretty useless for determining machine intelligence.
I've argued over at Kurzweil AI [kurzweilai.net] and AI-forum.org [ai-forum.org] in several discussions for the need to analyze brain (biological or not) architecture to ultimately conclude if something is actually INTELLIGENT. The need for this comes from the many brute force and somewhat cleverly written chat bots like Alan [a-i.com] that attempt to appear intelligent.
I hope everyone here will check out these two forums because t
Re:Why? (Score:3, Funny)
Re:Why? (Score:5, Insightful)
You're assuming a premise, and we don't know that it's true. If computers can do what we do, then there's reason to believe that we may be able to build some that can do it better than us.
That said, we are nowhere close to building computers that do what we do. Our best models of cognition and language (which we believe to be central to our 'intelligence') fail miserably when we try to implement them on a large scale using computer systems. Even if it worked, there's no reason to believe it would be a "Terminator II" scenario. We can always quite literally pull the plug. It would be a miracle to create a computer with the intelligence of a mentally retarded child, so to entertain notions of a computer that suddenly becomes self aware and takes over everything (like Cartman's Trapper-Keeper) is rather fanciful.
Re:Why? (Score:2)
Read The Metamorphosis of Prime Intellect [kuro5hin.org] , which was mentioned here weeks ago. Great story, and shows how a machine can obtain and surpass human intelligence without the opportunity for us to "pull the plug."
That said, I believe "the singularity" is at least 5 years off, but no more than 20. If you take care of your body (perhaps even if not), it'll happen within your lifetime.
Re:Why? (Score:3, Insightful)
Don't you hate being spoon-fed?
Nope (Score:3, Interesting)
Check those articles about jwz's "review" or one of those distribution reviews. Count the number of +3/4/5 Insightful/Informative/Interesting posts that say Linux is a usability nightmare or is nothing compared to Windows XP or how it will never succeed on the desktop.
I can't even understand why someone modded you up. Talking about how Slashdot is pro-Linux anti-MS always makes someone get modded up, even though the exact opposite of
Re:Why? (Score:5, Funny)
For one, they will become so wired in to the network that they will immediately proceed to hunt you down as an obvious objector to their plans for global domination. Oh, and none of that 'there is no spoon' crap - that was patched last Friday.
Re:Why? (Score:2)
Re:Computer limitations... (Score:2, Offtopic)
These were beautiful machines, both in form and function! I have a TiBook now, and it is nice and fast, but it isn't as pretty in the visual sense or the tactile sense as my old powerbook.
I would make similar arguments for all the Macs that came out from the blue-and-white G3 until the windtunnel model.
I am not as familiar with PC forms, but I have also seen a few nice gaming set-ups, from companies like Alien.
Re:can someone.. (Score:2, Insightful)
Re:First Turing Response (Score:2)
Re:Bladerunner (Score:2)