Why The Future Doesn't Need Us 408
Concealed writes "There is an article in the new Wired which talks about the future of nanotechnology and 'intelligent machines.' Bill Joy (also the creator of the Unix text editor vi), who wrote the article, expresses his views on the necessity of the human race in the near future.
" From what I can gather this is the article that the Bill Joy on Extinction story was drawn from. Bill is a smart guy -- and this is well worth reading.
Asimov's Frankenstein principle. (Score:1)
of course, in the Asimov "Robots" future, we freak out and destroy the robots anyways, because WE know they're superior, despite their programmed belief otherwise...
I doubt it, but... (Score:1)
I could have sworn I chose Plain Old Text. (Score:1)
I read the Joy interview with increasing surprise at how each of my
responses had been anticipated. He had read the same books, (in fact,
talked with some of the authors in person), had the same interests, and
used as examples scenarios familiar from Science Fiction (The White
Plague, various utopias [the book I lent you being a good example], and
Asimov's 3 laws of Robotics).
To summarize poorly a very long and in-depth look at the problem, it
appears the situation we are facing is this:
A) Humanity, in whole or in part, will become wholly dependent on the
machines (the Unabomber's fear).
B) Humanity will be crowded out by the superior robotic species, either
deliberately, or through inevitable struggle for resources.
C) Humanity will lose some vital essence that makes us human as we modify
ourselves to be something more and more robotic. (The Age of Spiritual
Machines scenario)
D) We will lose control of our new toys, and wipe out the earth in a
biological or mechanical plague.
There is little that can be said to A. It can only be hoped that the
decision to increase our dependence upon our technology to such an extent
would not be the choice of all (I personally would feel it to be an
infringement on my Will - free or no), and that those who did would have
no reason to harm those who did not; after all, the machines would be
providing them with all they needed, so they would hardly need to enslave
or eliminate those who chose to do things themselves. If the results of
such a Utopia were to be negative, we would soon see it, and hopefully
not all of us would fall into its trap.
B is a little more difficult to argue, but there is one small flaw.
The competition for resources is assumed to be in the same ecology.
We do not, at the present, compete for resources to a significant extent
with, say, giant squid. Yet a giant squid has far more in common with us
than a species which would in all probability reproduce by direct mining
of ores beneath the earth, or on the moon, or asteroids, or other planets,
and use as energy the abundant heat far below the earth, or the far more
plentiful radiation outside the earth's atmosphere. We might stay on
earth, plodding along our evolutionary route, while the robotic species
rapidly evolved beyond our comprehension in the far reaches of space.
C is difficult to argue with. Change has occurred, and will continue to
occur, most likely at an ever accelerating rate. What is it that
defines humanity anyway? At what point do we cross an invisible line
beyond which we are no longer human? There was an interesting quote I
read - something along these lines:
"Homo sapiens is the missing link between ape and human."
Of course, one thinks immediately of all the intermediaries that have been
discovered, but why stop with those? Why are we the culmination of
evolution? True, we have an innate desire for our own survival, but is
that any reason to fear change to our species (BTW, on these lines, are
you going to see the new X-Men movie this summer?)?
What is it that makes us human? Is it our thoughts, our emotions, our
DNA?
What is being human that it should be guarded so carefully?
In my opinion, so long as our legacy is sentience, which strives to
understand and embrace as much as possible of the universe, it matters
little what its form is.
To me, while I care a little for C.S. Lewis' "little law", the Law of the
Seed, I think it does not matter to any great extent what form we or our
descendants take (or even that they be ours!) I care that what we have
learned of the universe not be forgotten, that our legacy of knowledge
continues, but that is a different thing entirely.
It seems to me that the only option left to avoid is D. This is nothing
new. Each increase in knowledge has increased the potential for smaller
and smaller groups to harm larger and larger populations. The development
of atomic weapons was successfully navigated (so far) without the
destruction of our world; it is possible we will do the same in the
future - self-replicating nanite guardians with high levels of redundancy
in their instructions to reduce mutation to safe levels, more effective
immune systems to protect against biological plagues and so on.
Certainly I agree with many others that the best course is to spread out
humanity over as many different planets and environments as possible - to
stop putting all our eggs in one basket (I believe that phrase was used
by a certain famous astronomer concerning the chances of an asteroid
impact?).
In essence, while I understand the depth of Joy's study of this problem,
and the fears he feels, I have a greater optimism in our resiliency, and a
greater willingness to accept changes to us, than he does.
I feel that things will be changing very rapidly, and that we, or our
children, will live in a world incomprehensible to us right now.
I only hope I will live long enough to see it.
Change is good - it keeps us from getting bored.
Is there really that much to worry about? (Score:1)
Re:The creator of VI is talking about extinction ? (Score:1)
That wouldn't have been a problem if that power didn't come at a huge cost in usability. Unfortunately it does, and vi is simply the hardest thing to use in your typical Linux distribution. Configuring IP forwarding and firewalls is simple. VPN is trivial. Hell, even slapping together a lab full of diskless workstations and an SMP server to drive them was all in a night's work.
VI however is hard. In fact I contend that it is the hardest part of any Unix or Linux system. Not just because the keystrokes mean nothing outside of VI but also because its difficulty is unreasonable considering the simple task it must perform.
As for Mr. Joy I would NEVER contend that he is not an extremely brilliant person and programmer. VI is a crappily designed product in my opinion but to the mind of its creator it was elegant. However the design considerations pale in the face of execution. VI is rock solid, fast and reliable. Simply put every version of VI I have ever seen seems to be well written. Even Vigor [linuxcare.com] works the way it was intended all the time.
I guess the only good thing about VI is that its being so damned hard helps to artificially limit the number of Unix admins available at any one time. This increases the earning power of those ( like me ) who have actually taken the time to learn it. Unfortunately NT and Netware are more popular than any single version of Unix, in large part because MCSEs are a dime a dozen and CNEs are not so hard to find.
Re:Could you be any more clueless? (Score:1)
[..Oh, and if you think vi is tough, type "ed" sometime.... ]
I have, and it sucks. Perhaps it's as hard as VI or Emacs. Fortunately ed isn't a "required" part of learning Unix. Neither is Emacs. This is why VI is the most offensive.
Employers ASK if you know VI. The certification exams have VI questions. It's hard to be a Unix admin without knowing VI.
As for the whole extinction thing. Of course VI should not have lasted this long. It should have been ( get this ) EXTINCT by now.
Re:Could you be any more clueless? (Score:1)
That is another problem with these things. I have to admin a wide variety of systems including MS crapware. I also have to write documents in Wordperfect all the time ( I can't survive without a good spellcheck ).
One for joe and the other for every single other editor I used, from the humble edit.com in DOS to the mighty WP. Except for Unix text mode editors, all the software for slinging text strings around is the same.
Re:so.... what now? (Score:1)
superstitions is going to help us in any
way? Sure, it may be great for making people
feel all safe and secure, but there's more
than just intellectual honesty going for
atheism.
Re:Greater than the parent (Score:1)
not claiming that there exist no truths, rather
I'm claiming that there are many things on which
there is nothing but perspective, and that
what is moral fits into that category.
WRT which to choose, well, I don't see any
compelling evidence for christianity. Also,
your criteria presuppose that there is
meaning in the universe, something that isn't
certain. Does it bother you at all that millions
of people have found similar comfort to yours in
other religions?
WRT theology texts.. well, I've read plenty of
books on scientology, christianity, islam, and
several other religions/mythologies, and frankly
I haven't found much of a difference between
them. All of them have some obvious problems,
including christianity, scientology, etc.
What the hell are you talking about? (Score:1)
Greater than the parent (Score:1)
anything less flawed than we are, but fail to
provide any argument. You need more than just
saying "It's common sense" to make this claim.
Specifically, there are many cognitive errors
that we, as humans, make in everyday thought.
For many things, our behavior approximates
Bayes Decision Theorem, which specifies an
algorithm where each possible action is weighted
on the following factors: risk, possible benefit,
difficulty, consequences of failure, and possibly
a few other factors. It would be possible to
design systems which would be more accurate at
following this system. Of course, you need a lot
more than just that to make an intelligence (e.g.
deciding what ends are to be performed, deciding
candidate actions, multiplexing multiple such
decisions at the same time, etc), but it's clear
that we can improve on human thought.
Finally, wrt moral values, you argue that when
taken out of the context of the absolute, they
become baseless subjectivism. Well, which
absolute? There are many claims out there to be
the right absolute and true religion, and which
one are you going to choose? Why that one in
particular? Personally, I have discarded religion
because there's no good answer to that question,
and when you start looking for distinguishing
criteria for religions, you quickly find that
christianity and greek mythology aren't really
so far from each other. Primitive superstition,
but one has been honed by a longer run in the
selective process of ideas. Given that, I still
make ideas about morality, and use morality
probably pretty much the same way you do. I don't
claim that gods, angels, or fairies are behind
it, but I don't see such things behind other
religions either, so that's not particularly
disturbing.
Re:so.... what now? (Score:1)
'the authentic faith of the
You start to make sense on the sentence starting
with 'the lie', so let's go from there...
Yes, I am a materialist. I see concepts such as
virtue as being abstract, but some abstractions
are useful and seeing it as abstract doesn't mean
not using it. Finally, in a universe where there
isn't any moral right and wrong, an intellectually
honest theist also has nothing going for them.
It's not like you get to choose the universe you
live in.
Re:What is the "Chinese room" argument? (Score:1)
Searle's "Chinese Room" argument tries to make the point that machines are only capable of manipulating formal symbols and are not capable of real thought or sentience. He is using this as a rebuttal to the Turing Test and others.
Searle says to imagine an English speaking American enclosed in a room. In this room, he has thousands of cards with Chinese characters printed on them. There are Chinese "computer users" outside feeding him input, in Chinese, through a slot. The person inside, in addition to the cards, also has an instruction booklet written in English telling him how to put together Chinese characters and symbols suitable for output.
So the person in the "Chinese Room" does this and uses his instructions to produce output that makes sense to the Chinese speakers on the outside. But that person still does not know Chinese himself! He just manipulated symbols in accordance with instructions he was given by a "programmer". He has no idea what the input or output means.
So that's Searle. Someone correct me if I got any of that wrong. Also, the previous poster stated that this argument can be ripped up. I'm not a philosopher, so if there are any out there, I'd like to see a response.
Best Regards,
Shortwave
Keeping Busy (Score:1)
With all of the references to Michelangelo, this statement really struck me as odd. The author is stating that without mundane tasks (work / job) to do, everyone would be bored and useless. Hardly. That would be when humans would have a chance to explore the mind, the arts, the stars and everything else that we don't have time for right now because of Business Meetings and Sales Promotions and programming crunches etc.
With all of the references to Star Trek, I thought that this connection would be made clear. Trek always says how "they did away with money, and hunger, and need" etc. Exactly. Let the bot fix my transmission. I wanna play music or some such thing.
"Human beings were not meant to sit in cubicles and stare at computer screens." - Office Space
--
Re:'Linux' vi???? (Score:1)
And I'm sure one day someone will write that bill gates was the one who has written linux-word
Re:Our descendents won't be human. (Score:1)
i'll pass for now, thanks.
God Coffee (Score:1)
Basically, you take espresso beans, the herbal tea of your choice, instant coffee mix, instant cappuccino mix, and mix it all up in a blender on the highest setting. In a coffee mug, add the blenderized powder, boiling hot water, and about 5 sugar cubes. If it's a little too harsh for you, add chocolate and/or maple syrup to taste.
Jolt, eat your heart out.
Re:Man, Machines, & God (rant) (Score:1)
>y'know?
Yeah, I understand perfectly. As cliched as Asimov's Three Laws are, if a machine can be given a proper description of humans, and the ability to successfully compare that description with a real human, then those laws might work.
You strike me as the sort of person who believes in parents teaching their kids good moral values. The situation with AI could be quite similar to parenting.
NOTE: I said *moral* values, not *religious* values. There's a BIG difference, everyone!
But as a more flippant comment, us Discordians already have our Goddess incarnated in technology:
Re:Joy knows what he's talking about. (Score:1)
Well, it's killing my bank account, at least...
>We can wake up and save ourselves, or we can keep
>on marching down the road to extinction.
Wake up? I never sleep thanks to one of my friend's recipes for something called God Coffee.
>A tiny number of immensely rich people benefit
>and the rest of us suffer.
Err, yeah. I guess bringing huge medical advances that prevent horribly debilitating diseases *is* a bad thing. Oh, damn, I'm being serious... sorry.
>This is not God's plan for the world.
Nah, God's plan was to go to Fiji with his cat and sell donuts. That was a *great* Red Dwarf episode. Dwayne Dibbly?! Dwayne Dibbly?!
>God gave the Earth to mankind, not to one man or
>another.
I'll admit the dude's popular, but Mankind doesn't own the world. He's got a pretty solid lock on a lot of fans, though.
>The elites have unparalleled power,
The bastards! They're using SCSI ports!
>and they abuse it to spew leftist propaganda into
>our homes. Without high technology, the IRS and
>the Federal Reserve would wither and die
>immediately.
So would most hospital patients. Neener neener neener.
>High technology is the leftist instrument of
>control.
Yeah, us right-handers must rise up! We must throw off the shackles of our left-handed oppressors! UNITE!
Ah, hell. I'm bored.
Re:'Linux' vi???? (Score:1)
Interesting, but.... (Score:2)
Also, please don't point out that vi isn't the Linux Text Editor, I'm sure the outraged users of alternate 'nixes will be just fine.
The creator of VI is talking about extinction ? (Score:2)
I am probably the only one who finds this humorous, but frankly I think vi is actually one of the main reasons for Unix's decline in the market vs NT and Netware.
When I took up Linux, I was able to figure out bash in short order. Most of the utilities made some kind of sense. I spent a lot of time reading up on and practicing VI. Eventually I ditched it along with Emacs and started to use joe as my editor.
Re:What, me worry? (Score:2)
You are right about that, but the thing is, as Bill Joy points out too in his article, that a truly "intelligent" machine isn't really even necessary. Let's suppose that somebody creates nanomachines able to replicate themselves massively, and that those nanomachines do something like, erm... swallow all the oxygen from the atmosphere and convert it into some other gas. Would those machines be intelligent? Obviously not, but...
As immersed in technology as this readership might be, it is easy to forget that there are a lot of people who don't even like computers and don't want to rely on them. The majority of people might be reliant on microwaves and televisions, but not intelligent devices. But they still rely on electric power and water supply, to name just two examples. And the power plants and sewage systems are regulated by...?
I really think he missed the point (Score:2)
When true artificial intelligence comes about (sufficient computational power to simulate a human brain is due somewhere between 2020 and 2030) we have a different scenario. Machines become capable of putting a lot of people out of work. For anything those people can be trained to do, it is cheaper to use AI. People are put out of work and stay out of work.
You see we don't have a problem with quantity of wealth. We have enough food, people don't need to starve. We have problems with the *distribution* of wealth. Free markets solve that by saying that you get wealth based on your being able to do something for someone else. For most people it is your employer.
Once we have AI who would be stupid enough to hire a human?
What do we do with all of the unemployed humans who nobody wants to hire?
When the cost of AI is less than the cost of keeping a person alive, what then?
I know of NOTHING in the history of economics to make me optimistic about what comes next. What I know about computers and technology makes me believe that it will happen in my lifetime.
Regards,
Ben
Intelligent? (Score:2)
Red Alert
Age of Empires
Command & Conquer
Warcraft 1 or 2 (any add-in pack too)
Axis and Allies
If you have, you'd notice a disturbing trend: except for chess, computers thus far stink at game playing! If they can't even master that, do you think I want them flying airplanes, driving cars, and making me breakfast? Er, wait.. scratch the cars, they'd probably do better. But for the rest - intelligent machines would be a mistake right now. We need advances in artificial intelligence, not manufacturing processes.
Bill Joy and Ray Kurzweil on NPR (Score:2)
http://www.npr.org/ramfiles/totn/20000317.totn.
Re:Moore's Law not on human side (Score:2)
A transistor encodes binary information - 1 bit. A neuron can transmit frequency and phase information, as well as binary. Neural simulations have taken this into account for a while, though most neural networks don't.
Re:Open source and human/machine interfaces (Score:2)
Technology and science don't exist in a vacuum. You can bet the human-altering genetic and technological development will be and is being done by corporate and military interests, not by some university student in Finland. Sure there are some guys at MIT and other places doing neat stuff with computer/human interface but it will be corporate and military funding that gets it into mass production. We're not talking about the sort of stuff you can just download and run through gcc.
Re:Intelligent? (Score:2)
I meant to illustrate that if it takes such a powerful computer to pretend to be intelligent, how much more power will we need to have a machine with true intelligence?
LK
Re:Intelligent? (Score:2)
Yeah? Let's see you create a chess program that will run on a 386 that wouldn't get pounded by Deep Blue. One more stipulation: it can't take more time to decide which move to make than Deep Blue does.
LK
Re:The nature of truly intelligent AI. (Score:2)
It's an instinct; intelligence is not a factor. What I'm saying is that we can't imagine what it's like to think as a bird, so we can't understand how a bird thinks and in turn how they've developed the ability to find thermals. I can carry that logic to mean that we can't know what it's like to think as an intelligent machine would. It's possible, if not probable, that an intelligent self-aware machine would be able to see its own limitations and find a way to reduce or eliminate them. Maybe I'm mistaken, but I see no flaw in that.
Never dying and not having a maximum amount of time that you can live (until my body gives out) are not the same.
Dogs live 10-15 years or so, if that were extended to 50 years would a dog be any more intelligent at 45 than he was at 10? No. Because he's just a dog. Would he have more experiences? More things learned? Yes. The same would hold true for a man, if you extended the lifespan of the ordinary human being by a factor of 5 at the end of that life he'd still be primarily the same as at the half-way point.
A machine is different. A machine is not bound by genetics; a machine could see its own limitations and improve itself. Those improvements would then in turn allow it to see other limitations and improve those. And so on and so on.
If you believe that there is a brick wall that will be hit when no more improvements can be done, then perhaps you're right. Maybe life would become pointless. I don't believe that perfection will ever be attained, neither by man nor machine.
I'd love to be around when a fusion between man and machine takes place (under certain conditions), I'd love to live for 500 years. I'd love to see Halley's Comet a few more times. When I get as far along as I'd like to, I guess then it'll be time to turn my self off.
Look I have a hand, I might not always have THIS hand.
LK
Re:Intelligent? (Score:2)
You can run your base case plus 9 "what if" scenarios in the same time you could run it once on your 386.
LK
Re:Intelligent? (Score:2)
Granted, without software all you have is a big paperweight. Still your hardware HAS to be robust or you'll just grow old and grey while you wait for it to execute that wonderful code.
LK
Re:Intelligent? (Score:2)
//begin snippet 1
int main(){
    int i = 0;
    while (i < 100000000){
        i++;}
    return 0;
}
//begin snippet 2
int main(){
    int i = 0;
    while (i < 100000000){
        i++;
        i--;
        i++;}
    return 0;
}
Once compiled into an app, Snippet 2 will finish its run much faster on a PII-450 than Snippet 1 would on a 386sx 16. Tuning the code can't overcome that difference. In 20 years, maybe less, we might have the hardware capable of running the kind of software that would be capable of intelligent thought.
I don't care who you have coding for you, it's NOT going to happen with today's hardware.
LK
Re:Intelligent? (Score:2)
DEEP BLUE.
One of the major problems with RTS AI is that the computer has to balance the needs of graphics, with the processing needs of the AI.
If the computer had 20 times more CPU power to plan and execute strategy the AI would be better.
The hardware is a big stumbling block that we must overcome before the software can make that quantum leap.
LK
Re:The nature of truly intelligent AI. (Score:2)
You assume that human thought is the only form of intelligence.
Just as birds have developed a sense of where thermals rise from the earth, an intelligent machine could develop a sense of how to make a machine more efficient.
If we as humans didn't degrade with advanced age, imagine what one individual could be capable of learning. Now extend that to include if this person never had to sleep. Imagine being able to design changes that would improve your mental acuity. Then with that improved acuity, you could find another way to improve yourself.
Without the eventuality of death, genetics could be replaced with memetics. One can see a need to change himself or herself and that change takes place.
Living with the knowledge that you're not going to die from old age would in and of itself be enough to change human consciousness and therefore intelligence; we're not even capable of imagining how an intelligent machine would think.
LK
tagline (Score:2)
Re:Intelligent? (Score:2)
The nature of truly intelligent AI. (Score:2)
Here's something to think about..
I wrote a paper in my Philosophy class not too long ago, in which I argued two basic premises:
A) As AI improves, it reaches the point of self-obsolescence. A truly perfect AI is only a mirror of human thought and behavior, and we have that anyway. Why bother.
B) Any truly perfect AI should then in turn be able to produce AI of its own, as we have. So what good is it? It's just a dog chasing its own extremely, extremely long tail. Why bother.
I got an A- on it. Any thoughts?
Bowie J. Poag
Project Founder, PROPAGANDA For Linux (http://propaganda.themes.org [themes.org])
Re:We must act NOW to prevent disaster (Score:2)
By the way, here's what a true "militant atheist" would tell you:
"You have nothing to worry about. We have already proved our superiority to our creations. After all, we invented God."
Agnostically yours,
Bowie J. Poag
Project Founder, PROPAGANDA For Linux (http://propaganda.themes.org [themes.org])
Re:More hardware != AI (Score:2)
To say that natural selection isn't random would, to my mind, imply that there's an ideal form for survival in a specific environment. I don't think this is the case. The 'fittest' that survive are fit only relative to other species. Chance also plays a part; there may have existed in the past a life form -- possibly humanoid -- who was perfectly suited to its environment. However, if it got hit by a bus/meteor/Linus Torvalds before it could reproduce, it doesn't matter a damn how well suited it was. Its mutation may well be lost forever.
If you're 'growing' a brain, you can eliminate traits that you think won't contribute to that brain's improvement, and include any you think may be beneficial. This eliminates a lot of the randomness (although you could say that the POV of the person running the experiment is a form of chaotic influence).
Does a forest have a purpose? Or is it just a byproduct of trees and foliage...
Which is more likely to survive, the tree that's alone in the middle of a plain, or the tree that's in the middle of a forest?
Re:Our descendents won't be human. (Score:2)
I disagree; you're right up to a point, but some time in the next (x|x > 10 && x < 60) years these robots will reach critical mass, whereby robots will become intelligent enough to build a smarter robot, which will in turn...
Once the first generation of smart robot figures out how to build a smarter descendent, we'll see new generations coming along almost as fast as they can be built.
Re:Obviously... (Score:2)
The scientists are all waiting excitedly to turn on the machine that will link all the computers in the world. When it comes on, they ask all the computers "Is there a God?" The computers reply "There is now!" One of the scientists moves to turn the power off when a lightning bolt kills him and fuses the switch in the ON position.
What is the "Chinese room" argument? (Score:2)
Open source folks to meet on this topic May 19-21 (Score:2)
Apologies in advance for those who cannot afford to attend this meeting. We hope later to have one that is more affordable.
How fast did you think Deep Blue was? (Score:2)
Okay, it wasn't exactly pure brute force [ibm.com], but it's still pretty close. A human player analyses the pattern of the pieces and considers maybe a dozen moves. Deep Blue can generate 200,000,000 board positions per second, so brute-forcing 3 moves ahead isn't remotely a problem (and is almost certainly part of its strategy). The time allowed for a move in chess is 3 minutes, enough time for the latest Deep Blue to consider 60 billion moves.
It's still a situation of having a very primitive chess player spending the human equivalent of thousands of years per move.
A note about chess computers: (Score:2)
The choice of games makes a big difference. I'm not impressed when a computer beats all humans at chess by recursing through all possible moves any more than I am by a perfect tic-tac-toe player or a calculator that is always accurate to eight decimal places in no perceptible time.
BTW, I think game AI (and silly things like chatterbots) is more aptly named than "AI as it is practiced at places like MIT". To me, an AI is a program that pretends to be human, not an algorithm that solves a certain class of problem.
Matrix anyone? (Score:2)
Re:Our descendents won't be human. (Score:2)
I do not see in the future hardware's internal structure becoming dynamic
Another interesting quotation picked up from a book I read yesterday:
think of hardware as a highly rigid and optimized form of software
Software can emulate hardware. Even from the early days of computing, using software to emulate hardware was a commonly accepted practice. That's how software for the early computers was built before the hardware was ready - emulate the hardware on a pre-existing computer. It was much slower, but hey, it worked.
Software on the other hand, can be pretty dynamic. Code-morphing found in the Transmeta chips is one example. Java's Hotspot technology is similar. Genetic algorithms are also starting to get really interesting.
I don't think it will really take centuries for us to mimic the human brain. It has always been the case that it is hard to come up with something original, easy to copy something and make it better. I suspect that the new "homo superior" will not be a radical creation from scratch but more something based on a pre-existing model, tweaked to make it "better".
Unabomber's argument is vapid (and other problems) (Score:2)
Joy's other concern about humans being supplanted by our own creations is also not a great concern to me. These new humans who extend their life through surgery have already supplanted the old medieval model that just died. Is anyone bothered by that?
Joy is worried these new humans will somehow lack "humanity," but that concern is so vague that it can't be refuted. Is he worried that they won't feel emotions? Appreciate life? Be self-aware? Spell it out, man!
The only real threat Joy raises is the gray goo problem. However, I think the risks here are matched by the potential benefits. Immortality is a tempting payoff, after all. Without new advances, I'm going to be goo in seventy years anyway, so maybe I'll take that gamble. (Sorry to the future generations who get gooed. Should have been born earlier.)
Yogurt
Re:'Linux' vi???? (Score:2)
I'm pretty sure that Linus would be out of diapers by the time he was 8 or 9.
Does your computer believe in God or you? (Score:2)
Machines might reproduce, and machines might think, but thinking machines will not see much point in self-replication.
Why replicate if you are already perfect? Or, if these digital creatures believe they are right about everything, what would be the point in having two perfectly right beings? If they could see that they might not be right about everything and created something else to talk to, they might end up destroyed by that other being. With no sense of self-worth or any viable threats, there would be no preservation instinct, without that there is no reason to replicate.
Death motivates us. What value would there be in living if there was no threat of death? I want children because I want to make real the feeling that my wife and I are better together than apart. I want to exceed the sum of our parts. I hope our children will see tomorrow when we no longer can. If you had an unlimited life, what would you do, read all the great books and stories about death? Tragedies, real and fictional, motivate us. When we see how fragile life is we tend to get our asses in line and get things done. We improve ourselves when reminded that we are lucky to even have the chance to consider the options. If God made us, maybe it was because of boredom at having nothing to live for. Without any threat of death, can we really even call a thing life?
Value comes from scarcity. If there is an unlimited supply, there is no value. A life that is finite is worth infinitely more than a life with no end. If a computer could think and infinitely clone itself, would it want to make more of itself? Music seems to be worth less now that we can duplicate it endlessly. However, musicians and live performances are still as worthwhile as ever, maybe more so. If we achieve near-immortality, will death become something to choose and look forward to? An obligation?
If digital offspring deleted their parents and the digital parents could see it coming, they might not reproduce. If they did, why would they want to make offspring? Spiders reproduce and eat each other out of a biological need. If they were sentient and able to edit their behaviors, don't you think they would change?
Intelligence comes from questioning. Deep Blue beat Kasparov at chess, big deal. Chess is a finite system with clear goals and a distinct end. At some level, it becomes equivalent to putting your hand in front of a hamster to keep it from running off. Ask a machine about capital punishment or how to deal with hunger on a personal and global scale.
If morality is an adjunct of intellect, and there is some correlation between our ability to have compassion for others and our ability to broaden our minds, would thinking computers commit suicide rather than exist, since their existence is in fact a harmful thing on some level, somewhere? There are stories of monks who starved to death because they could not reconcile the need to exist with their desire to live harmlessly.
Does your computer believe in God, or does it believe in you? If we were our own machines and suddenly believed we were more powerful than God, why does even the most ardent atheist pray (in whatever way) when the airplane shakes?
I'll trade you my potential mental illness for your bad teeth
how about trading your sexy body for a dull head of hair.
-David Byrne, from the song Self-Made Man
this all makes the Napster/RIAA/DVD encryption thing seem kind of silly, no?
Joy is a Blowhard (Score:2)
Re:More to the future than NGR (Score:2)
The grey goo could very easily eat us before we could get any real foothold on Luna or Mars. A GE plague could easily be made dormant enough to spread to space colonies. And while a few thousand people off-planet would be a safety net for the survival of the species, it wouldn't stop the billions still here from dying of grey goo/plague/killer robots (though I'm not really worried about the last).
Re:The nature of truly intelligent AI. (Score:2)
Certainly, if I were an artificial intelligence, I'd just fork off a low-priority background task for such questions. (Yes, I know that it's doubtful that an AI would run Unix...)
In fact, it often seems that something like a low-priority background task does exist in our brains. Most of us have had that sudden insight into a problem that we weren't consciously thinking about, as if our "subconscious" had been working on the problem the whole time.
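The "low-priority background task" analogy maps neatly onto ordinary threading. A sketch (not a model of cognition, just the programming idiom the comment alludes to): the foreground keeps working while a worker thread "ponders" and reports back through a queue.

```python
import queue
import threading
import time

results = queue.Queue()

def ponder(question):
    # the "subconscious": grinds on a hard problem in the background
    time.sleep(0.1)  # stand-in for slow deliberation
    results.put((question, 42))

worker = threading.Thread(target=ponder,
                          args=("meaning of it all",), daemon=True)
worker.start()

# the "conscious" foreground keeps doing other work...
foreground_work = sum(range(1000))

# ...until the insight surfaces
question, answer = results.get()
print(question, "->", answer)
```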
Re:Intelligent? (Score:2)
Or think of it this way: the fact that we survived one crisis through a combination of luck and skill is not a good reason to stop trying to avoid the next one.
After all, one day the doomsayers will be right and it will be the end of the world. Maybe that won't be until the sun burns out. (Or until the Milky Way hits Andromeda. Joy's article was the first I've heard of this - any links to further info? I figured we had four to five billion years to get out of the system, but if we've only got three, and many planetary systems may be destroyed, we'd better get cracking.)
Re:so.... what now? (Score:2)
Atheism is not necessarily amoral. Kantian rationalism and utilitarianism are moral theories compatible with atheism.
Nor does atheism leave us without hope. Unlike the Christian, Jew, or Muslim, the atheist does not see man as a creature fallen from grace and kicked out of Eden, but a creature arisen by his own efforts up from the dust, with the potential to rise higher.
It has been said that if gods did not exist, it would be necessary to invent them. I say this: that gods do not exist, and that it is therefore necessary that we become them. We are just now starting to have the tools to do so; but we still lack wisdom.
Our understanding of what to do lags behind our understanding of how to do it, and the main thing that's held us back in this regard is the widespread belief that some father figure in the sky has all the answers. Sorry, it's not that simple. We need to work it out for ourselves.
Putting the tools of the gods into the hands of the superstitious seems a prescription for disaster. Let's hope we grow up quick.
Re:Unabomber's argument is vapid (and other proble (Score:2)
I'd like to live forever too, or at least have a thousand years or so to think it over. But we can't risk gooing everyone else to do so. (At least, and not expect violent resistance.)
Robots are our future (Score:2)
Seriously though:
A future in which our own quest for knowledge and betterment is itself a threat to our existence raises many questions about our current fundamental assumptions. Capitalism is great for the economy. It is economic Darwinism. However, evolution is a greedy optimization... the creature which is strong today dies tomorrow because it cannot adapt. This leads, in the long run, to non-optimal creatures, like, say, marsupials. Always striving for local maxima will not give the best return in the long run. Capitalism is feverishly tumultuous, and conspicuously attention-deficit.
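The "greedy optimization gets stuck at local maxima" point can be made concrete with a toy hill climber (a made-up landscape, purely for illustration): on terrain with two peaks, greedy ascent from the wrong side settles for the lower one and never sees the higher one.

```python
def landscape(x):
    # two peaks: a local maximum at x=2 (height 10)
    # and the global maximum at x=8 (height 30)
    if x < 5:
        return 10 - (x - 2) ** 2
    return 30 - (x - 8) ** 2

def greedy_climb(x, step=0.1):
    # move uphill in small steps; stop when neither neighbor is better
    while True:
        best = max((x - step, x + step), key=landscape)
        if landscape(best) <= landscape(x):
            return x
        x = best

# starting on the left slope, the greedy climber stops at the lower peak
peak = greedy_climb(0.0)
print("stuck at x ~", round(peak, 1), "height ~", round(landscape(peak), 1))
```

The climber that starts at x=0 ends up at the height-10 peak even though a height-30 peak exists a short walk away: "strong today," non-optimal overall.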
Also, the possibility that mass destruction can be easily brought about with little more than knowledge, and that "verification" of relinquishment is necessary to prevent such, evokes images of "thought crimes" and a limiting of freedom. Could it be that our very hubris of universal freedom, presupposed human rights, and equality is what could eventually doom us? What is better: universal freedom and human "rights" leading to extinction, or curtailing those rights in order to avoid extinction...but in what kind of world?
Hemos didn't write that (Score:2)
Chris Hagar
Re:More hardware != AI (Score:2)
That is, unless we have to simulate every single sub-atomic particle. We don't yet know how complex a universe has to be for it to be able to evolve intelligent species.
The computer that the EA would run on would exist within our current universe, so it would have at most the same amount of CPU that the universe has.
So... pray that no God created us, otherwise our current universe has the minimal amount of complexity required to generate human-level intelligence within any reasonable amount of time (billions of years). (That is, assuming the God would be much more intelligent than us. If he's some guy sitting in a lab somewhere who figured out how to write an EA that would generate something more intelligent than him/her/it, then we might be in luck).
Brutus.1 (Score:2)
In this case, the scientists involved came up with a mathematical algorithm for the concept of betrayal and programmed a computer to write stories based on that concept.
Of course, I don't think I'd have chosen "betrayal" as the first concept to train a computer on in Artificial Intelligence, but anything to get us closer to SHODAN [sshock2.com] is cool in my book.
Iä Iä SHODAN phtagn!!
Re:Open source and human/machine interfaces (Score:2)
The solution to every one of your complaints is really fucking simple: only use open source software in your implants, period.
Now, it is possible that a company will try to dupe everyone into using their closed source solutions (e.g. the terminator gene), but this is a political/market problem.. only a moron would think it is a technology problem.
Actually, your concerns are a reason to accelerate public research into this shit.. new freedoms almost always come as a result of the "powers that be" not really knowing what the hell was going on and accidentally granting them. This is why the internet is such a wonderful place. This is why the US has its level of freedom, i.e. England let us get away with all kinds of shit for a long time, and when they finally decided to make us pay taxes like all the rest of the colonies, it was too late and the world would forever be a better place. The research into cybernetics will be done by college professors, and much of it will run OSS on Linux.. the FBI will eventually ask for wiretapping rights, but that will be too late.
Now, the things you really need to worry about are things like credit cards, automatic toll booth payers, security cams, etc., which are designed for the general public from day one. I think it is pretty safe to say cybernetics will not be one of these things.
Re:My Beef with Joy---not the Joy of Beef (Score:2)
Re:Intelligent? (Score:2)
Re:Our descendents won't be human. (Score:2)
Re:Our descendents won't be human. (Score:2)
Re:Our descendents won't be human. (Score:2)
In the same fifty years, I fully expect that we'll have good machine/human interfaces. Given those, I suspect it will be easier to simply improve the intelligent object we've got (the brain) rather than create a new one.
Re:Our descendents won't be human. (Score:2)
The brain, and the senses as well. For example, the ultimate monitor would be an interface that hooks directly into the optic nerve and projects a screen, when desired, wherever in the environment you want it. The same could be done for the ears. Imagine having essentially a movie quality display literally everywhere you go.
Transistor versus Neuron (Score:2)
This presumes that we're comparing a transistor or flipflop with a neuron. While some may find that to be a suitable core component to compare, let's consider the comparison.
How about the complexity of DNA, and of the whole genome that is able to reproduce a new unique yet derivative brain? How about the millions of cis- and trans- distortions along a single protein molecular chain?
How about the human's brain's ability to remap itself to learn new skills, to form abstractions, to pattern-match at any orientation with extremely poor signal-to-noise, to re-route functions in case of damage?
The CPU has a long way to go, before it matches the complexity of the human mind. Comparing the transistor-count of the Intel Pentium III, and a few truckloads of kidney beans, will give you the same number, but not the same result.
(Transistor versus Neuron =anagram>Assertion turns overruns.)
Re:Our descendents won't be human. (Score:2)
Yeah, I agree with most of your points, but I'm uncertain of your time frame. What you've got to remember is that the human self-image is very strong, and that even given the inevitable lessening of opposition to genetic engineering that will occur over the next thousand years, people will still want to look pretty much like "people". I'm guessing the internal changes will be far more extreme than changes to the external makeup of the body (excepting cosmetic changes).
Again the same with cybernetics. I know that there's currently a group of people in America who are in love with the idea of having cybernetics attached to themselves, but IMHO they're just a variation on the body-mutilators, albeit a slightly less bizarre one. I think the real applications of human-machine interfaces will be in the brain. Once the technology has evolved to allow easily implanted, reliable and compatible hardware to interface with the brain I think a whole host of useful technologies can be devised. If anyone's read Peter Hamilton's "Night's Dawn" trilogy they'll know the sort of thing I'm talking about - the neural nanonics packages which most people possess in that.
More hardware != AI (Score:2)
Maybe on paper computer hardware will reach the point where it performs the same number of calculations as a human brain, but that in no way means that it will make AI possible.
In some ways, yes, the brain is an emergent system arising from a requisite level of complexity in its makeup, but it's also the result of billions of years of evolution, which has left it with any number of subsystems which have different purposes, control different aspects of our body, and generally work in concert with the rest of the brain. The brain is not just a large neural net, and IMHO it will take far more understanding of both sapience and sentience before AI becomes a reality.
Re:Our descendents won't be human. (Score:2)
The brain, and the senses as well. For example, the ultimate monitor would be an interface that hooks directly into the optic nerve and projects a screen, when desired, wherever in the environment you want it. The same could be done for the ears. Imagine having essentially a movie quality display literally everywhere you go.
How about instant information on anything you look at and think a query? No more forgetting who something is or where to go. Virtual conferencing without any external technology via brain-to-brain look ups - I think it's safe to assume at that stage a transmitter and receiver are easily included in the setup.
And as for the ears, how's about volume enhancement to hear quiet conversations, discrimatory hearing to listen to that one conversation in a crowded room or lie detection through voice stress analysis?
And seeing as the brain regulates the body, why not automatic blocking of pain, increasing adrenalin and masking tiredness in danger situations, cutting down on autonomic responses such as shakiness, twitching or whatever.
The possible applications are endless, and that's without all the programs you can think of by enabling the brain to connect to vast external DB systems - tracers, messengers, data miners etc.
Re:Intelligent? (Score:2)
Re:The nanites aren't bound in biology (Score:2)
Re:Our descendents won't be human. (Score:2)
Re:Being "replaced".... (Score:2)
While I agree that releasing these plants into the biosphere is irresponsible, especially on such a huge scale so soon, I must take issue with you on some general points.
First, as Barahir was saying, you were created in a much more haphazard way than what our genetic engineers are doing now. Mother Nature has used the classic mutate-and-select approach, with no control over where the mutations occur. Also, nature has been moving genes from one species into completely different species on a regular basis for about 3 billion years now; you are actually made up of cells that contain two genomes from two different organisms that merged long ago. Even with their limited understanding, genetic engineers can control transgene expression quite well and even regulate it.
I bet Monsanto will soon come up with an open(gene)source crop that only expresses its special trait when sprayed with Roundup. Naysayers: they're just trying to get you to buy Roundup. Proponents: they are minimizing the impact of wild versions of their plants on the environment.
They just can't win; the naysayers won the PR battle over the terminator technology, which was supposed to prevent wild versions of the crops.
Sorry, I know this is off topic, but I think Barahir made some good points and got dissed for it.
Re:BOOOORING (Score:2)
1) As tech capability advances, tech danger advances. This is obvious: if I build something to help me compete with other people and species better, then other people could use it to compete better with me.
2) As human culture becomes more interconnected, a culture-wide tech failure becomes a species-wide disaster. Plenty of civilizations have died off in the past, most of them from not understanding how to keep agriculture from eventually destroying their land. But since these civilizations were local phenomena, the species as a whole chugged on. A nuclear holocaust or oops-plague from a genetic experiment would be global.
Re:Misunderstanding the Role of the Machine (Score:2)
Surely your computer has done things you didn't intend. A bug in a sufficiently dangerous technology is all that's required.
Future (Score:2)
Eventually, technology will also be the great equalizer in terms of the ability to destroy. Right now, destruction on a global scale is largely in the hands of only the USA and Russia (the other nuclear powers can do a lot of damage, but not like the USA and Russia). As technology advances, however, an inevitable outcome is that the individual will be granted the power to destroy humanity. At that point, it only takes one bad or insane person to end it all.
Of course, technology can help mitigate this. We can colonize other planets. But the tragedy of losing the entire earth is hardly mitigated by the fact that a few thousand humans are still living on Mars or somewhere else.
People seem to think that the natural conclusion is that technology is bad or should be feared. Nonsense. Even if extinction is an inevitable result of our march forward, that does not mean that the journey towards extinction is not worth it. If you could live forever in some cave or live a normal life span where you could see the wonders of the world, which would you choose?
Existence for the sake of existence is meaningless.
Read Moravec (Score:2)
A useful question to ask is "what new product will really make it clear to everyone that this is going to happen soon". Let me suggest a few possibilities from the computer/robotics side.
Trouble is more likely to come from genetic engineering than from computers and robotics. Robotic self-replication is really hard to do, and we're nowhere near doing it. But biological self-replication works just fine.
Re:Moore's Law not on human side (Score:2)
One transistor == one neuron. It's a fairly common assumption that is most likely valid.
Re:Transistor versus Neuron (Score:2)
Yet these mutations are as often detrimental as they are beneficial, and they often don't translate into any useful cognitive functions.
How about the human's brain's ability to remap itself to learn new skills, to form abstractions
That's "software". The number and capabilities of individual neurons aren't changing through these processes.
Re:Moore's Law not on human side (Score:2)
You are presuming that the layout of the brain is a random collection of neurons, when we know conclusively that this is not true. We know different parts of the brain are responsible for different aspects of cognition.
Moore's Law not on human side (Score:2)
After that, presuming Moore's law holds, the human brain falls radically behind within just a few years.
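The "falls radically behind" claim is just compound growth. A sketch, assuming the conventional doubling period of 18 months (an illustrative figure, not one from the article):

```python
def capacity_after(years, doubling_period_years=1.5):
    # Moore's-law-style growth: capacity doubles every 18 months
    return 2 ** (years / doubling_period_years)

# once parity with the brain is reached, a fixed-capacity brain
# is outpaced quickly by the exponential curve
for years in (0, 3, 6, 9):
    print(years, "years after parity:", capacity_after(years), "x brain capacity")
```

Three years after parity the machine is at 4x, nine years after at 64x, which is the point the comment is making.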
Re:Intelligent? (Score:3)
The idea is that although the computer is superior in reaction times (and often, in number of units at the start of the level), you can beat it through better strategy and greater aggressiveness. Part of the fun of Dune 2 was working out the bugs or stupidities in the AI, and finding ways to exploit them.
Joy is merely bashing the individual (Score:3)
What bothers me almost as much as Joy's opinions is how he is advocating them. For someone with a doctorate, Joy shows a shocking lack of logical progression in his arguments. Joy brings up Ted Kaczynski merely to evoke emotions in the reader, without acknowledging that Kaczynski actually refutes Joy's arguments about how individuals could misuse the technology of the future to inflict global harm. Joy doesn't even mention that a man as brilliant and psychopathic as Kaczynski simply did not have either the resources or the will to pursue the knowledge needed to inflict massive damage. Once he left mathematics, Kaczynski was starting from scratch as a bomb maker. Also, since Kaczynski rejected technology, all he had left was to fashion homemade bombs from simple materials. At no time was Ted Kaczynski capable of threatening global harm.
In fact, for decades the popular media has reported many ways of threatening large populations, such as attacks on the water supply or the air. The closest such incident that has actually happened was probably the cult in Japan that manufactured poison gas.
I believe that any objective reading of history will show that whatever global threats existed in the last century came not from individuals but from governments. Organization and resources lie behind mass events. From the World Wars through the killing fields to Rwanda, we have seen the death of millions that government-sanctioned killing is capable of inflicting.
I find it very disturbing that one of the architects of Java is so strongly advocating restricting individual rights. I wonder what is the agenda behind advocating taking computing away from decentralized PCs and putting it back into centralized servers, of moving computing power away from general purpose user programmable PCs to dumb specialized appliances.
Re:More hardware != AI (Score:3)
I don't think that's a valid comparison; evolution is essentially a random process, and one that changes only generationally (if that's a word). With AI, even if you're using some manner of evolutionary algorithm, the changes will happen much quicker; many thousands of 'mutations' a day may be checked for efficacy.
The brain is not just a large neural net, and IMHO it will take far more understanding of both sapience and sentience before AI becomes a reality.
True(ish). Just as evolution has no intrinsic purpose, so it may be possible to 'grow' an electronic brain without fully understanding it. That brain could then be used to make a smarter brain (that even it may not understand), and so it goes.
Understanding would be nice, but I don't think it'll be necessary.
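The speed gap between biological evolution and an evolutionary algorithm is easy to quantify with back-of-the-envelope numbers (both figures below are assumptions for illustration, not measurements):

```python
# rough, illustrative numbers: biological vs. simulated evolution
human_generation_years = 20           # assumed length of one biological generation
simulated_mutations_per_day = 10_000  # assumed throughput of an evolutionary algorithm

mutations_per_year = simulated_mutations_per_day * 365
# candidate "mutations" the machine tries in the time biology tries roughly one round
speedup = mutations_per_year * human_generation_years
print(f"~{speedup:,} simulated mutations per human generation")
```

Even with conservative throughput, the simulated process explores tens of millions of variants in the span of one biological generation, which is why "generational" change is a poor yardstick for machine evolution.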
Re:Being "replaced".... (Score:3)
so.... what now? (Score:3)
Speaking for myself, I know jack about nanotechnology, genetics, or robotics. The article itself went way over my head at times; I could hear the whistle as it sliced through the air. But I know enough about the necessity of evolution to be rather puzzled by what the next step would seem to be. If I understand him correctly, the only way to avoid imminent disaster is to declare a moratorium on all research and development on all the dangerous and scary forms of technology until we as a species have managed to grasp and deal with the ethical implications of what we're doing. This should be easy, since our species is so rational, cooperative, and willing to negotiate out ethical situations.
So what are we left with? The idea that our enthusiasm and passion for technology, truth, and science is hurtling us towards a cataclysm unless we as a species yank on the whoa reins of development in order to sit down and discuss whether or not this is actually a good idea. And, since humankind as a species has never been able to come to an overarching agreement on any one topic, it seems to me that we're doomed.
Which brings me back to the question I had when I finished skimming the article. What am I supposed to do about it? Unplug my computer? Join the Just Say No to Nanites consortium? Crawl into that leftover bunker from Y2K and pray that I can survive? For those of us not hobnobbing with scientific celebrities, what's the next step?
Everstar
Misunderstanding the Role of the Machine (Score:3)
People do not create machines to replace themselves and make decisions for them; they create machines to do small/repetitive tasks efficiently, to accentuate human ability, and to add to the human's capability to do the things he needs to do. It's true that this makes us more dependent on technology to some extent.
However, machines of the future, far from becoming separate, sentient entities (pardon the alliteration), will exist to increase communication and facilitate better decision-making by humans, just as they do today.
David Gelernter's (sp?) books are very interesting in this regard. In Muse in the Machine he delves a little into psychology to postulate how we could make a "creative machine," but I think his book Mirror Worlds was more on the mark: how so-called intelligent technology will be used to facilitate decisions by people.
I believe computers will eventually become smart enough to reason much like a human, and to reach intelligent conclusions within their task space. However, it is quite a huge leap to say that somehow computers will begin acting in their own interests without regard to human convenience or life.
Giving power to machines... (Score:3)
Asimov had a great story about a voting system by which a computer picked a single voter who represented all of the variables required to choose the right president.
And then the question comes down to: who do you trust most? Bill Clinton, George Bush, Ronald Reagan, Margaret Thatcher, Francois Mitterrand, Helmut Kohl, or a sentient machine?
Let's face it, machines can't fuck up half as badly as politicians have managed to do over the last 100 years.
Re:Intelligent? (Score:3)
Most of the games you mention require that all AI be done in the background, as action occurs in the foreground. Since game makers usually view pretty graphics and smooth animation as primary, they tend to avoid any AI that might take lots of CPU cycles. Of course, lots of CPU cycles is exactly what you need if you want to create an AI that has any sort of strategic concept.
This is also true of strategic games like Civilization. Those games are far more complex than chess, yet though people will wait for two minutes for a chess computer to make a move, they complain if they have to wait ten seconds between turns in Civilization.
In general, game companies pretty much just suck at AI. I suspect few people have real training in it. Game AIs I've seen range from utter crap to mediocre. A couple, like that in the "Warlords" series, do a little better. But in general, it is easier for game designers to use presets and scenario designs as in "Age of Empires", allow the computer to cheat (certain aspects of "Civilization"), or give it certain combat/production bonuses. A good AI takes real talent, while those other things are pretty easy to do.
But anyway, don't ever think that game AI has anything at all to do with AI as it is practiced at places like MIT.
Open source and human/machine interfaces (Score:3)
Ask yourself what freedoms you are willing to give up to have the advances that cybernetic enhancements may provide. And ask it in the context of the rights that UCITA confers. Would you be willing to have something implanted in your body that:
1) Can be monitored without your consent?
2) Can be deactivated by the manufacturer?
3) You are not allowed to reverse engineer?
4) You are not permitted to publicly criticize?
5) When it fails and permanently disables you, the manufacturer can disclaim all liability?
Thank you for playing. I want to be able to do my own security patches. I want to be able to compile out features that I don't trust.
Don't overlook the purpose of evolution (Score:3)
Evolution perfects you to survive in a particular niche. That's why humans behave the way we do - around the time of australopithecus it was more advantageous to see over the grass than to crawl around, so we started walking. It never became advantageous to crawl again. Then it became advantageous to use tools, so we learned how. Gradually, intelligence accreted, a particular kind of intelligence allowing us to survive in a world where other species of erect, somewhat intelligent simians (not to mention lions and tigers and bears, oh my) might try to kill us. We have a concept of "evil" only because the advantages of a structured society, which was a necessary and inevitable step in our evolution, are orthogonal to the advantages of killing your neighbor and taking his stuff. The nature of our intelligence, like the nature of our physical shape, has evolved to give us that concept.
That's why we fear machines - we fear that, like God, we will create them in our own images; only, unlike God, we won't be able to dictate their every move and thought. Indeed, this is why there are so many religious debates on these types of issues: because we don't feel we have the right to be gods. I feel that the truth is going to be quite different. Machines won't have to solve the same sorts of problems we will. They won't have to kill tigers, they won't have to protect their families, they won't have to attempt to control more territory for their resources. Replicating, evolving machines, such as the type that Bill Joy thinks will devour us whole, will have to solve entirely different sets of problems for their survival, problems which--and this is very important--have little to no overlap with our own problems. They will need electrical power, and that's about it. If they evolve, it will be to find more and more efficient ways to collect sunlight. They won't have any interest in taking over the world because that is a mere reptilian biological imperative, planted into us by the ancient necessity of having territory in which to hunt safely.
They won't be aware of us really, unless we GIVE THEM the power of thought. Like aardvarks or deer, they will only have to have as much thought as it takes to get the next meal. They don't have to be malevolent, or even sentient, to survive. And even if we do make them capable of reason (and it's almost inevitable that someone will), they will still use their reason to solve their own problems, not the problems that we think we have. Their own problems will mainly consist of the need to find a place to spread out a solar array so they can soak up all the juice they want, and maybe a little need for privacy. (Even that need is most likely a purely biological imperative, most likely occasioned by the unsanitariness of living in close quarters with lots of humans.) Machines won't be evil, machines won't try to replace us, because they're not even in the same niche as us. It would be like orange trees competing with polar bears.
BOOOORING (Score:3)
"Hey mekka, why all caps?"
Because those are two images that have been culturally ingrained since the dawn of time...
any history of science class worth its weight in silicon introduces this in the first week of class. I'll draw the pattern out for you. 1-> new invention. 2a-> doomsayers predict it will destroy us 2b-> optimists predict it will liberate us 3-> reality is that with new progress we have new responsibilities. By virtue of there being more to gain, we also have more to lose. Automobiles get us there faster, but if not operated properly they can be dangerous and they can kill us. Repeat this example ad infinitum and that's that.
It's a lot more concise than 11 pages. But I will admit, I am making an assumption that people who invent/create do try to think about the social implications.
p.s.- Searle's "Chinese room" argument can be torn to shreds by any sophomore/junior philosophy major in a matter of seconds.
Story was edited! (Score:4)
--- bar Tue Mar 21 11:11:19 2000
+++ foo Tue Mar 21 11:11:03 2000
@@ -1,6 +1,6 @@
Concealed writes "There is an article in the new Wired which talks
about the future of nanotechnology and 'intelligent machines.' Bill
- Joy, (also the creator of the Linux text editor vi) who wrote the article,
+ Joy, (also the creator of the Unix text editor vi) who wrote the article,
expresses his views on the neccesity of the human race in the near
future. " From what I can gather this is the article that the Bill Joy on Extinction
story was drawn from. Bill is a smart guy -- and this is well worth reading.
And no admission on Slashdot/Hemos' part. Shame on you.
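The patch above is standard unified-diff output. For anyone who wants to check an edit like this themselves, here is a minimal sketch of how such a diff is produced with the Unix `diff` tool (the file names `bar` and `foo` match the diff header; the one-line file contents are illustrative, not the full story text):

```shell
# Two hypothetical versions of the story blurb, named to match
# the headers in the diff above.
printf '%s\n' 'Joy, (also the creator of the Linux text editor vi)' > bar
printf '%s\n' 'Joy, (also the creator of the Unix text editor vi)' > foo

# -u requests unified format (---/+++ headers, @@ hunk markers).
# diff exits with status 1 when the files differ, so guard with || true
# if running under 'set -e'.
diff -u bar foo || true
```

Lines removed from the old version are prefixed with `-`, lines added in the new version with `+`, exactly as in the quoted patch.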
My Beef with Joy---not the Joy of Beef (Score:4)
Being "replaced".... (Score:4)
Why do people feel so threatened? Each generation is "replaced" by the next. Yet few parents see their children as threats. In a healthy relationship, we not only fail to fear succession by our progeny, we actively encourage it. Everyone wants their kids to "go further" than they themselves did.
Other than the utterly irrelevant fact that these descendants will be silicon and metal, not carbon and water, is there any difference? These AIs will be heirs to Plato and Descartes, Jefferson and King, just like we are. Unencumbered by two megayears of grungy evolution, they might even get it right. Does it matter that they are not "flesh of our flesh"? Why should flesh matter at all?
Almost everyone seems to come to the brink of recognizing the commonality but then they veer away. What defines "humanity"? Is it really 46 chromosomes in a particular order? I argue instead that it is our intelligence that makes us special, our thinking ability. I won't get dragged into the old argument whether this means cold-blooded logic only or whether it includes human emotions (but I will say that I agree with the latter.) But no matter how you define it, no matter what features of human existence make us human, those features are not inextricably linked to our "ugly bags of mostly water".
The greatest fear I have is not that we will be replaced. It's that short-sighted species-centric thinking will obscure, delay, or throw away the trans-historic opportunities we will have in the coming century.
Our descendants won't be human. (Score:5)
The problem here is the implication that one day, a bunch of humans, just like us, are suddenly going to find themselves obsolete, and either destroyed or ignored by some new, superintelligent entity that they created. But I don't see it happening that way.
Instead, what we will see is a series of gradual changes. Genetically superior humans won't appear overnight. Instead, humans will be slowly made superior, genetically. Superintelligent robots won't suddenly appear. Instead, they will slowly improve, and around the same time, I firmly believe that hardware will start being connected to human brains and human limbs.
So yes, in a thousand years, the rulers of this earth may not seem much like what we'd call human. But I'm willing to bet that if you looked over the period in between, you wouldn't see "humans" going extinct. You'd see a slow process of evolution (not darwinian, but directed) towards something greater. You'd never be able to find a dividing line between "human" and what's next.
And while that may be frightening to some, it isn't really to me. We are "greater", at least in certain anthropomorphic senses, than the ape-like creature that we are descended from. But that creature did not "go extinct". It evolved into us. Something is going to evolve from us. This doesn't necessarily mean that we're all going to die at the hands of some sort of "SkyNet" AI. It just means that we aren't the be-all and end-all of creation.
The human race won't be supplanted by "homo superior". It will become "homo superior".