Stephen Hawking On Genetic Engineering vs. AI

Pointing to this story on Ananova, bl968 writes: "Stephen Hawking, the noted physicist, has suggested using genetic engineering and biomechanical interfaces to computers to make possible a direct connection between brain and computer, 'so that artificial brains contribute to human intelligence rather than opposing it.' His idea is that with artificial intelligence and computers, whose performance doubles roughly every 18 months, we face the real possibility of the enslavement of the human race." garren_bagley adds this link to a similar story on Yahoo!, unfortunately just as short. Hawking is certainly in a position shared by few to talk about the intersection of human intellect and technology.
  • Re:Enslavement? (Score:2, Informative)

    by Kwil ( 53679 ) on Saturday September 01, 2001 @06:29PM (#2243983)
    What's all this talk about enslavement? Hawking didn't mention that in either article. I don't follow how "take over the world" == "enslave the human race"

    It could just as easily mean destroy the human race, or it could simply mean to take control of the world, as in, computers running everything, leaving us humans to sit back on our asses and enjoy the fruits of their labours.

    Hell, humanity might become the equivalent of the computers' pets, and as far as I'm concerned, that's not a bad thing. All my cat does is eat, sleep, and play - how often I've wished I had that lifestyle.

  • Re:He should know. (Score:3, Informative)

    by Viadd ( 173388 ) on Saturday September 01, 2001 @07:05PM (#2244075)
    MP3s of Hawking are at
    M.C. Hawking's Crib
    including tracks from "A Brief History of Rhyme" and singles such as "Why Won't Jesse Helms Just Hurry Up And Die?"
  • Re:Enslavement? (Score:2, Informative)

    by Steeltoe ( 98226 ) on Saturday September 01, 2001 @08:04PM (#2244155) Homepage
    And don't try telling me that you do things for other people because "it's the right thing to do" - you do them because doing so makes you feel good. However we look at it, everything that the majority of humanity ever does is selfish.

    Ego is what makes us separate (this is me, that is you, that is a chair - not me, etc.), so it depends how much ego you have. Most people have buckets of it, but some have very little. Those with little ego help others without much regard for how good it makes them feel, but more because they identify themselves with others. Generally, the more you help others, the more you will identify with them. So it's a developmental process. In conclusion, if being egoistic can help you start helping others, that's a good thing.

    A few years ago, I also bought into the idea that "we humans do everything on the basis of selfishness". And while it may be technically true, I don't think it tells the whole truth anymore.

    - Steeltoe
  • by carlossch ( 459196 ) on Sunday September 02, 2001 @01:08AM (#2244722)

    How can something designed, programmed, and laboured over by humans become better than the minds and intelligence of the humans who designed it?

    There are quite a few examples of endeavours in which the human mind designed things that outsmarted it. Although the claim is controversial, you simply can't say that Deep Blue does not play chess better than any of the humans who designed it.

    But the example I always like to give when such discussions are held is that of genetic programming. Genetic programming is an area of evolutionary computation that aims at automatic programming: it uses genetic-algorithm (GA) techniques to evolve programs. There are reported cases in which the evolved program outsmarted human beings quite nicely. One great book on the subject is Evolutionary Design by Computers, a collection of texts and papers edited by Peter Bentley.
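    The GA machinery the comment refers to can be illustrated with a minimal sketch. This is a plain genetic algorithm on bitstrings (the classic OneMax toy problem), not Koza-style genetic programming over program trees; the function name and all parameter values below are made up for illustration.

```python
import random

def evolve_onemax(length=20, pop_size=40, generations=100, mut_rate=0.02, seed=0):
    """Toy genetic algorithm: evolve a bitstring toward all 1s (OneMax).
    Fitness = number of 1 bits; 2-way tournament selection;
    single-point crossover; independent bit-flip mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    fitness = sum  # fitness of a bitstring is just its count of 1 bits

    for _ in range(generations):
        def tournament():
            # Pick two random individuals; the fitter one gets to reproduce.
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b

        next_pop = []
        while len(next_pop) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, length)            # single-point crossover
            child = p1[:cut] + p2[cut:]
            child = [bit ^ 1 if rng.random() < mut_rate else bit
                     for bit in child]                # bit-flip mutation
            next_pop.append(child)
        pop = next_pop

    return max(pop, key=fitness)

best = evolve_onemax()
print(f"best individual has {sum(best)} of 20 bits set")
```

    The point of the toy: nobody wrote the solution down, yet selection plus variation reliably finds it. The same loop, applied to program trees instead of bitstrings, is what lets genetic programming produce programs that surprise their authors.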

    All in all, most AI criticisms seem to degenerate into anthropocentric pseudo-arguments. Another good book to read is Dreyfus' What Computers (Still) Can't Do. Dreyfus gives good reasons why AI may be a long way off, but does so without (for the most part, at least) resorting to the argument that "I'm human and want to be the only smart being here". It is interesting that AI criticism may be the last island of anthropocentrism. First, the Sun does not go around the Earth, but the other way around. Then, that disgusting worm and I are made of the same genetic stuff. Now, a bunch of transistors beats me at chess and wants to think? Then again, this is just me.

    The ISBNs, for the paranoid: 155860605X and 0262540673.

    Semper ubi sub ubi

"It ain't over until it's over." -- Casey Stengel