
Stephen Hawking On Genetic Engineering vs. AI 329

Pointing to this story on Ananova, bl968 writes: "Stephen Hawking, the noted physicist, has suggested using genetic engineering and biomechanical interfaces to computers to make possible a direct connection between brain and computer, 'so that artificial brains contribute to human intelligence rather than opposing it.' His idea is that with artificial intelligence and computers, which double their performance every 18 months, we face the real possibility of the enslavement of the human race." garren_bagley adds this link to a similar story on Yahoo!, unfortunately just as short. Hawking is certainly in a position shared by few to talk about the intersection of human intellect and technology.
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Congrats, Stephen... (Score:0, Interesting)

    by glenebob ( 414078 ) on Saturday September 01, 2001 @06:04PM (#2243924)
    ...you have just corrupted the Borg.

    Enslavement, huh... Now how does giving a computer direct control over your intelligence provide less of a chance of enslavement than AI? You could be a slave and not even know it.
  • by Saint Aardvark ( 159009 ) on Saturday September 01, 2001 @06:28PM (#2243980) Homepage Journal
    Or has this already been done?
  • Am I the only one? (Score:5, Interesting)

    by Dave Rickey ( 229333 ) on Saturday September 01, 2001 @06:32PM (#2243990)
    Am I the only person who looks at things like the new displays with laser projection onto the retina and immediately starts wishing he could buy a pair of glasses that would be a cross between Geordi La Forge vision (360-degree wraparound, with infra-red and light-amp enhancement, just for starters) and holo-projection of computer interfaces? In no more than 5 years, you'll be able to buy hardware like that (all the pieces exist; they just need a little shrinking to be viable).

    That's the ultimate projection of "Weak" cyborging, just a more advanced version of the optical aids I've had to wear since I was a child in order to have normal visual acuity. And frankly, the idea of taking the first step past that to "Strong" cyborging (the same thing, but wired to my optic nerve instead) doesn't bother me much. Nor does the idea of having a direct link of some sort to do math problems for me (just removing all the clunky limitations of a calculator).

    In fact, I don't start getting uncomfortable about the idea of cyborging myself until we're talking about storing "memory" in there. Having perfect recall of every line of code I've ever seen would be handy, but do I want to save a text transcript (or even full audio/video) of every conversation I ever had? Actually, I probably would, if I could, although I'd feel cautious at first.

    I *want* to be a cyborg, in truth. My only bitch about the coming man-machine interfaces is that it's unlikely they'll find a way to turn my physical body into a disposable peripheral before it wears out on me. Why not? How is it any less natural to store a memory of what I see in silicon that I keep internally than to keep it on videotape? Give me a perfect memory, the ability to solve any mathematical problem I can define "in my head", the ability to "see" everything around me, or even tele-project my perceptions. I'll take all of it, and love it.

    When will I cross the line from being a human using artificial aid to being a machine with biological components? Ask me in about 30 years. Maybe I'll still consider the question worth answering.

    --Dave Rickey

  • Re:Enslavement? (Score:2, Interesting)

    by uchian ( 454825 ) on Saturday September 01, 2001 @06:39PM (#2244008) Homepage
    The slave system is a purely human one. Imagining that a machine would pick up one of the worst human behaviours is simply a case of watching too much sci-fi and being paranoid.

    Unfortunately, if you were to direct someone to do what is best for themselves, you would get a slave system - you see, it's this human trait called selfishness that explains why the rich don't see why they should give to the poor, and why your everyday person doesn't give money to begging homeless people. Because it doesn't help number one.

    Thing is, most people look after themselves - the only time they look after other people is when it is in their own interests to do so, either because it makes them feel bad to think they haven't, or because they expect to gain from it in the long run - human nature's like that, you see.

    There is no reason whatsoever why computers should be any different. They are programmed by us, so they will be like us unless either a) we don't understand them well enough to program them with what happen to be the majority of humanity's values, or b) we make them so intelligent that they see our values for the self-obsessed values that they are, and choose to ignore them.

    And don't try telling me that you do things for other people because "it's the right thing to do" - you do them because doing so makes you feel good. However we look at it, everything the majority of humanity ever does is selfish.
  • by Ezubaric ( 464724 ) on Saturday September 01, 2001 @06:42PM (#2244017) Homepage
    Apart from my desire to help mad scientists everywhere achieve their dreams, one of the major reasons I've taken the unpopular stance of encouraging genetic engineering is that, without artificial correction, we have stopped natural selection from working.

    I agree with the need for society to provide safety nets for those who are less fortunate, but in our altruistic desire not to let people die, we have prevented less effective genotypes from leaving the gene pool. Moreover, those who are most well adapted, at least by our capitalistic socio-economic principles, tend to reproduce less often to prevent dilution of their money via inheritance - the true arbiter of success today (rather than genes).

    In short, genetic engineering would allow the human race to progress much faster than it would normally - we don't have lines of women waiting to mate with the smartest and most successful men (talking about the intersection, not the union - rich and stupid people breed enough). This is not a war of humans versus machines, or Morlocks vs. Eloi, but merely a reasonable means to continue "improving" the human race.
  • by PineGreen ( 446635 ) on Saturday September 01, 2001 @06:47PM (#2244030) Homepage

    Hawking certainly is in a position shared by few to talk about the intersection of human intellect and technology.


    Not really... Hawking is a scientific celebrity, which does not necessarily mean that he is a good scientist, nor does it mean that he can speak about other fields of human endeavour.
  • by Daniel Dvorkin ( 106857 ) on Saturday September 01, 2001 @06:53PM (#2244043) Homepage Journal
    "Now it does not surprise me one bit that Hawking would come up with such cockamamie nonsense. This is the same guy who claims on his site that relativity does not forbid time travel. I think Hawking should stick to his Star-Trek voodoo physics ..."

    Actually, I doubt you know enough about the frontiers of physics to say whether Hawking's ideas on time travel are "voodoo" or not. (This isn't a personal insult; there are very, very few people in the world who have that level of knowledge. I know I don't.) I think the more important point is that being brilliant in one field (e.g. physics) doesn't necessarily qualify you to make judgements in another (e.g. A.I.)

    For example, James Randi has often pointed out that scientists are easily deceived by paranormal fakers -- because as scientists, they expect to be able to uncover the truth about strange situations, but the fakers are operating in the realm of stage magic rather than science, and most scientists simply don't know anything about stage magic. It takes a stage magician to see through the tricks.

    As computers become more important in everyone's daily lives (and as much as they've done so already, I'm firmly convinced that we ain't seen nothin' yet), everyone will weigh in with their opinions on What It All Means. People like Hawking, who are used to being right about some pretty heavy-duty things, will naturally tend to believe themselves right about W.I.A.M. as well. They've got a right to their opinions, of course; the important thing is for the rest of us to treat their opinions as just that, and not words from on high.
  • by Pinball Wizard ( 161942 ) on Saturday September 01, 2001 @06:56PM (#2244049) Homepage Journal
    This echoes what Ray Kurzweil [kurzweilai.net] and Bill Joy [technetcast.com] have said.


    Three of today's greatest scientists all agree - we are looking at a future where humans become cyborgs or else risk being a loser in the game of evolution.


    We will gradually turn into machines - because economics will force us to in order to compete successfully. Those who don't will likely become slaves of those who do. Those that decide to enhance their lifespan and abilities through the use of computer enhancements will survive and thrive in the future.


    Kurzweil actually takes this thought to the point where we are just software - our DNA - and can therefore transfer the essence of our being from machine to machine once the tech is fully developed.


    I notice a lot of /.ers disagree. Hmm... who do I believe, the greatest thinkers of our time or a bunch of /.ers? Yep, the future looks pretty scary (or bright, depending on your POV).

  • Re:Enslavement? (Score:1, Interesting)

    by Anonymous Coward on Saturday September 01, 2001 @06:58PM (#2244057)
    Watch less Star Trek; read more biology and history. I have a vague recollection of some hive-dwelling insects practicing "slavery", the losers of a "war" being forced to labor for the winners. Ambition as a uniquely human drive? That's laughable; much of human ambition derives from mate selection and passing on or protecting one's genetic material. When AI comes into existence it probably won't resemble Star Trek or Asimov fantasies. Benevolence does not necessarily follow intelligence, real or artificial. Human intelligence evolved to match our environment and/or function; an artificial intelligence may do so also. AIs may be creatures of the environments we put them in and the functions we have them perform. I don't think there is anything special or magical about an intelligence housed in metal/plastic compared to an intelligence housed in meat.
  • by bl968 ( 190792 ) on Saturday September 01, 2001 @07:31PM (#2244114) Journal
    In actuality, Alan Turing said "If a person was unable to tell the difference between a conversation with a machine and a human, then the machine could reasonably be described as intelligent." This is a very basic description of the Turing test [abelard.org], which is a measure of the level of artificial intelligence of a computer system.

    Artificial Intelligence Enterprises, located in Tel Aviv, is working on a computer system [ananova.com] which it hopes will be able to be mistaken for a 5-year-old child. The company claims to have made a breakthrough. It is just a short step from a 5-year-old child to a thinking adult. In addition, you must consider mental illness, and even the potential for envy, greed, rage, and hatred, once you reach that plateau.

    You can find more AI news at The Mining Co AI pages [miningco.com]
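    The test bl968 describes amounts to a blind trial: a judge reads a reply without knowing whether it came from a person or a program and guesses its source. A minimal toy sketch of that setup (the responder and judge functions here are hypothetical stand-ins, not a real chatbot or a real evaluation protocol):

```python
import random

def human_responder(prompt):
    # Stand-in for a human participant; a canned reply for illustration.
    return "I'd rather talk about the weather."

def machine_responder(prompt):
    # Stand-in for a chatbot; a canned reply for illustration.
    return "I'd rather talk about the weather."

def naive_judge(prompt, reply):
    # A trivially weak judge that guesses at random.
    return random.choice(["human", "machine"])

def turing_trial(judge, prompt):
    """One blind trial: pick a source at random, show the judge only the
    reply, and record whether a machine reply was judged human."""
    source = random.choice(["human", "machine"])
    reply = human_responder(prompt) if source == "human" else machine_responder(prompt)
    guess = judge(prompt, reply)
    return source == "machine" and guess == "human"

fooled = sum(turing_trial(naive_judge, "What is your favourite colour?")
             for _ in range(1000))
print(f"Trials where a machine reply was judged human: {fooled} of 1000")
```

    A stronger judge, or real responders, slot in by replacing those three functions; the trial harness itself stays the same.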
  • by TheFrood ( 163934 ) on Saturday September 01, 2001 @07:34PM (#2244118) Homepage Journal
    The first person I heard put forth this idea was Vernor Vinge, the SF writer who also came up with the idea of the Singularity (the point where the pace of technological advance becomes so fast that it's impossible to predict what happens afterward.) He referred to the concept of linking human minds to computers as "Intelligence Amplification," abbreviated IA.

    Vinge suggested that IA research could be spurred by having an annual chess tournament for human/computer teams. This doesn't even require cyborg-type implants; it could be started today, simply by having the human players use a terminal to access their computers. The idea would be to set up a system that harnesses the intuition/insight/nonlinear-thinking of the human and supplements it with the raw computing power of the machine (perhaps by letting the human "try out" various moves on the computer and having the computer project the likely future positions 10 or so moves ahead.) In theory, a human-computer team should be able to trounce any existing computer program or any human playing alone.

    TheFrood
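    Vinge's division of labour can be sketched on a toy game. Chess is far too large for a short example, so the sketch below uses Nim (21 sticks, take 1-3 per turn, whoever takes the last stick wins) as a stand-in: the machine contributes exhaustive lookahead, scoring every legal move, and the "human" step chooses among the machine's scored candidates. The function names and the trivial human-choice policy are illustrative assumptions, not anything Vinge specified:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def position_value(sticks):
    """Negamax value of the position for the player to move: +1 if the
    mover can force a win, -1 if not. Taking the last stick wins."""
    if sticks == 0:
        return -1  # no sticks left: the previous player just won
    return max(-position_value(sticks - take)
               for take in (1, 2, 3) if take <= sticks)

def machine_scores(sticks):
    """The machine's contribution: score each legal move by full search."""
    return {take: -position_value(sticks - take)
            for take in (1, 2, 3) if take <= sticks}

def human_choice(scores):
    """The human's contribution (here just picking the top score; a real
    human would add the intuition the search lacks)."""
    return max(scores, key=scores.get)

scores = machine_scores(21)
print(scores)               # → {1: 1, 2: -1, 3: -1}
print(human_choice(scores)) # → 1 (leave a multiple of 4 for the opponent)
```

    In a game too big to search exhaustively (like chess), the machine would return heuristic scores at a fixed depth instead, and the human's judgment over those candidates is where the "amplification" would actually come from.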

  • by Giant Hairy Spider ( 467310 ) on Saturday September 01, 2001 @07:55PM (#2244141)
    I disagree. The evolutionary method cannot possibly create an AI within the lifetime of the experimenter. The number of variations is astronomical and our computers are too limited. The best you can hope for is a few limited domain toys.

    We've been producing "limited domain toys" for decades. It doesn't say anything about what we will do twenty or fifty years from now.

    Ever see the experiment where they modelled the evolution of the eye through random mutations? In the real world, it took many millions of years. I don't know the exact length of the experiment, but it obviously wasn't comparable to the real-world process.

    The problem now is that computers are too small, slow, and simple, with too little memory to house an intelligence remotely comparable with a human's. One can't fit, so one can't evolve.

    What happens when computers are a hundred-thousand times faster, with a hundred-thousand times more memory? What couldn't fit in a researcher's entire lifetime now will happen in a moment.

    At any rate, any development process will have failures and successes. The successes will be rewarded with survival and reproduction. If there is an intelligence, we can't know that it hasn't taken survival and reproduction as its goal, and our measure of success as merely a means to its goal.
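    The evolutionary method the parent describes can be illustrated with a toy cumulative-selection run in the style of Dawkins' "weasel" program (a stand-in, not the eye-evolution model mentioned above): random mutation plus selection of the fittest converges on a target vastly faster than blind chance would. The target string, mutation rate, and population size are arbitrary choices for the sketch:

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(candidate):
    # Number of characters matching the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(parent, rate=0.05):
    # Each character has a small chance of being randomised.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in parent)

def evolve(pop_size=100, seed=0, max_gens=10000):
    """Cumulative selection with elitism: keep the best of each
    generation. Returns the generation count at convergence."""
    random.seed(seed)
    parent = "".join(random.choice(ALPHABET) for _ in TARGET)
    for gen in range(1, max_gens + 1):
        children = [mutate(parent) for _ in range(pop_size)]
        parent = max(children + [parent], key=fitness)
        if parent == TARGET:
            return gen
    return None

print(evolve())  # converges in a few hundred generations at most
```

    The point of the toy matches the parent's argument: the bottleneck is raw compute. Multiply the population size or the generations-per-second by a hundred thousand and the same blind process explores an enormously larger design space in the same wall-clock time.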
  • by dragons_flight ( 515217 ) on Saturday September 01, 2001 @08:00PM (#2244149) Homepage
    Have we?

    Seems we haven't so much short-circuited evolution as replaced it. If we look at the American ideal of getting ahead through hard work and intelligence, then in some sense we are selecting the most suited of each generation. Of course I said ideal - it doesn't quite work out in practice - but other things being equal, someone who is better adapted to the modern world is more likely to rise.

    Once someone does succeed and gets wealthy (the typical measure of success), they convey an advantage to their offspring by way of better schooling, plentiful food, good medical care, access to all the right people, more varied experience, etc. It doesn't really even matter whether it's their own offspring, so long as they spend money to benefit skilled, well-adapted people.

    It doesn't matter that people of lesser caliber remain in the gene pool, as it's rare to see mixing among different socio-economic strata anyway. Not to mention that even at the lowest levels people will rise based on merit as well. The fact that the less well-off classes typically reproduce more doesn't matter at present, since the US has a much larger middle class than poverty class (not the case in many places worldwide), and the middle class is historically unlikely to start a revolt or anything similar that would destabilize the system we have now.

    The real potential of genetic modification isn't for restarting evolution; it's for advancing faster, and in ways that no segment of humanity is currently capable of. Waiting around for evolution to randomly generate adaptive traits is a slow process, and if we can do better with our intelligence then it might be worth it.
  • by sg_oneill ( 159032 ) on Saturday September 01, 2001 @11:12PM (#2244514)
    One thing worth mentioning is that the biological neural-net arrangement of the human brain is not necessarily the most efficient arrangement of 'stuff' to produce any sort of intelligence. It certainly is a good one, but not necessarily the best.

    I think the point is that we'd probably be all right if we created Pinocchio and the thing thought like us.

    It's that the thing probably would NOT think like us that is the concern. The thing would not necessarily *have* to be in any way recognisable as intelligent; it would simply have to 'think' quicker and deeper, and have some good reason to suppress humans (such as not being turned off!).

    The point is, they don't need to match biology, just provide a viable alternative.
  • Security, Please? (Score:3, Interesting)

    by Guppy06 ( 410832 ) on Saturday September 01, 2001 @11:20PM (#2244531)
    Interfacing your brain directly to a piece of electronics is all well and good until you start thinking about all the problems computers have nowadays with electronic attacks. Maybe I've seen Ghost in the Shell one too many times, but I want to be DAMNED sure about the computer I'm plugging directly into my brain.
  • Re:Enslavement? (Score:2, Interesting)

    by the_2nd_coming ( 444906 ) on Sunday September 02, 2001 @12:25AM (#2244647) Homepage
    It is sort of funny... when you look at small tribes of natives in the Amazon, everyone is helping everyone else; they have a community that looks out for each other. Very social.

    When you look at humans in the "civilized" world, however, we become selfish, greedy, and competitive against one another. Very asocial.

    Odd: the scarcer the resources, the more social we are; the more abundant, the more selfish we become. Perhaps it all comes back to looking out for number one. In the tribe, to look out for yourself means you need everyone else, so you look out for the rest of the tribe. But in the "civilized" world, it is easy to make it on your own, and in fact it is easy to hoard; looking out for number one gets so simple that we begin to take more than our fair share to make life even better for ourselves.

    Any way you look at it, we are selfish.
  • by Stalyn ( 662 ) on Sunday September 02, 2001 @12:26AM (#2244653) Homepage Journal
    I think many of us are making the mistake of thinking slavery means robots with whips while humans work in the fields. I do not think Hawking has this kind of slavery in mind. What IS possible is that humans will become so dependent on intelligent machinery that we cannot survive without it.

    The Unabomber (another crackpot) came to a similar conclusion. As machines get more complex, fewer and fewer human beings will be able to control them (program, maintain, produce, etc.). Right now we have a pretty good thing going: we keep the machines running and being manufactured. Over time, however, many of these duties might be handed over to more intelligent machines. Then who will have control over them? The machines themselves.

    Look at how much we depend on machinery today. The Y2K vapor crisis had people so scared of losing power that they started to panic. They firmly believed that without electricity to power their toys they would not be able to survive. Imagine 50 or 100 years from now. If we continue to hand over duties and jobs to machinery, it is only a matter of time before we WILL NOT be able to survive without them. And if machines no longer need us to maintain them, the human race will be nothing more than a domesticated cat.

