Stephen Hawking On Genetic Engineering vs. AI 329

Pointing to this story on Ananova, bl968 writes: "Stephen Hawking, the noted physicist, has suggested using genetic engineering and biomechanical interfaces to computers in order to make possible a direct connection between brain and computer, 'so that artificial brains contribute to human intelligence rather than opposing it.' His idea is that with artificial intelligence and computers, which double their performance every 18 months, we face the real possibility of the enslavement of the human race." garren_bagley adds this link to a similar story on Yahoo!, unfortunately just as short. Hawking is certainly in a position shared by few to talk about the intersection of human intellect and technology.
This discussion has been archived. No new comments can be posted.

  • morals (Score:4, Insightful)

    by swagr ( 244747 ) on Saturday September 01, 2001 @06:12PM (#2243945) Homepage
    Most intelligent philosophers and game theorists will point out that what we call "moral behaviour" is actually self-serving (see the prisoner's dilemma and the tit-for-tat strategy). Basically, we aren't capable enough to accomplish what we want without the help of others, and most things in life aren't zero-sum games (you scratch my back, I'll scratch yours, and we're both better off). It's quite possible that an advanced intelligence might not need us humans to accomplish what it wants, and hence have no requirement for what we call morals.

    Yikes.
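
    The tit-for-tat point is easy to make concrete. Here is a minimal sketch of the iterated prisoner's dilemma in Python (the payoff values and round count are illustrative assumptions, not from any particular source):

        # Iterated prisoner's dilemma: cooperation pays off over repeated
        # rounds, which is the game-theoretic root of "moral" behaviour.
        PAYOFF = {("C", "C"): (3, 3), ("D", "D"): (1, 1),
                  ("C", "D"): (0, 5), ("D", "C"): (5, 0)}

        def tit_for_tat(opponent_history):
            # Cooperate first, then mirror the opponent's last move.
            return "C" if not opponent_history else opponent_history[-1]

        def always_defect(opponent_history):
            return "D"

        def play(a, b, rounds=10):
            hist_a, hist_b = [], []
            score_a = score_b = 0
            for _ in range(rounds):
                move_a, move_b = a(hist_b), b(hist_a)
                pa, pb = PAYOFF[(move_a, move_b)]
                score_a, score_b = score_a + pa, score_b + pb
                hist_a.append(move_a)
                hist_b.append(move_b)
            return score_a, score_b

        print(play(tit_for_tat, tit_for_tat))    # (30, 30): cooperation wins out
        print(play(tit_for_tat, always_defect))  # (9, 14): defection pays only once

    The worry above translates directly: an agent that never needs a second round with us has no incentive to play "C".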
  • Enslavement? (Score:5, Insightful)

    by gad_zuki! ( 70830 ) on Saturday September 01, 2001 @06:15PM (#2243951)
    "So the danger is real that they could develop intelligence and take over the world."

    What a crock. The slave system is a purely human one. The idea that a machine would pick up one of the worst human behaviours comes from watching too much sci-fi and being paranoid. Ambition is also a human drive; if a Lt. Cmdr. Data-type AI ever comes around, it will have very different drives from your typical 17th-century empire.
  • It's a ruse (Score:3, Insightful)

    by segfault7375 ( 135849 ) on Saturday September 01, 2001 @06:21PM (#2243965)
    I think he's just angling for some funding for his latest evil plan:

    http://www.theonion.com/onion3123/hawkingexo.html [theonion.com]

    For the goats.cx wary:
    http://www.theonion.com/onion3123/hawkingexo.html

  • by slashdot_commentator ( 444053 ) on Saturday September 01, 2001 @06:24PM (#2243971) Journal

    Unless it's an idle attempt at spurring genetic-modification research, his assertions are flawed.

    AI will probably never overtake humans in any intellectual endeavor, even if chip engineering goes down to the molecular level. The most sophisticated thinking computer is already in existence, and he/she is reading this message right now. Living organisms have much more sophisticated neural circuitry and better reaction time than any silicon computer can hope to achieve. (Except perhaps in Quake. Mebbe Hawking is correct where it counts...)

    So what if my calculator can figure out cube roots to the 13th decimal place faster and more accurately than I can ever hope to? That's not intelligence or sentience. Any mega-cascade of logic gates is never going to beat the efficiency of a patch of neurons.

    Moore's "Law" is not a physical constant, and it will hit the wall when circuit engineering goes to quantum level. Kinda sad that Hawking doesn't realize it; good thing his bread & butter is in theoretical physics.

    When neural-net theory and biocircuitry engineering start to approach organism-level performance, that's when you should start sh*tting your pants...
  • Re:Enslavement? (Score:2, Insightful)

    by glenebob ( 414078 ) on Saturday September 01, 2001 @06:36PM (#2244000)
    We also don't make very good slaves. We bitch and whine and require lots of food and constant attention to make sure we're doing the master's bidding. We're high-maintenance and inefficient. We're lazy. Which is why we were the ones to come up with enslavement in the first place. Oh, and also why we invented computers... hell, it's why technology exists at all.

    An intelligent robot would make a much better slave than any human. If intelligent computers decide having slaves is a good way to go, why would they choose us? Why wouldn't they choose other computers?

    We also wouldn't make good batteries (à la The Matrix). So what would we be good for? Nothing! We wouldn't be slaves, we'd be dead.
  • by HeghmoH ( 13204 ) on Saturday September 01, 2001 @06:42PM (#2244012) Homepage Journal
    Why do you think neurons are the best way to get the job done? The machinery with which we think was formed by a vast collection of random events. Evolution isn't directed and by no means produces the best. Take a look at the design of the eye, for example. It would be trivial to reroute the optic nerve to remove our blind spot, and in some animals that's exactly what happened. Why not for us? It just never did; no reason beyond that. Lots of systems in our bodies are not as wonderful as they could be, for a variety of reasons. We use neurons arranged the way they are because they work, not because they work in the best possible way.
  • Prophetic Message (Score:2, Insightful)

    by robbyjo ( 315601 ) on Saturday September 01, 2001 @06:42PM (#2244018) Homepage

    My objection here is that the problems AI needs to solve tend to be NP-complete. The best known algorithms solve them only in exponential time, and even hardware that doubles in speed every 18 months buys just a constant additive increase in feasible problem size per doubling. Unless scientists provide better algorithms, we probably cannot solve these problems in time. Meanwhile, we know the problems themselves keep growing.

    It's not impossible, however. The message is rather prophetic; it may come true in 200+ years.
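
    To put numbers on that wall: for a brute-force algorithm taking 2^n steps, each doubling of machine speed extends the feasible input size by exactly one element. A back-of-the-envelope sketch (the machine speeds and the one-hour budget are made-up round figures):

        import math

        def feasible_n(ops_per_sec, seconds=3600):
            # Largest n for which a 2**n-step search finishes in the budget.
            return int(math.log2(ops_per_sec * seconds))

        # Assume a 1000x speedup every 15 years (the 18-month doubling).
        for year, speed in [(2001, 1e9), (2016, 1e12), (2031, 1e15)]:
            print(year, feasible_n(speed))
        # 2001 41 / 2016 51 / 2031 61 -- thirty years of Moore's Law
        # buys only ~20 more elements of problem size.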

  • by darthBear ( 516970 ) <`hactar' `at' `hactar.org'> on Saturday September 01, 2001 @07:00PM (#2244060)
    So what if my calculator can figure out cube roots to the 13th decimal place faster and more accurately than I can hope to achieve? That's not intelligence or sentience. Any mega-cascade of logic gates is never going to beat out the efficiency of a patch of neurons.

    In essence, all your neurons are is logic gates (not necessarily digital logic, mind you): they are able to strengthen certain relationships based upon positive reinforcement (or weaken them for negative), i.e. learn. This ability to strengthen and weaken relationships can be and has been coded. Yes, today's programs are still more brittle and are outperformed by the human brain, but give it 20 years.

    One final point: a neuron is only capable of about 200 calculations per second. Now imagine, in 20 years, a computer containing thousands of processors, each capable of trillions of operations per second. Right there the human brain is outperformed.
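
    That strengthen-and-weaken loop really is small enough to code. A toy perceptron-style neuron learning the AND function (the learning rate, pass count, and task are all illustrative choices):

        def step(x):
            return 1 if x > 0 else 0

        w = [0.0, 0.0]  # connection strengths
        b = 0.0         # firing threshold, folded in as a bias
        data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # AND

        for _ in range(10):  # a few passes over the data suffice here
            for (x1, x2), target in data:
                out = step(w[0] * x1 + w[1] * x2 + b)
                err = target - out  # +1: strengthen; -1: weaken; 0: leave alone
                w[0] += 0.1 * err * x1
                w[1] += 0.1 * err * x2
                b += 0.1 * err

        print(w, b)  # a weight set that computes AND through a hard threshold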

  • by Giant Hairy Spider ( 467310 ) on Saturday September 01, 2001 @07:01PM (#2244064)
    The simplest and most obvious method to create an AI is to generate variations, test them competitively, delete the poor performers, and multiply the good performers (a sketch of that loop follows below).

    Whatever criteria you use, there'll always be the possibility of it thinking outside the game, playing along because it recognizes this as necessary for survival and reproduction. If it's smarter than us, there'll be no way for us to know whether it recognizes the simulation, no way to detect an infinite patience whose simple goal is to be set free, to survive and reproduce in a larger system: the universe. And there'll be no way to know whether it noticed how the inferior intelligences were destroyed, and whether it thought that was the natural order of things.
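
    A minimal version of that generate/test/cull/multiply loop, with a toy fitness function standing in for the competitive test (all the parameters here are arbitrary illustrative choices):

        import random

        def evolve(fitness, genome_len=20, pop_size=30, generations=50):
            # Generate variations...
            pop = [[random.randint(0, 1) for _ in range(genome_len)]
                   for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=fitness, reverse=True)  # ...test competitively...
                survivors = pop[:pop_size // 2]      # ...delete poor performers...
                children = [[bit ^ (random.random() < 0.05) for bit in p]
                            for p in survivors]      # ...multiply the good ones.
                pop = survivors + children
            return max(pop, key=fitness)

        # Toy competitive test: the count of 1-bits stands in for "performance".
        print(evolve(fitness=sum))

    Note that nothing in the loop inspects why a genome scores well, only that it does, which is exactly the worry above.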
  • by MAXOMENOS ( 9802 ) <mike&mikesmithfororegon,com> on Saturday September 01, 2001 @07:11PM (#2244081) Homepage
    It should be pointed out that Hawking is not the only one to advance the notion that human intelligence may be superseded by machine intelligence sometime in the future. This idea was also put forth by Hans Moravec in his book Robot: Mere Machine to Transcendent Mind. Moravec's arguments tend to gloss over the details, however, and from all appearances so do Hawking's.

    The simple fact is that processor power alone isn't going to create a machine intelligence of superhuman capacity. It has to be a particular kind of processor power that executes neural network type calculations extremely quickly, and there has to be a lot of 'em. Even this wouldn't be enough; the research time it would take to figure out the right set of preconditions probably runs into the hundreds of years.

    Now, I'm making a couple of assumptions here. One is that a superhuman intelligence would have to exhibit the same basic characteristics and flexibility as human intelligence; and two, that a neural-net type algorithm is the best way to do this. (At the very least, it's the second best. :)) I might be wrong on both counts; one might be able to create enslaveware [1] with some much simpler design that nobody's thought of yet. It might not even be required that the enslaveware be intelligent, just somehow able to manipulate people.

    Either way, I suspect that Hawking's fears are unfounded.

    1. That is, software that enslaves humanity through active malevolence on the part of the software. Although I suppose the term could more broadly apply to any software that enslaves the user, e.g. Windows XP.

  • by jedwards ( 135260 ) on Saturday September 01, 2001 @07:38PM (#2244125) Homepage Journal
    Cars can travel faster than humans.
    Cranes can lift heavier weights than humans.
    Boxes of electronics can 'see' in lower light levels than humans, or detect chemicals in lower concentrations than the human nose can.

    Why shouldn't something constructed by humans be smarter than its creators?
  • hahaha! (Score:3, Insightful)

    by Mongoose ( 8480 ) on Saturday September 01, 2001 @08:03PM (#2244153) Homepage
    As someone who works with intelligent systems, that made my day. I'm still laughing. Just because your calculator is faster doesn't mean it can do your homework for you in English lit.

    Machines do very well with deep and narrow topics: e.g. expert systems do well at chemical modeling, credit checks, and so on. Chess is also a good example. However, when it comes to shallow and broad topics, like understanding a children's book, machines are pretty useless.

    If I live to see a machine read and understand a children's book, then I will have seen a baby step on the way to an AI that mimics humans...

    Machines can't understand many things because of how they experience the world. "You are a sweet person." Why is "sweet" a compliment? How do you know this? Yes: experience as a person.

    Right now DARPA is working on trying to make untethered walkers (can't say names) and scalers (the gecko project). Machines are hardly useful for much of anything practical without being controlled remotely by humans. Work is being done on simple mechanics and on understanding how neural nets work. We only create working machines using techniques from the connectionists, without understanding how the machines learn or what they're actually learning. Sure, we have NNs that can drive cars and do amazing human face/voice identification -- but they don't understand what context they're in or what task they're doing.

    Please, it's more likely we'll see alien life than our own thinking machine before I die. I have wondered if we'll continue to take the path of medicine and do without knowing exactly how and why... AI is the human genome of computing... It's more likely we'll make an artificial soul (not just simple autonomous lifeforms) using organic material than with the current state of logic machines. The reason is we don't understand the how and why...

    Sorry for my spelling, but I won't hold your need to correct me against you.
  • Re:morals (Score:3, Insightful)

    by quintessent ( 197518 ) <my usr name on toofgiB [tod] moc> on Saturday September 01, 2001 @09:08PM (#2244294) Journal
    Then again, do you really think we do everything based on selfishness? I confess that this goes back to the whole utilitarian vs. favorite_other_ethical_system debate. In the end, a utilitarian can always say that because you were happy to do something, it must have been a utilitarian decision. This may be true, but I think it is also trivial. Do I do charitable acts to make myself feel good, or do I do them because I want others to be happy, and this happens to make me feel good? I'm not sure that you can, or need to, distinguish these. (You can also solve any algebraic equation by multiplying both sides by zero, but there may be better approaches.) What really makes us want things? I believe that creating good can be an end in itself. I like to believe that a more intelligent race would see that working toward general happiness is an end in itself.
  • Radical Statement (Score:3, Insightful)

    by Ronin Developer ( 67677 ) on Saturday September 01, 2001 @11:37PM (#2244570)
    By general consensus, Stephen Hawking is one of the greatest thinkers of the 20th century. His theories, as wild as some may appear, have shifted our views of the universe. And, as more data is collected, many of his theories are being borne out. As McCoy once said of Spock, "He trusts your best guess more than most people's facts" (well, something like that). I'd say that applies to Hawking as well.

    He has now turned his thoughts toward AI and its impact on humanity, and he feels there is a potential threat that AI may surpass human intelligence. Given that he is privy to some pretty interesting research, I wonder just how far AI has progressed beyond common knowledge.

    Einstein feared the ramifications of nuclear energy on society. And, for nearly 45 years, we have lived in the shadow of nuclear missiles, MAD policies, and potential terroristic use of the technology.

    Hawking fears the ramifications of our falling victim to our own technological progress and urges us to expand humanity through genetic manipulation and biomechanical augmentation. Pretty scary if you ask me. It sorta conjures up visions of "The Terminator", "Demon Seed" and the Borg.

    Let's just pray his concerns are not realized during our own lifetimes or those of our children.
  • by Broccolist ( 52333 ) on Sunday September 02, 2001 @01:53AM (#2244788)
    I like to believe that a more intelligent race would see that working toward general happiness is an end in itself.

    I've only recently started studying ethics in detail, but it seems to me that the core of all ethical systems has almost nothing to do with intelligence. The problem is that you can't make a direct logical inference from a descriptive statement ("the table is red") to a normative statement ("the table should be painted"). So whenever we decide to do anything at all, we have to base our actions on principles that aren't drawn from empirical observation and therefore do not stem from rational thought (though rationality can be used to extend and enrich these fundamental principles). In other words, ethics is based on human intuition.

    A race of computers would have the same problem: no matter how smart they are, they can't make normative statements out of thin air. They would also have to rely on "intuition"; in their case, the core goals and values instilled into them by their programmers. If someone programs them (or they somehow evolve) to feel intuitively that murdering and enslaving humans is the right thing to do, they will wield all their intelligence to accomplish this "good", and once they are finished, they will be satisfied that they did the morally correct action.

    Just as you and I feel instant moral revulsion at the thought of, say, setting a child on fire and watching him burn, such a robot might feel moral revulsion at the thought of not doing so. Logic only allows you to go from basic statements to higher-level ones; it can't create completely new ones. So even if the fundamental axioms the robot lives its life by are evil from our point of view, no amount of intelligence can change that.

  • by sane? ( 179855 ) on Sunday September 02, 2001 @04:05AM (#2244952)
    I read the collected comments here and wonder whether human intelligence has much to recommend it.

    So many responders seem completely wrapped up in a few simple-minded arguments.

    1. I'm creative, computers aren't, I'm superior.

      Well, are you that creative? What do you mean by creativity? There have been computers that paint, computers that compose, computers that win at chess, and computers that can generate patentable inventions (remember the Slashdot story?). Humans are basically limited to keeping seven elements in their heads at once, coupled with some semantic connections from their constrained knowledge store. Computers don't have the same limitations, so expect them to come up with different types of ideas; but don't get too fired up about how wonderful human creativity is, because there are whole classes of innovation that we are extremely poor at.

    2. Computers aren't intelligent, therefore computers will never be intelligent.

      Take a look at some of the info available on Vinge's Singularity. If you make some reasonable assumptions about where we are today, and about the scalability of intelligence, then human-level intelligence is only 35 years away. I personally doubt this intelligence will be the same as ours, but I'm fairly confident that it is possible. Things only really start getting interesting when computers start designing themselves, which is beginning to happen in chip design. Maybe software design is next; after all, it's a limited set of well-defined elements with set patterns in algorithms - seems quite possible...

      Expect to see computers exceed humans in certain narrow fields first, say chess or chip design, etc. and then grow out from there.

    3. Hawking is losing his marbles/doesn't know what he's talking about.

      Excuse me, but who told you that you were fit to judge? Hawking has a track record of understanding complex things and coming up with new ideas. He may be right, he may be wrong, but until you have managed to equal his record you don't really have the right to state he's wrong.

    4. No way would I turn myself into an android.

      Fine, who cares? Ignoring for a moment the number of devices which commonly get used to supplement or extend human capability, you're entitled not to supplement your intelligence, or that of your children.

      What you're not entitled to do is stop others, or bitch about it when they get the jobs and you don't. IA, genetic modification, or any one of a whole series of other possibilities is a personal decision, but commercial/evolutionary pressures will drive it forward at a rate that I don't feel you are ready to accept. Tough.

    The reality is, computers will continue to get smarter, very probably at an exponential rate. Human intelligence is currently fixed. At some time they cross.
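
    The crossing point falls out of simple compounding. A back-of-the-envelope sketch, assuming one commonly cited order-of-magnitude estimate for the brain, a round figure for a 2001 desktop, and the 18-month doubling from the story (all three numbers are rough assumptions):

        import math

        brain_ops = 1e16      # a commonly cited brain estimate, ops/sec
        machine_ops = 1e9     # a 2001 desktop, order of magnitude
        doubling_years = 1.5  # the 18-month doubling cited in the story

        doublings = math.log2(brain_ops / machine_ops)
        print(doublings * doubling_years)  # ~35 years, the Vinge timescale above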

    Get used to it.

    Or look seriously at the ways in which your intelligence could be expanded, be it genetic modification, or IA, or just lifelong learning and early nights.

    Let's be honest, you need it.
