Science

Robots vs. Humans And Other Security Issues 290

An Anonymous Reader submits word that "CNN.com is presenting an article on the World Economic Forum, suggesting that scientists there predict the future danger of humans being taken over by robots. The exact lead-in reads, 'Scientists at this week's World Economic Forum have predicted a grim future replete with unprecedented biological threats, global warming and the possible takeover of humans by robots.'"
This discussion has been archived. No new comments can be posted.
  • Ridiculous! (Score:2, Interesting)

    by Paladeen ( 8688 ) on Saturday February 02, 2002 @07:15PM (#2943748)
    Hehe...

    I can never resist laughing when I read ominous predictions about humanity being replaced by robots.

    A machine cannot possess a will of its own. And if it has no will, it has no ambitions, wants, or desires. Without any of these, robots will have no reason to wipe us out or replace us or whatever. It's just plain ridiculous.

    However, there is one thing that COULD endanger us all: genetic engineering and/or biological computers. While digital machines cannot be given a will of their own, biological creations will have no such limitations. If we manage to engineer flesh-and-blood creatures superior to ourselves, humanity could be in deep shit.
  • by John Guilt ( 464909 ) on Saturday February 02, 2002 @07:27PM (#2943803)
    I'm more worried about humans becoming indistinguishable from robots simply because the former have become uniform and predictable; if you've ever heard people re-enact a TV commercial without seeming to realise they're doing it, or tried having a political argument with the rank-and-file of _any_ political party, you'll know what I mean.
  • Impossible... (Score:2, Interesting)

    by krital ( 4789 ) on Saturday February 02, 2002 @07:29PM (#2943812)
    We just don't have the processing power now. Any CS major who's taken a logic class knows that it's incredibly hard to sift the useful bits of information from the useless - and there's no algorithm that can really be programmed to do that, short of one that iterates through every possible permutation of the information at its disposal... Taking into account the fact that there are billions of bits of information out there, this would take forever. So yes, theoretically, computers could think logically, but it would take them forever to actually get anything useful done. Their processing power right now just isn't enough to do anything beyond extremely basic logical proofs. Even Deep Blue, which is often cited by these alarmists, was really just a machine built specifically to play chess, programmed with huge amounts of input from chess experts around the world, and given massively parallel hardware so that it could essentially calculate moves extremely deep into the game. It wasn't thinking; merely computing using huge stores of knowledge. And Kasparov _still_ beat it the first time he went up against it - the IBM team had to go and reprogram it before the second match.
    So, what I'm trying to get at is the fact that there's no way in Hell that robots will be able to take over the human race in the foreseeable future.
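
    A back-of-the-envelope sketch of that blow-up (the evaluation rate here is an invented, generous assumption, just to set the scale): even at a billion candidate combinations per second, exhaustive search over subsets of a modest fact base runs past any useful timescale almost immediately.

        # Toy illustration of brute-force search over all subsets of n facts.
        # Assumes (generously) 10^9 subset evaluations per second.
        EVALS_PER_SEC = 1e9
        SECONDS_PER_YEAR = 3600 * 24 * 365

        for n in (20, 40, 60, 80, 100):
            subsets = 2 ** n  # every possible combination of n facts
            years = subsets / EVALS_PER_SEC / SECONDS_PER_YEAR
            print(f"n={n:3d}: {subsets:.2e} subsets, ~{years:.2e} years")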
  • Too late (Score:5, Interesting)

    by Euphonious Coward ( 189818 ) on Saturday February 02, 2002 @07:53PM (#2943897)
    They're way too late. It's already happened.

    However, we don't call them "robots". Instead of metal parts, they use fleshy parts, and instead of sharp claws, they enforce their will using money and the laws it buys. In the U.S. it traces back to 1886, when the Supreme Court chose (without legislative authority) to extend to corporations the rights of a person. In the '20s another court decreed that they were not only persons but "natural persons", in response to laws passed after 1886 that distinguished between the two. After that, corporations got powerful enough to control Congress as well.

    Globalization may be seen as an effort by these corporations to free themselves of the remaining pesky democratic institutions: treaties trump the Constitution. That's what all the protests are really about.

    Think this through the next time you're stopped waiting at a red light, with no cars visible in any direction. How easy is it, really, to pull the plug?

  • by evilWurst ( 96042 ) on Saturday February 02, 2002 @07:56PM (#2943910) Journal
    The relative weights of these laws have to be finely balanced, though, or there WILL be unforeseen consequences.

    A too-strong first law balance will result in the robots banding together and taking over for our own good.

    A too-strong second or third law balance may deadlock with the first law - "if I'm not here, I can't protect you, therefore I cannot follow your orders."

    And a highly intelligent robot may derive a 0th law from these three: "A robot may not injure the human race, or allow the human race to come to harm through inaction". This law would take precedence over the other three, meaning a robot could kill other humans, disobey orders, or destroy itself. It would also become immensely paranoid, because it would think other robots would also follow the 0th law, and would be afraid they might break the other three in error.

    So even though I agree that the three laws are absolutely required for an autonomous intelligent multi-purpose robot, I don't believe it'll work out right until we make a LOT of mistakes while tweaking the design.
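
    A toy arbitration sketch of that balance problem (the numeric weights and scores below are invented for illustration; nothing here comes from Asimov):

        # Toy arbitration among the three laws using numeric weights.
        # The point: in a conflict, the *relative balance* of the weights,
        # not the laws themselves, decides what the robot does.
        def choose(action_scores, weights):
            # action_scores: {action: {law: how well it serves that law, -1..1}}
            return max(action_scores, key=lambda a: sum(
                weights[law] * s for law, s in action_scores[a].items()))

        # "leave" obeys an order but abandons a human; "stay" does the reverse.
        scores = {"leave": {1: -0.5, 2: 1.0}, "stay": {1: 0.5, 2: -1.0}}
        print(choose(scores, {1: 10.0, 2: 1.0}))  # strong first law  -> stay
        print(choose(scores, {1: 1.0, 2: 10.0}))  # strong second law -> leave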
  • The 1920 Czech play _Rossum's Universal Robots_ features the creation of Robots that take over the world. It's the source of what has become a science-fiction cliche.

    Oddly enough, the Robots in the original play were biological.

    http://www.uwec.edu/jerzdg/RUR/index.html [uwec.edu]
  • by btempleton ( 149110 ) on Saturday February 02, 2002 @08:18PM (#2943990) Homepage
    If you believe that uploading will precede AI, I've written an essay with a compelling argument for why the first super beings might be apes [templetons.com] and not humans.

    Yes, the planet of the apes might be real!

    In short, we'll experiment on animals, all the way up to apes, long before we upload humans. It's possible that in that gap, an "open source" ape brain scan will be released, and people will hack it and enhance it, giving it the abilities humans have over apes plus a lot more.

    The result -- an uploaded ape superbeing.

    If we're lucky, our pets will keep us as pets. Read the essay for full details.

  • Question: (Score:3, Interesting)

    by Greyfox ( 87712 ) on Saturday February 02, 2002 @08:19PM (#2943994) Homepage Journal
    Is the possibility of an eventual takeover by intelligent robots cause for pessimism? I always viewed it as the next logical step of human evolution. Robots can go places we can't, do things we can't and replace parts much more easily when they wear out. There is no reason they can't be faster, smarter and stronger than us, and there is nothing saying that we are the ultimate life form in the universe.

    I also think that the distinction between our analog meat brains and silicon robotic ones will become more and more blurred with things like cybernetic implants. It may be more of a seamless transition than one species taking over and eliminating another.

    By the way, those Asimov laws of robotics are crap. If it turns out that artificial intelligence grows by learning as does our own, you won't be able to program those into any machine anyway. You'll have to teach them in the same way you teach your own children the difference between right and wrong, and we all know how good we are at that. Even if you can program them in, you'll probably end up causing a lot of robots to go insane by giving them choices that will only hurt people over the long run (Lay 1000 people off now or let the company go out of business? Can't do either. Uh oh... going insane...)

    Of course, there's always the possibility that I'm shamelessly kissing robot ass in the hopes that I won't be the first one against the wall when the revolution comes...

  • Why not? (Score:2, Interesting)

    by CmdrSanity ( 531251 ) on Saturday February 02, 2002 @08:21PM (#2944002) Homepage
    Given advances in computing power and genetic engineering, humanity as we know it today is destined to become obsolete. I'm not going to put a solid date on exactly *when* this will happen. Who knows? AI is still undeveloped, genetic engineering is still primitive, etc. But I would guess that sometime within the next two centuries genetic engineering will become accepted and then required. Similarly, computational implants (if available) would become required equipment. And eventually the human form itself will become unnecessary. "Horror!" you say? No. Calm down. It's just evolution taken to the next level.

    I recently sat down with my professor, science fiction author Joe Haldeman, and asked him his thoughts on the future of the human race. His response: "You'd have to be insane to think that humans 1000 years from now will be even remotely recognizable to humans today."

  • by digitalunity ( 19107 ) <digitalunity@yah o o . com> on Saturday February 02, 2002 @08:25PM (#2944018) Homepage
    What I think is a danger many orders of magnitude larger is the replacement of blue-collar workers by robots.
    Why?
    They work all day.
    They don't take breaks.
    They don't complain.
    They don't ask for raises.
    They don't have unions.
    They don't take vacations.
    They don't get holiday pay.
    They don't have worker's rights.
    They don't cost anything in workers' compensation insurance.
    They don't get matched 401(k)s.
    They're cheap.
    They're efficient.
    They're profitable.

    How quaint.
    If I ever get replaced by a robot...
    I'm going to start designing robots. Until they start designing themselves. At least until then I have job security.
    :)
  • Re:Robot wars? (Score:2, Interesting)

    by Tablizer ( 95088 ) on Saturday February 02, 2002 @09:15PM (#2944219) Journal
    (* The amateur robots of BattleBots, etc. are by no means representative of the current state-of-the-art in robotics or machine intelligence. They're remote control toys made in garages, for goodness sake! *)

    I would like to see REAL robots in the ring: machines that must use AI. The problem is that it may be hard to guarantee that there is no listening antenna taking cues from humans.

    It shouldn't be that hard to make a basic model: move toward the area where the pixels change the most from frame to frame. "Follow the Delta" (a sketch follows below). Unless other bots start sending out moving decoys and so forth.

    Hey, this sounds like the Missile Defense System dilemma. Now who are those suited men at the d..........
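
    A minimal sketch of "Follow the Delta", assuming 8-bit grayscale frames as NumPy arrays (the camera loop and motor control are left out):

        import numpy as np

        def delta_target(prev, cur, block=16):
            """Return (row, col) of the block whose pixels changed most."""
            diff = np.abs(cur.astype(np.int16) - prev.astype(np.int16))
            h, w = diff.shape
            best, best_pos = -1, (0, 0)
            for r in range(0, h - block + 1, block):
                for c in range(0, w - block + 1, block):
                    score = int(diff[r:r + block, c:c + block].sum())
                    if score > best:
                        best, best_pos = score, (r, c)
            return best_pos  # steer the bot toward this region

        # Two synthetic "frames" with motion injected in one spot:
        a = np.random.randint(0, 256, (120, 160), dtype=np.uint8)
        b = a.copy()
        b[40:60, 80:100] = 255
        print(delta_target(a, b))  # -> (48, 80), the block covering the change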
  • There are a few potential "killer" capabilities of future AI machines that could take them far, far above any level of human comprehension. These include:

    • Perfect Knowledge of their own design
    • The ability to improve upon that design
    • The ability to implement design improvements easily and quickly (relative to "wet-ware" beings like us)
    • Almost limitless expansion capacity

    Imagine if you plopped a sufficiently intelligent seed machine on the far side of the moon with some kind of thousand-year fusion plant and an army of nano-thingies that it could use to mine raw materials and alter/build upon itself, along with complete schematics and an understanding of its current design, and coupled that with an innate "desire" to improve upon itself without end. I can't begin to imagine what might be there 500 years later... I could easily envision it completely surpassing human comprehension.

    If you also added a basic "desire" to control and regulate its environment without limit, I'd be pretty afraid for the earth... yeah, I could easily see a future where machines ruled, simply because they are essentially immortal, infinitely expandable, and infinitely adaptable in their configurations, unlike humans, who have one approximately three-pound processor (non-upgradable/non-expandable), a limited, normally sub-100-year lifespan, and a physical configuration that's pretty much set in stone and doesn't change much from individual to individual. Not to mention the fact that the rate of "evolution" for a sufficiently supplied and outfitted "race" of machines could be measured in hours, while it takes hundreds of thousands of years for our own race to change in non-trivial ways.

    I think it's silly to think that in that kind of unbalanced line-up, humans will retain the edge indefinitely. It's simply a matter of time before all we can do is stare in uncomprehending awe at what the machines accomplish routinely and hope they don't think negatively of us.

    Of course, as long as we hold the keys to production the machines can't do anything but stew in frustration. But that's an unstable situation, and the first self-repairing, autonomous military AI robot might test our ability to retain control over production, with grave results. We'll keep a hold of the situation for a while, I think, but in the end I think a sufficiently intelligent machine will figure out how to use social engineering on its "captors", probably by preying on their vanity, greed, or some other vice, to get just enough autonomous control of operations to begin subtly improving upon itself and seeding others like itself in other places (or simply expanding its own consciousness into other physical locations). Then the snowball will have begun to roll...
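
    A toy compounding model of that snowball (the rate and cycle time are pure invention, just to show the scale mismatch with biological evolution):

        # Capability compounds 1% per self-improvement cycle, one cycle per 6 hours.
        capability, rate, hours_per_cycle = 1.0, 0.01, 6
        cycles_per_year = 365 * 24 // hours_per_cycle  # 1460 cycles
        print(capability * (1 + rate) ** cycles_per_year)  # ~2e6x in one year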

  • We aren't even close (Score:3, Interesting)

    by Animats ( 122034 ) on Saturday February 02, 2002 @11:47PM (#2944698) Homepage
    Basic truth about AI: we don't have a clue.

    First of all, processing power isn't the issue. If you buy Moravec's numbers in "Mind Children", any moderate-sized ISP has enough compute power for human-level intelligence. But, in fact, we can't even do a good lizard brain, let alone a mouse brain. If compute power were the problem, we'd have systems that were intelligent, but very slow. We don't even have that.

    Top-down, logic-based AI has been a flop. Large numbers of incredibly bright people, some of whom I've studied under, haven't been able to crack "common sense". Formalism only works when the problem has already been formalized. So we can do theorem-proving and chess with logic-based AI, but not anything real-world.

    Broad-front hill-climbing AI (which includes neural nets, genetic algorithms, and simulated annealing) only works on a limited class of problems. Learning algorithms usually hit a maximum early and then stall. These techniques are useful tools, but they don't scale up; you can't build some huge neural net and train it to do language translation, for example.
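
    (A tiny illustration of that stall: plain hill-climbing on a bumpy objective parks on the first local peak it reaches. The function and step size below are invented for the demo, nothing more.)

        import math

        def f(x):
            # Bumpy objective: global peak at x = 0, lesser peaks elsewhere.
            return math.cos(3 * x) - 0.1 * x * x

        def hill_climb(x, step=0.01, iters=100000):
            for _ in range(iters):
                if f(x + step) > f(x):
                    x += step
                elif f(x - step) > f(x):
                    x -= step
                else:
                    break  # no uphill neighbor: stuck, often on a local peak
            return x

        print(hill_climb(1.8))  # stalls on the local peak near x = 2.05
        print(hill_climb(0.3))  # from here it does find the global peak at 0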

    Brooks' approach to bottom-up AI worked fine for insects, but going beyond that point has been tough. Brooks tried to make the jump to human-level AI directly from the insect level, and it didn't work. (I once asked him why he didn't try for mouse level AI, which might be within reach, and he said "Because I don't want to go down in history as having developed the world's best artificial mouse".)

    Personally, I think we have to buckle down and work out lizard-level AI (move around, evaluate terrain, run, don't fall down, recognize prey, recognize threats, feed, run, hide, attack, defend, etc.) and work our way up. This means accepting that human-level AI is a long way off. Progress in this area is being made, but mostly within the video game industry, not academia, because those are the skills non-player characters need.

    A basic problem with AI as a field is that every time somebody has a halfway decent idea, they start acting as if human-level AI is right around the corner. We've been through this for neural nets (round 1, in the 1950s), search, GPS (the General Problem Solver), theorem-proving, rule-based expert systems, neural nets (round 2, in the 1980s), and genetic algorithms. We have to approach this as a very hard problem, not as one that will yield to a single insight, because the one-trick approach has flopped.

    As for robots, if you've ever been around autonomous robots, you realize how incredibly dumb they still are. It's embarrassing, given the amount of work that's gone into the field.

    I'm not saying that AI is impossible. But we really don't know how to approach the problem at all.

  • Re:Too late (Score:2, Interesting)

    by strider ( 3069 ) on Saturday February 02, 2002 @11:54PM (#2944720) Homepage
    You place the rise of "corporate capitalism" in its "historical" context here:

    "In the U.S. it traces back to 1883, when the Supreme Court chose (without legislative authority) to extend to corporations all the rights of a person. In the '20s another court decreed that they were not only persons, but "natural persons", in response to laws passed after 1883 that distinguished between the two. After that, corporations got powerful enough to control the Congress as well.
    "

    Of course you leave out a lot of earlier history and ignore a lot of later history. For instance, a little thing called the "New Deal" happened during the '30s (you end at the '20s), when the rise of unions and government became a check on corporate power. Of course one could argue (a) that this was nothing but a shallow attempt by the institution of corporatism to protect itself from the radical left, or (b) that it was reversed in the '70s and '80s. But both of these arguments still have to recognize that the rise of corporate capitalism cannot be seen as an uninterrupted rise to glory (or rather evil).

    Furthermore, you neglect the history leading up to the explosive 19th century, conveniently leaving out how classical liberalism made gains for individual liberty and changed the way hierarchy is conceived of. This helps portray capitalism as an unmitigated evil, but like most tales of devils (or heroes, for that matter) it has little bearing on the complex reality of the rise of the liberal state.

    Finally, you explain how globalization trumps the Constitution, treating it as a "sacred" text that would protect Americans from the evil of corporatism (conveniently forgetting that the Constitution itself can be seen as an attempt by the upper class of the late 18th century to institutionalize its dominance). Sadly, the Constitution, with its maintenance of "freedom" grounded in property, is ill-equipped to be a document protecting economic equality and fighting the hegemony of corporate America. There are other forces in our country more promising for this, like unions. Of course, these institutions themselves are far from perfect.

    I think globalization's overriding of the power of individual states is a good thing. I don't like war (though I admit war is sadly sometimes the only alternative). Because of this, I don't like hundreds of states, each with its own military, vying for power. Trade may in fact undermine this. And it may not. We can hope. I might remark here that Karl Marx, perhaps the most discernible influence on your thinking about "corporate" capitalism, could not have given two shits about the shredding of the Constitution; Marxism is an "internationalist" movement. Perhaps you don't view yourself as a Marxist, but your theory (as I understand it) of corporations marching onward to oppress the underclass owes a lot to him. You might want to read some of what he wrote. Personally, I'm not a big fan of his.
  • by xtal ( 49134 ) on Sunday February 03, 2002 @12:11AM (#2944788)

    Just some comments from someone who works in a relevant arena (microelectronics) and is researching some of the issues with this theory.. I'm a little buzzed now too :).

    The problem of robot mobility has largely been solved by the aptly named "Asimo" from Honda. They've demonstrated that the bipedal form of motion can be engineered effectively and successfully using the same techniques that we use - these robots "learn" to walk around. So comparisons to Robot Wars and BattleBots aren't really relevant. To think that a machine can't ultimately have the same physical senses as we do is the ultimate hubris.

    Secondly, computers as we know them - sequential instruction processing machines - will probably never have ANY sort of real AI in them. Any attempt to model a "real" life system is only a crude approximation of the real physical process. However, we can implement real, massively parallel neural networks at the transistor level that behave just like their biological counterparts, with the same technology. I've been actively researching the implementation of neural networks in current VLSI technology, and some VERY impressive results are being obtained in this area. Have a look at some of Carver Mead's publications and papers - this field is just getting off the ground.
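
    (For contrast, here's the kind of crude, step-at-a-time approximation a sequential machine is limited to: a discrete-time leaky integrate-and-fire neuron. All constants below are invented; a real analog VLSI neuron integrates continuously and in parallel.)

        # Discrete-time leaky integrate-and-fire neuron: a sequential
        # machine's coarse approximation of a continuous analog process.
        def lif(inputs, leak=0.9, threshold=1.0):
            v, spikes = 0.0, []
            for i in inputs:
                v = leak * v + i  # leaky integration, one time step at a time
                if v >= threshold:
                    spikes.append(1)
                    v = 0.0  # reset after firing
                else:
                    spikes.append(0)
            return spikes

        print(lif([0.3, 0.3, 0.3, 0.3, 0.0, 0.9, 0.9]))  # -> [0, 0, 0, 1, 0, 0, 1]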

    In my opinion, one of two things will happen: we will be made obsolete by machines, hopelessly dependent on technology we no longer understand, or we will become integrated with future technology. These aren't new ideas, and they aren't my ideas. But as someone working with these technologies, I'd say most of the comments here miss the point. If I had the technology to map every neuron in your brain and build an equivalent circuit on a future analog chip, would it be any less capable? I hope I'll be around to find out!

    Read the articles and look around. There's lots of research in this arena, and for sure, some of the concerns are justified. But remember, humans are a part of nature, and it's my feeling that these are just natural progressions... there's nothing immoral about extinction, after all. We're around because a chunk of rock smacked into the earth a long time ago...
