
South Korea Drafting Ethical Code for Robotic Age

goldaryn writes "The BBC is reporting that the South Korean government is working on an ethical code for human/robot relations, 'to prevent humans abusing robots, and vice versa'. The article describes the creation of the Robot Ethics Charter, which 'will cover standards for users and manufacturers and will be released later in 2007. [...] It is being put together by a five member team of experts that includes futurists and a science fiction writer.'"


  • by Opportunist ( 166417 ) on Wednesday March 07, 2007 @01:39PM (#18264092)
    Because one thing's quite blatantly clear: robots are by their very definition slaves. They are owned, they exist to do work we don't want to do (or which is hazardous), they don't get paid, they are given only what's needed for their sustenance, they can't own property, etc.

    I fear the day when we create the first truly sentient robot. Because then we will have to deal with that very question: Does a robot have rights? Can he make a decision?

    And I'd be very careful about how the charter is worded. We have seen that the "three laws" aren't safe.
  • abusing robots? (Score:3, Insightful)

    by Lord Ender ( 156273 ) on Wednesday March 07, 2007 @01:42PM (#18264142) Homepage
    It's anthropomorphism run amok!
  • by ubergenius ( 918325 ) on Wednesday March 07, 2007 @01:42PM (#18264148) Homepage
    If we ever created a truly sentient robot, it would have to be given rights. That's not debatable.

    What is debatable is: when do we know a robot is sentient? We barely have a definition of sentience, much less a method for identifying its existence in a machine. Until we figure that out, it will be nearly impossible to tell whether a robot is sentient or just really well programmed.
  • by nuzak ( 959558 ) on Wednesday March 07, 2007 @01:48PM (#18264258) Journal
    I dunno, how about actual workable laws? Have you ever read I, Robot? The stories are about the failings of those three laws. And Asimov himself said that he never intended them to be anything more than a literary device ("shaggy dog stories" is, I believe, the term he used).
  • by VWJedi ( 972839 ) on Wednesday March 07, 2007 @01:48PM (#18264260)

    It is being put together by a five member team of experts that includes futurists and a science fiction writer.

    If we're creating laws about how humans and robots should treat each other, shouldn't the robots be part of the decision-making process? This sounds a little too much like "the founding fathers" determining what rights slaves had (not many at the time).

  • by oddaddresstrap ( 702574 ) on Wednesday March 07, 2007 @01:53PM (#18264348)
    I fear the day when we create the first truly sentient robot.

    And we all should. If (some would say "when") that day comes, the robot will likely have more or less unlimited knowledge at its disposal (fingertips?) and the ability to process it much faster than people. The first thing it will figure out is how to eliminate or at least control people, since they will be the greatest danger to its survival. After all, that's what we do to species that endanger us.
  • by the_skywise ( 189793 ) on Wednesday March 07, 2007 @02:02PM (#18264498)
    YOU!
    ARE!
    A!
    TOY!
  • by HelloKitty ( 71619 ) on Wednesday March 07, 2007 @02:16PM (#18264718) Homepage

    Make robots without emotions: essentially machines, pistons, actuators, CPUs, etc. Who cares how much you use them? Replace the parts as they wear out, like any other machine.

    Why would anyone install emotion into a worker robot anyway?
    And even if it had emotion, the only reason to "treat it right" is so they don't start the robot uprising against humanity. Which is a good reason... but that raises the question: why give real human emotion to something you want to abuse? For menial labor, keep the emotions out and let it be purely a machine.

    This is a waste.
  • by Rei ( 128717 ) on Wednesday March 07, 2007 @02:18PM (#18264750) Homepage
    will likely have more or less unlimited knowledge at its disposal

    And we won't?

    You're assuming that AI will advance faster than brain-machine interfaces. At present, the reverse looks to be true. First the cochlear implant, now artificial eyes, artificial limbs that respond to nerve firings, and even the interfacing of a "locked-in" patient with a computer so that he could type and play video games with his mind. I think we'll have access to "more or less unlimited knowledge" at our disposal long before the first sentient AI.

  • by gsn ( 989808 ) on Wednesday March 07, 2007 @02:26PM (#18264936)
    You raise a great point, but it's even harder than that.

    Until we figure that out, it will be near impossible to tell if a robot is sentient or just really well programmed.
    Is there a difference? For humans, even? What if, in the process of creating sentient robots, we find that we aren't really all that free-thinking? (I'm not implying any kind of design here, but someone is going to raise that issue as well.)

    I argued this for a hypothetical, cleverly programmed machine that could pass a Turing test. Strictly speaking, it would simulate human conversation based on some clever programming, which my professors claimed did not amount to machine intelligence. The counterargument: how do you prove that human conversation is not based on some clever rules?

    It might be possible to define a set of rules for conversation between humans in restricted circumstances - I wonder if anyone has actually tried doing this. I'm fairly certain a lot of /. would like the rule set for conversation with pretty girls in bars.
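
    A toy sketch of what such a rule set might look like (the patterns and canned replies below are invented, ELIZA-style; this illustrates "conversation by clever rules," not anyone's actual system):

        import re

        # Each rule is a regex plus a response template; the first match wins.
        rules = [
            (r"i feel (.*)", "Why do you feel {0}?"),
            (r"do you think (.*)", "What makes you ask whether {0}?"),
            (r".*\brobots?\b.*", "What do robots mean to you?"),
        ]

        def respond(line):
            text = line.lower().strip().rstrip("?!. ")
            for pattern, template in rules:
                m = re.fullmatch(pattern, text)
                if m:
                    return template.format(*m.groups())
            return "Tell me more."  # fallback when no rule applies

        print(respond("I feel machines will replace us"))
        print(respond("Do you think a machine can be sentient?"))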
  • by Rei ( 128717 ) on Wednesday March 07, 2007 @02:33PM (#18265058) Homepage
    Emotion could be seen as inherent in sentience. If you try to create a sentient robot for any of a wide variety of reasons (say, a military robot that can't be easily outsmarted by insurgents, or a household robot that needs to be able to interact with people and understand more than basic commands), its neural net will need to be trained: rewarded positively when it gets things right, negatively when it gets things wrong. Emotion could potentially be an emergent phenomenon of this kind of reward/punishment.
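
    A minimal illustration of that reward/punishment idea (everything here is invented: a trivial bandit-style preference update stands in for neural-net training, with the trainer rewarding one action and punishing the other):

        import random

        actions = ["fetch", "wait"]
        preference = {a: 0.0 for a in actions}  # learned preference per action
        learning_rate = 0.1

        def choose():
            # Explore occasionally; otherwise pick the currently preferred action.
            if random.random() < 0.1:
                return random.choice(actions)
            return max(preference, key=preference.get)

        for step in range(1000):
            action = choose()
            reward = 1.0 if action == "fetch" else -1.0  # the trainer rewards "fetch"
            # Nudge the preference toward the received reward.
            preference[action] += learning_rate * (reward - preference[action])

        print(preference)  # "fetch" ends up strongly preferred over "wait"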
  • by ubergenius ( 918325 ) on Wednesday March 07, 2007 @02:35PM (#18265120) Homepage
    This is entirely my own opinion, but I feel true sentience is obvious to outside observation when something (machine or otherwise) questions its existence and wonders if it is intelligent without outside intervention.
  • Re:er, no... (Score:3, Insightful)

    by Metasquares ( 555685 ) <slashdot.metasquared@com> on Wednesday March 07, 2007 @03:06PM (#18265624) Homepage
    You assume that we would be capable of building an intelligent system while simultaneously asserting a biased opinion as truth in the agent's knowledge base. At the very least, you're going to have an agent with very distorted perceptions of the world.
  • by maxume ( 22995 ) on Wednesday March 07, 2007 @03:30PM (#18266020)
    The simplest line of reasoning compares it to child abuse. A newborn is marginally sentient, but there are all sorts of protections because it obviously will become sentient. Anybody "giving birth" to sentient programs would be forced, in the face of reasonable legal practice, to tread lightly. Will people do evil things? Yes. Will those things be accepted by society at large? No. It will mirror today's problems.
  • by metlin ( 258108 ) * on Wednesday March 07, 2007 @03:52PM (#18266296) Journal
    Well, if they are that concerned about robots, why don't they start with animals, which have far more intelligence and feeling than the robots of today, and probably will for the next several years, if not decades?

    How about treating animals with dignity? Not treating them with cruelty, given that they have a nervous system and can feel emotions (fear, anger, happiness, sadness).

    Until such time that a robot begins having human-level intellect and sentience, their point is moot.

    Start with animals, if you will, before jumping to some far-fetched fictional scenario.
  • Re:er, no... (Score:1, Insightful)

    by Anonymous Coward on Wednesday March 07, 2007 @04:18PM (#18266624)
    In my experience, a severely fucked-up worldview is no obstacle to being a normal human:

    Sadists, racists, sexists, "illegal-immigrant"-ists, and extreme liberals/conservatives (commies/nazis) can all nonetheless be very cogent.
  • by Chmcginn ( 201645 ) on Wednesday March 07, 2007 @04:34PM (#18266828) Journal
    And we made up science, culture, and the arts, as well.

    Does that make them less important?

    More importantly... what does this have to do with whether or not ethics should be applied to non-human intelligence? If the reason that a human, a dog, a tree, and a rock have different "levels" of rights is their different levels of intelligence, then why would something non-human with a roughly equal (or even higher) level of intelligence have a different set of rights?

  • by KDR_11k ( 778916 ) on Wednesday March 07, 2007 @05:00PM (#18267140)
    Animal abuse is illegal in many jurisdictions, and it's quite conceivable that we'll have robots as intelligent as most pets.
  • by kalirion ( 728907 ) on Wednesday March 07, 2007 @05:32PM (#18267508)
    Any machine that mimics human behaviour could also mimic questioning its own existence. Besides, as I mentioned in another post, as far as I'm concerned all mammals are most likely sentient. To me, sentience has to do with self-awareness, not intelligence or free will.
  • by kalirion ( 728907 ) on Wednesday March 07, 2007 @05:47PM (#18267710)
    Pascal's Wager basically says that if there's no god then it doesn't matter what you believe, but if there is a god then you had better believe in him. Even though Pascal's wager may be invalid when it comes to belief in a particular god, it may be reasonable when applied to a meaning for the universe as a whole. In other words, if the universe is pointless then it doesn't matter what you do, but if there is a purpose to the universe then it does matter what you do. If you don't know what the purpose is then I guess the first step is to figure out what the purpose is.

    To me, that's a pretty weak argument. Why should I give a rat's ass what the universe's "purpose" for me, if any, is? I only care what purpose I give myself. Even if the universe does have a purpose for me and I learn what that purpose is, if it turns out to be something I don't agree with, too bad for the universe. And yes, if God exists, this applies to Him/Her/It as well.

    I remember a short sci-fi story, I think from Clifford Simak's "Strangers in the Universe" collection, in which humans on some planet spent millennia building a computer that could answer any question. The questions were "What is the purpose of the universe?" and "What is the meaning of life?" The answers were something like "The universe has no purpose; the universe just happened" and "Life has no meaning; life is an accident." After learning this, the humans abandoned all their technology and settled into an Amish-like lifestyle. Which, as far as I'm concerned, is fine if that's what they really wanted to do, but I see no reason why people should give up just because something higher than them doesn't assign them a purpose.

    In other words, if God exists and He created humanity for no purpose other than to have someone to worship Him, would you accept that purpose, or would you attempt to make your life have a meaning beyond that?
  • by Anonymous Coward on Wednesday March 07, 2007 @05:58PM (#18267834)
    Do you believe in the human soul?

    Robots won't have souls even if they have AI. Abusing a robot will be like abusing a car -- stupid and expensive, but not cause for punishment or new laws.

    Does anyone else think it's funny they got a science FICTION writer to help them? Emphasis on the word FICTION...

    "Last year a UK government study predicted that in the next 50 years robots could demand the same rights as human beings". LOL
  • by syntaxeater ( 1070272 ) on Wednesday March 07, 2007 @06:07PM (#18267946) Homepage

    Formatted.... Sorry. :(

    A couple of the replies I've read through have taken a very "it's not biological, so it's not sentient/conscious/etc." attitude towards all this. Firstly, I'd like to ask: what defines something as conscious/sentient? Is it the ability to interact with an environment? An understanding of self? We can't even agree on what true artificial intelligence is, let alone what counts as artificial emotion (and to some, they're one and the same).

    But just to entertain the idea: I would like for some of those people who are standing on the side of "they'll never be like us" to consider this...

    Say we took a human, something we can all agree is both sentient and conscious. At what point through modification (anything from something as simple as hearing aids and eyeglasses to neurological implants and cybernetics) is the human no longer sentient or conscious? Because the chassis started from carbon as opposed to silicon, that makes it a true entity?

    We are less than a decade away (if we're not already at the entrance) from the nanotechnology and biological engineering revolution. Theoretically, our childhood immunizations could be a single syringe filled with nanomachines that act as white blood cells. Vaccinations would be obsolete because we wouldn't have to weaken a virus for them to fight it. As we slept, we could have modules on our nightstands that connect to a hospital's servers and do a full "system" check on us. They could upload information about possible virus strains, and after a "course of action" was determined to fight them, you would get an email (or whatever notification) saying that an update is available and to visit your local hospital or clinic (for security reasons, the bedside module wouldn't be able to make changes to your nanomachines). After walking through a metal detector that updates your nanomachines, you're out, on your way, and immunized against all the diseases that could afflict you (or biological-warfare outbreaks, which, when the technology first comes out, would more than likely be the main motivator for everyone to upgrade. It got us using RFIDs, right? But that's another debate about a different topic).

    Now apply that scenario to the question I posed earlier. It's just one example of how we could become more integrated into a system.

    We are uncomfortable saying a machine could be sentient, but much more uncomfortable saying we could be less.

  • by Rei ( 128717 ) on Wednesday March 07, 2007 @06:16PM (#18268094) Homepage
    Which is why I didn't try to use my ANN as an appeal to authority, unlike the person I was replying to :) That's quite true: we don't know how far we are from sentience. However, it can be said that what we have is, from a purely subjective, observational standpoint, not even close. It's very good at pattern detection; beyond that, well, we don't have much. Whether there is some small change that will lead to a big subjective leap, or whether it will take a long, tedious process of incremental improvements, who can say? I read an interesting paper a few months ago looking at why we haven't achieved more, postulating several theories. There's the "we just haven't thrown enough processing power at it" theory. There's the "there's something biological that we don't know about yet" theory. There's the "there's a combination of known biological factors that we have tried individually, but whose combined action we're missing" theory. And on, and on.

    The question is where to invest your resources. Do you simplify your model of a certain feature so that it can be simplified mathematically for more effective computing power, but risk losing the effects caused by what you simplified? Do you do learning, genetic algorithm selection of fixed nets, or take the major computing power hit and try to do both? What biological features, exactly, do you choose to include? Or do you go for an abstract system not based on biology at all, but something that should lend itself to computing better? There are so many possible tradeoffs one can make.

    A good example of how a small effect can make a big difference comes from a (Navy?) audio research project several years back that I read about at the time. They had a net for audio processing, designed to detect submarines. Their earlier models had performed very poorly, but their latest had worked incredibly well. What did they change? Just one thing: they modelled the delay for signal propagation between neurons. That one little thing took their net from performing a fraction as well as a human to performing many times better than a human.
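
    To give a concrete sense of what "modelling the delay for signal propagation" can mean, here is a toy sketch (invented, not the project's actual model) in which each input connection to a single neuron reads its signal a few time steps in the past before weighting it:

        import numpy as np

        def delayed_neuron(signals, weights, delays):
            """signals: (n_inputs, T) array; weights/delays: length-n_inputs arrays."""
            n_inputs, T = signals.shape
            total = np.zeros(T)
            for i in range(n_inputs):
                d = int(delays[i])
                # Shift input i later in time by its propagation delay, zero-padding the start.
                shifted = np.concatenate([np.zeros(d), signals[i, :T - d]]) if d > 0 else signals[i]
                total += weights[i] * shifted
            return np.tanh(total)  # simple squashing nonlinearity

        # Example: two inputs, the second arriving three time steps later than the first.
        T = 20
        t = np.linspace(0, 3, T)
        sigs = np.vstack([np.sin(t), np.cos(t)])
        print(delayed_neuron(sigs, weights=np.array([0.8, -0.5]), delays=np.array([0, 3])))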
  • by csplinter ( 734017 ) on Wednesday March 07, 2007 @06:25PM (#18268216) Journal
    I have argued this point for years. Human thought is nothing more than an illusion of free will. At the very smallest level of being are particles, which make up everything and must follow physical rules that cannot be broken. Therefore, with a magical computer that had knowledge of every particle of a human and all the stimulating particles around the human (indirectly, every particle everywhere), all known at the exact same time, as well as knowledge of a fully unifying theory of physics, there is no reason you couldn't calculate every chemical reaction, movement, and thought of the human in question. To me, when you consider this point, you have to agree that there is really no difference between a computer/program and a body/mind, respectively; it's just that no computer or program today is advanced enough to simulate a human. Although I don't find it hard to imagine that a perfect simulation of a species of bacteria within a closed system will be possible within my lifetime.
  • by mfrank ( 649656 ) on Wednesday March 07, 2007 @07:39PM (#18269292)
    As long as they give them the emotional characteristics of a bonobo instead of a human, I'm OK.
