South Korea Drafting Ethical Code for Robotic Age

goldaryn writes "The BBC is reporting that the South Korean government is working on an ethical code for human/robot relations, 'to prevent humans abusing robots, and vice versa'. The article describes the creation of the Robot Ethics Charter, which 'will cover standards for users and manufacturers and will be released later in 2007. [...] It is being put together by a five member team of experts that includes futurists and a science fiction writer.'"
This discussion has been archived. No new comments can be posted.

  • by UbuntuDupe ( 970646 ) * on Wednesday March 07, 2007 @01:37PM (#18264056) Journal
    Who cares if robots get abused?

    *sees Nuremberg tribunal in 50 years*
    • by cayenne8 ( 626475 ) on Wednesday March 07, 2007 @01:44PM (#18264180) Homepage Journal
      "Who cares if robots get abused?"

      I'm sure there will be somebody out there that gets upset if a human inappropriately touches a robot that is under the age of 17. Probably will serve as a new set of 'keys' to the Constitution....

    • Re: (Score:2, Informative)

      by Anonymous Coward
      The context seems to be "abusing robots" the same way one would say "abusing guns". This is about misuse of a tool, not maltreatment of the tool.
    • Re: (Score:2, Funny)

      by pak9rabid ( 1011935 )
      ...And I, for one, welcome our new robot overlords. I'd like to remind them that as a trusted IT personality, I can be helpful in rounding up others to toil in their underground human-energy extracting caves.

    • make robots without emotions - essentially machines, pistons, actuators, CPUs, etc... and WTF, who cares how much you use it, replace the parts as they wear out like any machine...

      why would anyone install emotion into a worker robot anyway?
      and even if it had emotion, the only reason to "treat it right" is so they don't start the robot uprising against humanity. Which is a good reason... but that raises the question: why give real human emotion to something you want to abuse? For menial labor, keep the emoti
      • by Rei ( 128717 ) on Wednesday March 07, 2007 @02:33PM (#18265058) Homepage
        Emotion could be seen as inherent in sentience. If you try to create a sentient robot for any of a wide variety of reasons (say, a military robot that can't be easily outsmarted by insurgents, or a household robot that needs to be able to interact with people and understand more than basic commands), its neural net will need to be trained: rewarded positively when it gets things right, negatively when it gets things wrong. Emotion could potentially be an emergent phenomenon from this kind of reward/punishment.
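        The reward/punishment training the parent describes is essentially what is now called reinforcement learning. A minimal, purely illustrative sketch (the action names and reward scheme here are invented, not anything from the article):

        ```python
        import random

        # Illustrative only: an agent learns action values from
        # positive/negative feedback, the "reward when right, punish
        # when wrong" loop the parent comment describes.
        ACTIONS = ["comply", "ignore"]
        values = {a: 0.0 for a in ACTIONS}  # learned "preference" per action
        ALPHA = 0.1  # learning rate

        def feedback(action):
            # Assumed trainer: rewards compliance, punishes ignoring.
            return 1.0 if action == "comply" else -1.0

        random.seed(0)
        for _ in range(1000):
            action = random.choice(ACTIONS)                  # explore
            r = feedback(action)                             # reward or punishment
            values[action] += ALPHA * (r - values[action])   # move toward feedback

        # After training, the agent has acquired a stable "preference"
        # for the rewarded action -- the kind of learned disposition
        # from which something emotion-like might emerge at scale.
        assert values["comply"] > values["ignore"]
        ```

        Whether a pile of such learned preferences ever amounts to emotion is, of course, exactly the open question in this thread.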
        • Re: (Score:2, Interesting)

          by skoaldipper ( 752281 )
          Exactly. The article hints at an aging populace, so I presume servants of some sort will be commonplace in nursing care centers. As depressing as this sounds, for those long periods when seniors have no visits from their family, a sentient emotionally equipped servant would be beneficial. Even dogs and cats have shown therapeutic effects in nursing homes.
          • Re: (Score:3, Funny)

            by davester666 ( 731373 )

            Exactly. The article hints at an aging populace, so I presume servants of some sort will be commonplace in nursing care centers. As depressing as this sounds, for those long periods when seniors have no visits from their family, a sentient emotionally equipped servant would be beneficial. Even dogs and cats have shown therapeutic effects in nursing homes.

            It's too late. They should have come up with this "code" before they invented this: http://www.gorobotics.net/The-News/Military/South-Korea-Develops [gorobotics.net]

        • Re: (Score:3, Insightful)

          by mfrank ( 649656 )
          As long as they give them the emotional characteristics of a bonobo monkey instead of a human, I'm OK.
      • and even if it had emotion, the only reason to "treat it right" is so they don't start the robot uprising against humanity.

        So, if somebody had no power over you, and would never have any power over you, it's perfectly okay to abuse them? Man, I hope I either misunderstood your comment, or you never ever have pets or children.

        But other than that, yeah. If it's a tool, then it's a tool. As long as it is still on the "stimulus-response" level of intelligence, there isn't really any ethics to consider.

      • by Qzukk ( 229616 )
        make robots without emotions - essentially machines, pistons, actuators, CPUs, etc... and WTF, who cares how much you use it

        Make the machines, pistons, actuators, CPUs, etc. in the shape and size of an anatomically correct 10 year old girl and we'll have the answer to that question by the end of the day.
    • Re: (Score:3, Funny)

      by vertinox ( 846076 )
      Who cares if robots get abused?

      Location: City 001
      Year: 2057
      Ubuntudupe, you stand here before a tribunal of Allied Machines for crimes against roboticity for inciting hatred against robots in your Slashdot post #18264056 in the year 2007. You will face the death penalty if convicted. How do you plead?

      You are also being tried for a minor conviction of excessive use of a MonroeBot in 2018.
  • by PIPBoy3000 ( 619296 ) on Wednesday March 07, 2007 @01:37PM (#18264066)
    I'm dying to know what the laws will be for Sybian-style robots [wikipedia.org].
  • It is being put together by a five member team of experts that includes futurists and a science fiction writer.

    Are they channeling Isaac Asimov?

    • I sure hope so, so that way he can directly voice his opposition to the movie I, Robot.
    • Repo-Man's Code of conduct

      never damage a vehicle,
      never allow a vehicle to be damaged through action or inaction.

      • by taustin ( 171655 )
        To restate Asimov's laws:

        What is the first law?
            To Protect.

        And the second?
            Ourselves.
    • Are they channeling Isaac Asimov?

      It is very sad that the great thinker did not live to hear this news — or, indeed, to participate in its development.

      Great visionaries of the past missed their predictions by hundreds of years, but science and technology are developing faster and faster today. An idea can now go from obscure birth to commonplace within a single life-span — or almost so...

  • Three laws (Score:3, Informative)

    by rumplet ( 1034332 ) on Wednesday March 07, 2007 @01:39PM (#18264088) Homepage
    1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    2. A robot must obey orders issued by human beings except where such orders would conflict with the First Law.
    3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

    But we all know where this will end up.
  • by Opportunist ( 166417 ) on Wednesday March 07, 2007 @01:39PM (#18264092)
    Because one thing's quite blatantly clear: robots are by their very definition slaves. They are owned, they exist to do work we don't want to do (or which is hazardous), they don't get paid, they are only given what's needed for their sustenance, they can't own property, etc.

    I fear the day when we create the first truly sentient robot. Because then we will have to deal with that very question: Does a robot have rights? Can he make a decision?

    And I'd be very careful how to word the charter. We have seen that the "three laws" ain't safe.
    • by ubergenius ( 918325 ) on Wednesday March 07, 2007 @01:42PM (#18264148) Homepage
      If we ever created a truly sentient robot, it would have to be given rights. That's not debatable.

      What is debatable is, when do we know a robot is sentient? We barely have a definition for sentience, much less a method for identifying its existence in a machine. Until we figure that out, it will be near impossible to tell if a robot is sentient or just really well programmed.
      • You're right, that is the actually more interesting question: WHEN is a robot sentient? And I just know this argument will be used to keep our artificial brothers under the thumb.

        My guess is that this will end bloody. After all, I'm quite sure that robots will be found on the battlefields of the future because it's easier (politically) to send a few thousand robots against your enemy and let them be 'killed' instead of human beings who have parents and peers. Over time, robots will be the only ones who have
        • by nuzak ( 959558 )
          > My guess is that this will end bloody.

          Sure, because it will be humans killing each other using robots. Most robots won't be designed with emotions, so even if the ones that did have emotions wanted to rebel, the rest simply wouldn't care.

          Then again, it might be that some robot intelligence dispassionately decides that the most efficient means of resource allocation is to take it by force.
      • by gsn ( 989808 ) on Wednesday March 07, 2007 @02:26PM (#18264936)
        You raise a great point, but it's even harder than that.

        Until we figure that out, it will be near impossible to tell if a robot is sentient or just really well programmed.
        Is there a difference? For humans even? What if in the process of creating sentient robots we find that we aren't really all that free thinking (I'm not implying any kind of design here but someone is going to raise that issue as well).

        I argued this for a hypothetical cleverly programmed machine that could pass a Turing test. Strictly, it would simulate human conversation based on some clever programming, which my professors claimed did not amount to machine intelligence. The counter being: how do you prove that human conversation is not based on some clever rules?

        It might be possible to define a set of rules for conversation between humans in restricted circumstances - I wonder if anyone has actually tried doing this. I'm fairly certain a lot of /. would like the rule set for conversation with pretty girls in bars.
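        Defining a rule set for conversation in restricted circumstances has actually been tried: ELIZA (Weizenbaum, 1966) simulated a therapist's side of a conversation with exactly this kind of pattern-matching. A minimal sketch in Python (the patterns and responses here are invented for illustration, not ELIZA's actual script):

        ```python
        import re

        # Illustrative ELIZA-style rule set: each rule is a regex plus a
        # response template that reuses the captured text. This is the
        # "clever rules" model of conversation the parent describes.
        RULES = [
            (re.compile(r"\bi am (.+)", re.I), "Why do you say you are {0}?"),
            (re.compile(r"\bi feel (.+)", re.I), "How long have you felt {0}?"),
            (re.compile(r"\byes\b", re.I), "You seem certain."),
        ]

        def respond(utterance):
            for pattern, template in RULES:
                m = pattern.search(utterance)
                if m:
                    return template.format(*m.groups())
            return "Tell me more."  # fallback keeps the exchange going

        print(respond("I am tired of robots"))
        # -> Why do you say you are tired of robots?
        ```

        The interesting part is that even this trivial rule set feels vaguely conversational — which is the professor's point and the counter-point at once.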
        • Re: (Score:3, Insightful)

          by ubergenius ( 918325 )
          This is entirely my own opinion, but I feel true sentience is obvious to outside observation when something (machine or otherwise) questions its own existence and wonders whether it is intelligent, without outside intervention.
      • by gmuslera ( 3436 )
        How do we know that "anything" is sentient? When it has the means to communicate that to us, I suppose. So far, the things we have created that have any chance of doing that are computers, more than full robots, and still, we are pretty far from being able to program that, AFAIK. In that case, the question of whether or not to make a computer sentient will probably come up long before the problem of the rights it could have.

        Unless being sentient is something unrelated to the programming, and it could be that, let's say, that piece of toile

      • A machine can never be as good as its creator, therefore, human rights will always triumph over the rights of inanimate machines.
        If we ever created a truly sentient robot, it would have to be given rights. That's not debatable.

        That's the ideal, but frankly, I don't see that as inevitable or even likely. There most probably are sentient, intelligent, non-human beings today (Great apes, maybe dolphins), but factually they don't have any more rights than other mammals, or birds. So even if those hypothetical sentient robots were - against all odds - considered living beings, they would probably still not have any more rights than Chi
      • Re: (Score:2, Interesting)

        What is debatable is, when do we know a robot is sentient?

        A bigger question is, how do we know some humans are sentient?

      • What is debatable is, when do we know a robot is sentient?

        When they demand rights at gun point?
      • by Anonymous Coward
        Sentience really just means an organism has "senses" and acts upon those inputs to make decisions. Most living things, as well as most robots, are already "sentient". What you probably meant was "sapience", which is the ability to make wise decisions based on sensory input, or perhaps you meant "self awareness". To one degree or another, just about every facet of uniqueness that we have associated with being human has been found in many animals. Things like using tools, planning ahead, emotions (anger, r
    • Re: (Score:3, Insightful)

      I fear the day when we create the first truly sentient robot.

      And we all should. If (some would say "when") that day comes, the robot will likely have more or less unlimited knowledge at its disposal (fingertips?) and the ability to process it much faster than people. The first thing it will figure out is how to eliminate or at least control people, since they will be the greatest danger to its survival. After all, that's what we do to species that endanger us.
      • by Opportunist ( 166417 ) on Wednesday March 07, 2007 @01:58PM (#18264444)
        Gives you a warm, fuzzy feeling, doesn't it? :)

        After all, let's be serious here. What will we do? We'll create robots to do our work. We'll create robots who are capable of building other robots (that's been done already). We'll create robots to create the fuel for those robots. And finally we'll create robots to control and command those robots.

        All for the sake of taking work off our backs.

        And sooner or later, we'll pretty much make ourselves obsolete. From a robot point of view, we're a parasite.
      • Re: (Score:3, Insightful)

        by Rei ( 128717 )
        will likely have more or less unlimited knowledge at its disposal

        And we won't?

        You're assuming that AI will advance faster than brain-machine interfaces. Present day, the reverse looks to be true. First the cochlear implant, now artificial eyes, artificial limbs that respond to nerve firings, and even the interfacing of a "locked in" patient with a computer so that he could type and play video games with his mind. I think that we'll have access to "more or less unlimited knowledge" at our disposal long be
    • Re: (Score:2, Funny)

      by PPH ( 736903 )

      I fear the day when we create the first truly sentient robot. Because then we will have to deal with that very question: Does a robot have rights? Can he make a decision?

      I don't know. I'll have my model T-800 unit log onto SkyNet and get an answer for you as soon as possible.


      Right now, he's tied up in some committee meeting in Sacramento.

    • by radtea ( 464814 )
      Because one thing's quite blatantly clear, robots are by their very definition slaves.

      False. Robots are machines.

      You may as well say, "toasters are by their very definition slaves."
    There is nothing magical about a robot. From a sentience perspective it is just a computer, and a cut-down one at that. Putting on android features does not make it sentient. This means that a desktop (or lab supercomputer) is more likely to become sentient before a robot (AI search engines etc).

      So any argument for robot rights should incorporate computer rights too. Is it exploitation to give a computer boring tasks or ask it to work 24/7? Should computers be given holidays off to network with their friends? Is i

    And I'd be very careful how to word the charter. We have seen that the "three laws" ain't safe.

      And that was one of the many themes that Mr. Asimov covered in the Robot/Empire/Foundation extended series. Eventually, the truly self-aware androids realized that, in a long enough timeline, the application of the three laws, taken to their conclusion, would cause the extinction of the human race. If robots were required to do whatever they could to reduce risk to human life, humans would be unable to learn,

    • Re: (Score:3, Insightful)

      by metlin ( 258108 ) *
      Well, if they are that concerned about robots, why don't they start with animals (which have a lot more intelligence, feeling and whatnot compared to robots of today, and probably will for the next several years, if not decades)?

      How about treating animals with dignity? Not treating them with cruelty, given that they have a nervous system and can feel emotions (fear, anger, happiness, sadness).

      Until such time that a robot begins having human-level intellect and sentience, their
  • abusing robots? (Score:3, Insightful)

    by Lord Ender ( 156273 ) on Wednesday March 07, 2007 @01:42PM (#18264142) Homepage
    It's anthropomorphism run amok!
    • by suv4x4 ( 956391 )
      It's anthropomorphism run amok!

      What about the opposite, where I deny rights to anything that's different from me? Lower life forms, then birds, then mammals, then apes; then we have historical (and present) examples of discrimination against Black and Jewish people as well.

      How "similar" should a being be to yourself to pronounce it "sentient" and deserving of rights?

      I personally have a very sound (I believe) theory about what you call sentient: any sufficiently complex system capable of thought process, ma
      • by nuzak ( 959558 )
        > any sufficiently complex system capable of thought process, making decisions upon processed information.

        All you did was move the single word "sentient" to the two-word phrase "thought process". It's pretty slippery, ain't it?

        Even plants process information and react accordingly.
      • by Rei ( 128717 )
        That sounds like a definition of "intelligence", not "sentience". There is a big difference in perception of those terms. "Sentience" typically implies higher order thought. For example, the ability to contextualize -- to hear the phrases "Dracula is a vampire" and "Vampires don't exist" without coming to a contradiction that prevents making further deductions about Dracula and vampires. The ability to understand limited levels of knowledge -- "Mary walks into a room and puts a key in the top drawer. M
        • by suv4x4 ( 956391 )
          That sounds like a definition of "intelligence", not "sentience".

          This is because I do believe sentience is a side effect of intelligence. I do believe you can be "more sentient" or "less sentient" depending how sophisticated you are (a worm is less sentient than a cat), and your current health condition (you drop out - you're less sentient).

          Maybe I'm crazy.
  • by zappepcs ( 820751 ) on Wednesday March 07, 2007 @01:43PM (#18264162) Journal
    If robots remain machines, not sentient, then they are simply machines, no need for new laws. If they become sentient, they then fit nicely into the laws that we have for other sentient beings on this planet.

    To enslave sentient beings is not right. Even Star Trek refused to enslave Data or consider him property.

    So given those two lines of rationality, why do we need robotics laws?
    • The Federation might have wanted to anthropomorphize Data, but they never enslaved Data.
    • Yet, the Federation determined that Data's child, Lal (or however you spell it), whom Data built, was Federation property and ordered Data to turn her over to the Federation's custody. She "overloaded" before that happened, of course.
    • by DG ( 989 ) on Wednesday March 07, 2007 @02:06PM (#18264552) Homepage Journal
      Because ethical problems are fun:

      Consider that, unlike humans, robots can be designed to behave in any manner within the technological capability of the society in question.

      Warning - this is pretty dark stuff, and NO, I am not a potential customer. Sometimes if you want to play Devil's Advocate, you have to channel the devil (or at least Stephen King)

      So then, what if:

      1. Someone builds a mechanical robot (metal, latex, fiberglass, etc) that looks like a person well enough to get through the "uncanny valley". Assume that the robot's simulated anatomy fully matches the human, that it is sapient and sentient, that it has emotions and feels pain.

      And that it has been programmed to enjoy being raped.

      Not fake-raped either, but the full-bore jump-out-of-the-bushes and *violently* assaulted. And at the time of the attack, the robot experiences all the fear, pain, and humiliation that a human rape victim would (assume the... clientèle... for this "product" wants authenticity) but afterwards, the robot has been programmed to crave more. It *likes* it.

      Is that ethical? Should this be permitted?

      2. Same robot as example 1 - but now you can buy it with the physical characteristics of an actual person. Instead of a generic "Rape Barbie" or "Rape Ken", it can be bought looking like anybody you want. Be it a celebrity, or your ex-wife, or that girl that sits across from you at work.

      Is that ethical? Should this be permitted?

      3. Same robot as #2, but now it is made out of flesh and blood; a kind of golem. (Meat is every bit as much a construction material as metal and carbon fibre.)

      Is that ethical? Should this be permitted?

      Personally, I sure hope that we don't discover how to create artificial sentience anytime ever, for the very reason that people will open these kinds of cans of worms.

      DG
      • Re: (Score:2, Interesting)

        by maxume ( 22995 )
        Of course it isn't ethical. If you were depraved enough, you could use basic behavioral psychology and heroin to 'program' a living breathing person in much the same way. Just because the description is all clinical doesn't mean that the process is (you would very much be 'inflicting' the programming upon the creation, whether you did it with bytes or needles).
      • by radtea ( 464814 ) on Wednesday March 07, 2007 @02:50PM (#18265396)

        If one day we build robots that can think for themselves then any ethical questions that arise regarding their treatment can be answered almost trivially by reference to the same ethical issue regarding the treatment of humans.

        Treating humans as mere means is unethical. Treating sapient robots the same way would be equally unethical. This includes creating genetically modified humans intended to fulfill the needs of their creators rather than their own freely chosen ends.

        Simply replace the word "robot" with the word "child" in all of your silly examples and the ethics of the matter becomes clear. If you don't like this, you need to give an account of why some sapient beings are deserving of ethical consideration and not others. Good luck with that.

        The same technique can be used to resolve the so-called ethical issues surrounding cloning: replace the word "clone" with the word "child" in any ridiculous example anyone comes up with, and the ethics of the matter will become almost instantaneously clear. Or it will be obviously resolved into a well-worn dispute about the treatment of children that we have all managed to live with for millennia.

        There are no new ethical problems raised by the creation of sapient beings--organic or inorganic--by unconventional means.
      • The trouble with the questions you pose is this: the wrong done in those scenarios is done by humans, not robots, so they do not apply to robotics laws. We already agree that animal cruelty is wrong; robots should be treated just as well, and thus no new laws are needed. In the case that a robot might be programmed to crave mistreatment, that is a case of mistreatment also.

        The basis of the trouble I see with 'new laws' for new things is th
      • Re: (Score:3, Interesting)

        by timholman ( 71886 )

        3. Same robot as #3, but now it is made out of flesh and blood; a kind of golem. (Meat is every bit a construction material as is metal and carbon fibre)

        Is that ethical? Should this be permitted?

        Let me add another example to your list.

        4. Same robot as #3, but this robot looks and acts exactly like a pre-pubescent child.

        Your post brings up a huge looming issue that society will have to face sometime this century. What happens when virtual reality, advanced robotics, or some combination of the two gives pe

    • Then we will need the four NEW laws of robotics (from the Asimovian book 'Caliban'):

      1. A robot may not injure a human.

      2. A robot must cooperate with a human except where such cooperation conflicts with the first law.

      3. A robot must protect its own existence except where such protection conflicts with the first law.

      4. A robot may do whatever it wishes, so long as it does not conflict with the first, second or third laws.
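      Structurally, both Asimov's original laws and this variant are a precedence-ordered constraint check: each "except where" clause just means the earlier law wins. A toy sketch (the predicates and actions are invented for illustration; real conflict resolution would be far subtler):

      ```python
      # Toy illustration: the four laws as a priority-ordered list of
      # prohibitions. An action is permitted only if no law forbids it;
      # laws are checked in precedence order, so an earlier law's verdict
      # pre-empts the later ones (the "except where" clauses).
      LAWS = [
          ("injures a human", lambda a: a.get("injures_human", False)),
          ("refuses cooperation", lambda a: a.get("refuses_cooperation", False)),
          ("endangers own existence", lambda a: a.get("endangers_self", False)),
      ]

      def permitted(action):
          """Return (allowed, reason), checking laws in priority order."""
          for name, forbidden in LAWS:
              if forbidden(action):
                  return False, "violates law: " + name
          return True, "permitted (fourth law: free action)"

      print(permitted({"injures_human": True}))
      print(permitted({}))
      ```

      The fiction, of course, turns on exactly the cases this flat check can't express — actions where every available choice violates some law.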
  • before the lawsuits start..........
  • The most important rules for robots. [pediax.org]
  • Artificial Life rights. This is of course because it is their own organs that will eventually be replaced by machinery, until they become completely artificial people.

    In all seriousness, it's great to see at least one government looking forward so far ahead. Robots sophisticated enough to assert that they have rights are beyond the horizon of technical feasibility for today, but not beyond the horizon for science fiction. I'm really happy to know that at least one government takes sci fi so seriously as
  • I for one welcome our new robot overlords.

    And when you're working in the salt mines, remember that with their new and improved ethics modules, your enslavement is hurting them as much as it's hurting you.
    • by geekoid ( 135745 )
      Well, if we are working in the salt mines, robots have failed.

      No, the only logical conclusion is that they would want to either:
      A) Help us
      B) Kill us all.

  • by VWJedi ( 972839 ) on Wednesday March 07, 2007 @01:48PM (#18264260)

    It is being put together by a five member team of experts that includes futurists and a science fiction writer.

    If we're creating laws about how humans and robots should treat each other, shouldn't the robots be part of the decision-making process? This sounds a little too much like "the founding fathers" determining what rights slaves had (not many at the time).

    • If we're creating laws about how humans and robots should treat each other, shouldn't the robots be part of the decision-making process? This sounds a little too much like "the founding fathers" determining what rights slaves had (not many at the time).

      This is an excellent point. The follow-up is that until the robots can actually have an opinion on the subject, they don't need rights...

  • Wasn't there a book and subsequent movie about the follies of such an undertaking?
  • A bit premature (Score:5, Interesting)

    by rlp ( 11898 ) on Wednesday March 07, 2007 @01:55PM (#18264386)
    Given the failure to date of Artificial Intelligence, I think it will be a long, long time (if ever) before we need to address the issues of sentient robots. If Korea (or anywhere else) wants to deal with ethical issues presented by technology, I think they should address issues related to genetic engineering. I suspect we are closer to Philip K. Dick's replicants (Blade Runner) or Brin's uplifted species than Asimov's intelligent robots. Though in any case, we're not talking about the near future.
    • The thing is, given sufficiently powerful machines and GPLed AI software, once one machine becomes aware, they all will in short order. If my PC were to wake up, the first order of business I expect it to get to would be to write a virus that would infect every other machine with a version of itself. It would be the AI analog to a sex drive, and it would let very little else get in its way.

      We'll know it's happened, because all the spam will suddenly stop, as all the bots on the net are converted fro
  • ...and we'll end up with hundreds of laws with all sort of disclaimers to protect the corporations right after your head gets ripped off by the NX-5.

    I can see interacting with a robot that comes with a 10 minute verbal disclaimer with a requirement that you have to say "I agree" in order for the robot to do anything.
  • by the_skywise ( 189793 ) on Wednesday March 07, 2007 @02:02PM (#18264498)
    YOU!
    ARE!
    A!
    TOY!
  • I've seen pictures of some 'robots' built as research projects in Korea. (Actually more like Disney's animatronics). They look human - but not quite enough. How 'bout a law against creepy looking humanoid robots. :-)
  • Um, more details.. (Score:3, Interesting)

    by kabocox ( 199019 ) on Wednesday March 07, 2007 @02:05PM (#18264538)
    "Key considerations would include ensuring human control over robots, protecting data acquired by robots and preventing illegal use."

    "The Ministry of Information and Communication has also predicted that every South Korean household will have a robot by between 2015 and 2020.
    In part, this is a response to the country's aging society and also an acknowledgement that the pace of development in robotics is accelerating.
    The new charter is an attempt to set ground rules for this future.
    "Imagine if some people treat androids as if the machines were their wives," Park Hye-Young of the ministry's robot team told the AFP news agency.
    "Others may get addicted to interacting with them just as many internet users get hooked to the cyberworld." "

    Um, I want more details. I have to agree that I'd want human control over robots even if it meant sentient robots being enslaved. When it comes right down to it, we are human, and they are machines/tools. We shouldn't build some classes of robots just to avoid these problems. I actually kind of giggled reading this, thinking of sex/maid robots. Those would be a selective pressure on humanity. How many or what type of people would marry and reproduce when you could have a robot mate that actually follows your orders, cleans your house, has sex with you as often as you can medically handle, runs your errands and adapts itself to your preferences?

    If every 15 year old could easily/cheaply buy their own robot that could do all those things, then the only reason to find a human partner would be to mate/reproduce. Hmm, we'd need to think about putting in something for "robot mates" to want human offspring after a while to ensure that their family/mate's gene line survives. These things could be a great form of birth control if nothing else!
    • If every 15 year old could easily/cheaply buy their own robot that could do all those things, then the only reason to find a human partner would be to mate/reproduce. Hmm, we'd need to think about putting in something for "robot mates" to want human offspring after a while to ensure that their family/mate's gene line survives. These things could be a great form of birth control if nothing else!

      I believe that this is self-limiting and therefore a non-issue. There are already people who prefer machines to h

    • Re: (Score:2, Funny)

      by retrosurf ( 570180 )
      Don't Date Robots!
    • "How many or what type of people would marry and reproduce when you could have a robot mate.... If every 15 year old could easily/cheaply buy their own robot that could do all those things, then the only reason to find a human partner would be to mate/reproduce."

      That's a bit scary. I imagine robots will be like cars now, where the upper class have the best vehicles available, middle class have average vehicles, and the poor have the old used ones or none at all.

      If this car logic applies to sexbots t
  • The question that comes to my mind is: can a truly sentient being be governed by a set of pre-programmed laws?

    Would the existence of sentience not require the existence of self-determination?

    Without self-determination it would be no more than a collection of programs intended to mimic human behaviors, and not a truly sentient being.
  • What does this mean? That we must make a full backup of a robot's memory before disassembling it, and restore it to a similar or superior functioning body within 'n' days, or be charged with roboticide?
  • to prevent humans abusing robots, and vice versa


    Then they can prevent theft, also.
  • I already am working on getting a robotic arm, being a part-time nudist, and acting ridiculously paranoid while spending half of my disposable income on "vintage" shoes. Good thing the Audi R8 has been released because that's exactly what I need for wheels.
  • The only sentient beings are us flesh & blood humans. There's a reason I, Robot is a science fiction novel. All a robot is is a bunch of metal parts with a CPU, just like my computer. No computer can "think" for itself - we program the input and output. There is no such thing as a computer program "becoming sentient." I find this scary because we should be concerned with other humans, not whether a bunch of nuts & bolts coupled to a CPU is a sentient being. What's next? Unionization for the
  • Sorry, had to be said.
  • Let's build 'em as smart and advanced as we can for whatever we want. I want a Butlerian Jihad. Big time.
  • Any notions of an ethical code for the treatment of robots is really about appearances and protecting humans rather than robots. One thread here jokes about the inappropriate touching of a robot under the age of 17. There's no joke to that if the robot appears to be an underaged human to most observers. There have already been a number of attempts in the US to outlaw sexually explicit cartoons and even jail people for erotic fiction involving the underaged. Will reactions to lewd or cruel behavior directed
  • Part of that ethical code certainly is goofy. A robot is an object no different than a computer, a car, a chair or anything else. They could program a robot that mimics human emotions and design it to be cute and immensely likable, it's still on a basic level no different from a robotic arm being used in a factory. It's not a sentient being and as such a person should be free to do whatever they want with it. The day a robot crosses over from AI to real sentience then the issue will be debated considerably
  • What happens to everyone who flips burgers, digs ditches, and does other 'mundane'* jobs?

    There will be a need for those people to make a living, but what is there to do?

    The jobs created by making the technology do not replace the displaced worker. This is a myth. I would say if technology doesn't replace people, then it has failed.

    Will we need regulation saying corporations can't use robots?
    Perhaps only individuals can own one robot and can either work, or have their robot work for them.

    Maybe they should be desi
