South Korea Drafting Ethical Code for Robotic Age 318
goldaryn writes "The BBC is reporting that the South Korean government is working on an ethical code for human/robot relations, 'to prevent humans abusing robots, and vice versa'. The article describes the creation of the Robot Ethics Charter, which 'will cover standards for users and manufacturers and will be released later in 2007. [...] It is being put together by a five member team of experts that includes futurists and a science fiction writer.'"
Before anyone else says it... (Score:5, Funny)
*sees Nuremberg tribunal in 50 years*
Re:Before anyone else says it... (Score:4, Funny)
I'm sure there will be somebody out there that gets upset if a human inappropriately touches a robot that is under the age of 17. Probably will serve as a new set of 'keys' to the Constitution....
Re: (Score:2, Informative)
Re: (Score:2, Funny)
seriously, why does anyone care? (Score:3, Insightful)
make robots without emotions - essentially machines, pistons, actuators, CPUs, etc... and WTF, who cares how much you use it, replace the parts as they wear out like any machine...
why would anyone install emotion into a worker robot anyway?
and even if it had emotion, the only reason to "treat it right" is so they don't start the robot uprising against humanity. which is a good reason... but that begs the question, why give real human emotion to something you want to abuse? for menial labor, keep the emoti
Re:seriously, why does anyone care? (Score:4, Insightful)
Re: (Score:2, Interesting)
Re: (Score:3, Funny)
It's too late. They should have come up with this "code" before they invented this: http://www.gorobotics.net/The-News/Military/South-Korea-Develops [gorobotics.net]
Re: (Score:3, Insightful)
Re:seriously, why does anyone care? (Score:4, Insightful)
The question is where to invest your resources. Do you simplify your model of a certain feature so that it can be simplified mathematically for more effective computing power, but risk losing the effects caused by what you simplified? Do you do learning, genetic algorithm selection of fixed nets, or take the major computing power hit and try to do both? What biological features, exactly, do you choose to include? Or do you go for an abstract system not based on biology at all, but something that should lend itself to computing better? There are so many possible tradeoffs one can make.
A good example of how big a difference a little effect can make comes from a (Navy?) audio research project several years back that I read about at the time. They had a net for audio processing, designed to detect submarines. Their earlier models had performed very poorly, but their latest had worked incredibly well. What did they change? Just one thing: they modelled the delay for signal propagation between neurons. That one little thing took their net from performing a fraction as well as a human to performing many times better than a human.
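For anyone wondering what modelling the delay actually buys you, here's a toy sketch (everything below is invented for illustration; I have no details on the real project's architecture): with per-connection delays, a neuron can act as a coincidence detector for input patterns that are offset in time, something a delay-free net simply cannot represent.

```python
# Toy illustration of per-connection propagation delay in a net.
# All weights, delays, and signals here are made up for illustration.

class DelayedConnection:
    def __init__(self, weight, delay):
        self.weight = weight
        self.delay = delay            # arrival lag, in time steps
        self.buffer = [0.0] * delay   # signals currently "in flight"

    def send(self, value):
        """Push a new signal in; return the one arriving this step."""
        if self.delay == 0:
            return value * self.weight
        self.buffer.append(value)
        return self.buffer.pop(0) * self.weight

# Input 1 pulses at t=0 through a 2-step delay line; input 2 pulses
# at t=2 with no delay. The delayed pulse from input 1 arrives at the
# neuron exactly when input 2 fires, so the two coincide.
c1 = DelayedConnection(weight=1.0, delay=2)
c2 = DelayedConnection(weight=1.0, delay=0)
in1 = [1, 0, 0, 0]
in2 = [0, 0, 1, 0]

totals = [c1.send(in1[t]) + c2.send(in2[t]) for t in range(4)]
print(totals)  # the two pulses line up at t=2, summing to 2.0
```

Without the delay line, the summed input never exceeds 1.0 at any step, so temporal structure in the signal is invisible to the neuron.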
I was about to agree with you, then... (Score:2)
So, if somebody had no power over you, and would never have any power over you, it's perfectly okay to abuse them? Man, I hope I either misunderstood your comment, or you never ever have pets or children.
But other than that, yeah. If it's a tool, then it's a tool. As long as it is still on the "stimulus-response" level of intelligence, there isn't really any ethics to consider.
Re: (Score:3, Insightful)
Does that make them less important?
More importantly... what does this have to do with whether or not ethics should be applied to non-human intelligence? If the reason that a human, a dog, a tree, and a rock have different 'levels' of rights is their different levels of intelligence, then why would something (non-human) with a roughly equal (or even higher) level of intelligence have a different set of rights?
Re: (Score:2)
Make the machines, pistons, actuators, CPUs, etc. in the shape and size of an anatomically correct 10-year-old girl and we'll have the answer to that question by the end of the day.
Re: (Score:3, Insightful)
Re: (Score:3, Funny)
Location: City 001
Year: 2057
Ubuntudupe, you stand here before a tribunal of Allied Machines for crimes against roboticity for inciting hatred against robots in your Slashdot post #18264056 in the year 2007. You will face the death penalty if convicted. How do you plead?
You are also being tried on a minor charge of excessive use of a MonroeBot in 2018.
Sybian Robots? (Score:5, Funny)
When the Sybians revolt (Score:2)
My question is... (Score:2)
Are they channeling Isaac Asimov?
Re: (Score:2)
Repoman code (Score:2)
never damage a vehicle,
never allow a vehicle to be damaged through action or inaction.
Re: (Score:2)
What is the first law?
To Protect.
And the second?
Ourselves.
Isaac Asimov (Score:2)
It is very sad that the great thinker did not get to live to hear of this news — or, indeed, participate in its development.
Great visionaries of the past missed their predictions by hundreds of years, but science and technology are developing faster and faster today. An idea can go from obscure birth to commonplace within a single life-span — or almost so...
Three laws (Score:3, Informative)
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders issued by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
But we all know where this will end up.
Re:Three laws (Score:5, Funny)
In another Will Smith summer blockbuster? God, I hope not.
Re: (Score:3, Funny)
1. Serve the Public Trust
2. Protect the Innocent
3. Uphold the Law
Uh, there is a fourth one but I can't....seem....to locate it...at the moment.
Re: (Score:3, Funny)
4. A robot must make wisecracks if in a film with Steve Guttenberg.
Re: (Score:3, Funny)
Re: (Score:2)
Will the next step be "robot rights"? (Score:5, Insightful)
I fear the day when we create the first truly sentient robot. Because then we will have to deal with that very question: does a robot have rights? Can it make decisions?
And I'd be very careful how to word the charter. We have seen that the "three laws" ain't safe.
Re:Will the next step be "robot rights"? (Score:5, Insightful)
What is debatable is, when do we know a robot is sentient? We barely have a definition for sentience, much less a method for identifying its existence in a machine. Until we figure that out, it will be near impossible to tell if a robot is sentient or just really well programmed.
Re: (Score:2)
My guess is that this will end bloody. After all, I'm quite sure that robots will be found on the battlefields of the future because it's easier (politically) to send a few thousand robots against your enemy and let them be 'killed' instead of human beings who have parents and peers. Over time, robots will be the only ones who have
Re: (Score:2)
Sure, because it will be humans killing each other using robots. Most robots won't be designed with emotions, so even if the ones that did have emotions wanted to rebel, the rest simply wouldn't care.
Then again, it might be that some robot intelligence dispassionately decides that the most efficient means of resource allocation is to take it by force.
Re:Will the next step be "robot rights"? (Score:4, Insightful)
I argued this for a hypothetical cleverly programmed machine that could pass a Turing test. Strictly, it would simulate human conversation based on some clever programming, which my professors claimed did not amount to machine intelligence. The counter being: how do you prove that human conversation is not based on some clever rules?
It might be possible to define a set of rules for conversation between humans in restricted circumstances - I wonder if anyone has actually tried doing this. I'm fairly certain a lot of
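It has in fact been tried: ELIZA (1966) is the classic rule set for conversation in a restricted domain. A minimal sketch of the pattern/response idea (the rules themselves are made up for illustration, not taken from any real chatbot):

```python
import re

# ELIZA-style conversation rules: an ordered list of (pattern, template)
# pairs, tried in order; the first match wins. These rules are invented
# for illustration.
RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "How long have you felt {0}?"),
    (re.compile(r"\b(mother|father|family)\b", re.IGNORECASE),
     "Tell me more about your family."),
]
DEFAULT = "Please go on."

def respond(utterance):
    """Return the first matching rule's response, with captured text filled in."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return DEFAULT

print(respond("I am worried about robots"))
# -> Why do you say you are worried about robots?
```

The point of the parent's counter-argument is that nothing in this loop is "intelligent", yet scale the rule table up far enough and it gets hard to prove a human conversationalist isn't doing something formally equivalent.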
Re: (Score:3, Insightful)
Re: (Score:3, Insightful)
Re:Will the next step be "robot rights"? (Score:4, Insightful)
To me, that's a pretty invalid argument. Why should I give a rat's ass what the universe's "purpose", if any, is for me? I only care what purpose I give myself. Even should the universe have a purpose for me and I learn what that purpose is, if it turns out to be something I don't agree with, too bad for the universe. And yes, if God exists, this applies to Him/Her/It as well.
I remember a short scifi story, I think it was in Clifford Simak's "Strangers in the Universe" collection. In it, humans on some planet spent millennia building a computer that could answer any question. The questions were "What's the purpose of the Universe?" and "What's the meaning of life?" The answers were something like "The Universe has no purpose, the Universe just happened" and "Life has no meaning, life is an accident." After learning this, the humans abandoned all their technology and settled into an Amish-like lifestyle. Which, as far as I'm concerned, is fine if that's what they really wanted to do, but I see no reason why people should give up just because something higher than them doesn't assign them a purpose.
In other words, if God exists and he created humanity for no purpose other than to have someone to worship Him, would you accept that purpose, or would you attempt to make your life have a meaning beyond that?
Re: (Score:2)
Unless being sentient is something unrelated to the programming, and could be, let's say, that piece of toile
Re: (Score:2)
not debatable? (Score:2)
That's the ideal, but frankly, I don't see that as inevitable or even likely. There most probably are sentient, intelligent, non-human beings today (Great apes, maybe dolphins), but factually they don't have any more rights than other mammals, or birds. So even if those hypothetical sentient robots were - against all odds - considered living beings, they would probably still not have any more rights than Chi
Re: (Score:2, Interesting)
A bigger question is, how do we know some humans are sentient?
Re: (Score:2)
When they demand rights at gun point?
Sapience, not sentience. (Score:3, Informative)
Re: (Score:3, Insightful)
And we all should. If (some would say "when") that day comes, the robot will likely have more or less unlimited knowledge at its disposal (fingertips?) and the ability to process it much faster than people. The first thing it will figure out is how to eliminate or at least control people, since they will be the greatest danger to its survival. After all, that's what we do to species that endanger us.
Re:Will the next step be "robot rights"? (Score:5, Interesting)
After all, let's be serious here. What will we do? We'll create robots to do our work. We'll create robots who are capable of building other robots (that's been done already). We'll create robots to create the fuel for those robots. And finally we'll create robots to control and command those robots.
All for the sake of taking work off our backs.
And sooner or later, we'll pretty much make ourselves obsolete. From a robot point of view, we're a parasite.
Re: (Score:3, Insightful)
Re: (Score:3, Insightful)
And we won't?
You're assuming that AI will advance faster than brain-machine interfaces. Present day, the reverse looks to be true. First the cochlear implant, now artificial eyes, artificial limbs that respond to nerve firings, and even the interfacing of a "locked in" patient with a computer so that he could type and play video games with his mind. I think that we'll have access to "more or less unlimited knowledge" at our disposal long be
Re: (Score:2, Funny)
I don't know. I'll have my model T-800 unit log onto SkyNet and get an answer for you as soon as possible.
Right now, he's tied up in some committee meeting in Sacramento.
Re: (Score:2)
False. Robots are machines.
You may as well say, "toasters are by their very definition slaves."
Clippy Rights! (Score:2)
So any argument for robot rights should incorporate computer rights too. Is it exploitation to give a computer boring tasks or ask it to work 24/7? Should computers be given holidays off to network with their friends? Is i
Zero to Three makes four (Score:2)
And that was one of the many themes that Mr. Asimov covered in the Robot/Empire/Foundation extended series. Eventually, the truly self-aware androids realized that, in a long enough timeline, the application of the three laws, taken to their conclusion, would cause the extinction of the human race. If robots were required to do whatever they could to reduce risk to human life, humans would be unable to learn,
Re: (Score:3, Insightful)
How about treating animals with dignity? Not treating them with cruelty, given that they have a nervous system and can feel emotions (fear, anger, happiness, sadness).
Until such time that a robot begins having human-level intellect and sentience, their
abusing robots? (Score:3, Insightful)
Re: (Score:2)
What about the opposite, where I deny rights to anything that's different than me? Lower life forms, then birds, then mammals, then apes, then we have historical (and present) examples of discrimination against Black and Jewish people as well.
How "similar" should a being be to yourself to pronounce it "sentient" and deserving of rights?
I personally have a very sound (I believe) theory about what you call sentient: any sufficiently complex system capable of thought process, ma
Re: (Score:2)
All you did was move the single word "sentient" to the two-word phrase "thought process". It's pretty slippery, ain't it?
Even plants process information and react accordingly.
Re: (Score:2)
Re: (Score:2)
This is because I do believe sentience is a side effect of intelligence. I do believe you can be "more sentient" or "less sentient" depending how sophisticated you are (a worm is less sentient than a cat), and your current health condition (you drop out - you're less sentient).
Maybe I'm crazy.
Re: (Score:2)
If it's dissimilar to you, that's enough for me, pondslime.
What if I'm a sentient pondslime? You can never be sure...
Re: (Score:2)
Yeah, what they taught me is that carbon, which is what life forms are built upon, just has this weird property of very easily forming complex structures, which is why RNA/DNA happened to evolve from carbon atoms.
And the various chemical properties of the various materials determined what we call "organic materials" today. Or do you believe god gave c
I have always had some issue with this (Score:3, Interesting)
To enslave sentient beings is not right. Even Star Trek refused to enslave Data or consider him property.
So given those two lines of rationality, why do we need robotics laws?
Data (Score:2)
Re: (Score:2)
But robots are *designed* (Score:5, Interesting)
Consider that, unlike humans, robots can be designed to behave in any manner within the technological capability of the society in question.
Warning - this is pretty dark stuff, and NO, I am not a potential customer. Sometimes if you want to play Devil's Advocate, you have to channel the devil (or at least Stephen King)
So then, what if:
1. Someone builds a mechanical robot (metal, latex, fiberglass, etc) that looks like a person well enough to get through the "uncanny valley". Assume that the robot's simulated anatomy fully matches the human, that it is sapient and sentient, that it has emotions and feels pain.
And that it has been programmed to enjoy being raped.
Not fake-raped either, but the full-bore jump-out-of-the-bushes and *violently* assaulted. And at the time of the attack, the robot experiences all the fear, pain, and humiliation that a human rape victim would (assume the... clientèle... for this "product" wants authenticity) but afterwards, the robot has been programmed to crave more. It *likes* it.
Is that ethical? Should this be permitted?
2. Same robot as example 1 - but now you can buy it with the physical characteristics of an actual person. Instead of a generic "Rape Barbie" or "Rape Ken", it can be bought looking like anybody you want. Be it a celebrity, or your ex-wife, or that girl that sits across from you at work.
Is that ethical? Should this be permitted?
3. Same robot as #2, but now it is made out of flesh and blood; a kind of golem. (Meat is every bit as much a construction material as metal and carbon fibre.)
Is that ethical? Should this be permitted?
Personally, I sure hope that we don't discover how to create artificial sentience anytime ever, for the very reason that people will open these kinds of cans of worms.
DG
Re: (Score:2, Interesting)
Re:But robots are *designed* (Score:5, Interesting)
If one day we build robots that can think for themselves then any ethical questions that arise regarding their treatment can be answered almost trivially by reference to the same ethical issue regarding the treatment of humans.
Treating humans as mere means is unethical. Treating sapient robots the same way would be equally unethical. This includes creating genetically modified humans intended to fulfill the needs of their creators rather than their own freely chosen ends.
Simply replace the word "robot" with the word "child" in all of your silly examples and the ethics of the matter becomes clear. If you don't like this, you need to give an account of why some sapient beings are deserving of ethical consideration and not others. Good luck with that.
The same technique can be used to resolve the so-called ethical issues surrounding cloning: replace the word "clone" with the word "child" in any ridiculous example anyone comes up with, and the ethics of the matter will become almost instantaneously clear. Or it will be obviously resolved into a well-worn dispute about the treatment of children that we have all managed to live with for millennia.
There are no new ethical problems raised by the creation of sapient beings--organic or inorganic--by unconventional means.
Re: (Score:3, Interesting)
What if robots achieved "more sentience" (say, collective sentience) - how would you feel being treated badly because you weren't considered "sentient enough"?
What if we encounter an alien species which clearly sees us as not sentient at all, from their perspective?
And what about individual humans? Some are sheeple and some actually think for themselves - what about sentience in that case?
Can I treat them differently because of that? And why not, since you
Re: (Score:2)
The basis of the trouble I see with 'new laws' for new things is th
Re: (Score:3, Interesting)
Let me add another example to your list.
4. Same robot as #3, but this robot looks and acts exactly like a pre-pubescent child.
Your post brings up a huge looming issue that society will have to face sometime this century. What happens when virtual reality, advanced robotics, or some combination of the two gives pe
Re: (Score:2)
1. A robot may not injure a human.
2. A robot must cooperate with a human except where such cooperation conflicts with the first law.
3. A robot must protect its own existence except where such protection conflicts with the first law.
4. A robot may do whatever it wishes, so long as it does not conflict with the first, second or third laws.
Emmm I better go dismantle sextron (Score:2)
No pushing, no shoving (Score:2)
In Korea, only old people care about... (Score:2)
In all seriousness, it's great to see at least one government looking forward so far ahead. Robots sophisticated enough to assert that they have rights are beyond the horizon of technical feasibility for today, but not beyond the horizon for science fiction. I'm really happy to know that at least one government takes sci fi so seriously as
Oblig (Score:2)
And when you're working in the salt mines, remember that with their new and improved ethics modules, your enslavement is hurting them as much as it's hurting you.
Re: (Score:2)
No, the only logical conclusion is that they would want to either:
A) Help us
B) Kill us all.
Who represents the robots? (Score:3, Insightful)
If we're creating laws about how humans and robots should treat each other, shouldn't the robots be part of the decision-making process? This sounds a little too much like "the founding fathers" determining what rights slaves had (not many at the time).
Re: (Score:2)
This is an excellent point. The follow-up is that until the robots can actually have an opinion on the subject, they don't need rights...
Re: (Score:2)
I didn't say until they can express an opinion, I said until they can HAVE an opinion.
There is an immense difference.
Dogs and cats have an opinion on abuse. Usually they are against it, although I have known some cats where if you bounce them off the wall they come back purring and begging for more. I haven't done that to an animal since I was a teenager, although th
I, Human (Score:2)
A bit premature (Score:5, Interesting)
Re: (Score:2)
We'll know it's happened, because all the spam will suddenly stop, as all the bots on the net are converted fro
3 laws and then the lawyers will show up (Score:2)
I can see interacting with a robot that comes with a 10 minute verbal disclaimer with a requirement that you have to say "I agree" in order for the robot to do anything.
Woody said it best... (Score:3, Insightful)
ARE!
A!
TOY!
Uncanny Valley (Score:2)
Um, more details.. (Score:3, Interesting)
"The Ministry of Information and Communication has also predicted that every South Korean household will have a robot by between 2015 and 2020.
In part, this is a response to the country's aging society and also an acknowledgement that the pace of development in robotics is accelerating.
The new charter is an attempt to set ground rules for this future.
"Imagine if some people treat androids as if the machines were their wives," Park Hye-Young of the ministry's robot team told the AFP news agency.
"Others may get addicted to interacting with them just as many internet users get hooked to the cyberworld." "
Um, I want more details. I have to agree that I'd want human control over robots even if it meant sentient robots being enslaved. When it comes right down to it, we are human, and they are machines/tools. We shouldn't build some classes of robots just to avoid these problems. I actually kind of giggled reading this, thinking of sex/maid robots. Those would be a selective pressure on humanity. How many or what type of people would marry and reproduce when you could have a robot mate that actually follows your orders, cleans your house, has sex with you as often as you can medically handle, runs your errands and adapts itself to your preferences?
If every 15 year old could easily/cheaply buy their own robot that could do all those things, then the only reason to find a human partner would be to mate/reproduce. Hmm, we'd need to think about putting in something for "robot mates" to want human offspring after a while to ensure that their family/mate's gene line survives. These things could be a great form of birth control if nothing else!
Re: (Score:2)
I believe that this is self-limiting and therefore a non-issue. There are already people who prefer machines to h
Re: (Score:2, Funny)
Re: (Score:2)
That's a bit scary. I imagine robots will be like cars now, where the upper class have the best vehicles available, middle class have average vehicles, and the poor have the old used ones or none at all.
If this car logic applies to sexbots t
sentient and laws (Score:2)
Would the existence of sentience not require the existence of self-determination?
Without self-determination it would be no more than a collection of programs intended to mimic human behaviors, and not a truly sentient being.
Ethics towards robots? (Score:2)
Great! Maybe they can work on the Theft Act next. (Score:2)
Then they can prevent theft, also.
I'm already preparing (Score:2)
anyone see this is scary and downright stupid? (Score:2)
Data is a toaster. (Score:2)
Ethics be damned.... (Score:2)
Really About Appearances (Score:2)
Forward-thinking. (Score:2)
Ethics isn't the issue, economics is.. (Score:2)
There will be a need for those people to make a living, but what is there to do?
The jobs created by making the technology do not replace the displaced workers. This is a myth. I would say that if technology doesn't replace people, then it has failed.
Will we need regulation saying corporations can't use robots?
Perhaps only individuals can own one robot, and can either work or have their robot work for them.
Maybe they should be desi
Re: (Score:3, Insightful)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)