South Korea Drafting Ethical Code for Robotic Age
goldaryn writes "The BBC is reporting that the South Korean government is working on an ethical code for human/robot relations, 'to prevent humans abusing robots, and vice versa'. The article describes the creation of the Robot Ethics Charter, which 'will cover standards for users and manufacturers and will be released later in 2007. [...] It is being put together by a five member team of experts that includes futurists and a science fiction writer.'"
Will the next step be "robot rights"? (Score:5, Insightful)
I fear the day when we create the first truly sentient robot. Because then we will have to deal with that very question: Does a robot have rights? Can it make its own decisions?
And I'd be very careful about how we word the charter. We have seen that the "three laws" aren't safe.
abusing robots? (Score:3, Insightful)
Re:Will the next step be "robot rights"? (Score:5, Insightful)
What is debatable is: when do we know a robot is sentient? We barely have a definition for sentience, much less a method for identifying its existence in a machine. Until we figure that out, it will be near impossible to tell if a robot is sentient or just really well programmed.
Re:What more do you need than... (Score:3, Insightful)
Who represents the robots? (Score:3, Insightful)
If we're creating laws about how humans and robots should treat each other, shouldn't the robots be part of the decision-making process? This sounds a little too much like "the founding fathers" determining what rights slaves had (not many at the time).
Re:Will the next step be "robot rights"? (Score:3, Insightful)
And we all should. If (some would say "when") that day comes, the robot will likely have more or less unlimited knowledge at its disposal (fingertips?) and the ability to process it much faster than people. The first thing it will figure out is how to eliminate or at least control people, since they will be the greatest danger to its survival. After all, that's what we do to species that endanger us.
Woody said it best... (Score:3, Insightful)
ARE!
A!
TOY!
seriously, why does anyone care? (Score:3, Insightful)
make robots without emotions - essentially machines, pistons, actuators, CPUs, etc... and WTF, who cares how much you use it, replace the parts as they wear out like any machine...
why would anyone install emotion into a worker robot anyway?
and even if it had emotion, the only reason to "treat it right" is so they don't start the robot uprising against humanity. which is a good reason... but that raises the question: why give real human emotion to something you want to abuse? for menial labor, keep the emotions out, let it be purely a machine.
this is a waste.
Re:Will the next step be "robot rights"? (Score:3, Insightful)
And we won't?
You're assuming that AI will advance faster than brain-machine interfaces. Present day, the reverse looks to be true. First the cochlear implant, now artificial eyes, artificial limbs that respond to nerve firings, and even the interfacing of a "locked in" patient with a computer so that he could type and play video games with his mind. I think that we'll have access to "more or less unlimited knowledge" at our disposal long before the first sentient AI.
Re:Will the next step be "robot rights"? (Score:4, Insightful)
I argued this for a hypothetical cleverly programmed machine that could pass a Turing test. Strictly, it would simulate human conversation based on some clever programming, which my professors claimed did not amount to machine intelligence. The counter being: how do you prove that human conversation is not itself based on some clever rules?
It might be possible to define a set of rules for conversation between humans in restricted circumstances - I wonder if anyone has actually tried doing this. I'm fairly certain a lot of
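People have in fact tried defining conversation rules for restricted circumstances; ELIZA-style programs are the classic example. A minimal sketch of the idea (the rules below are made up for illustration, not from any real system):

```python
import re

# A few hypothetical pattern -> response rules, ELIZA-style.
# The captured fragment is spliced into a canned reply to fake "understanding".
RULES = [
    (re.compile(r"\bI am (.+)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.I), "What makes you feel {0}?"),
    (re.compile(r"\bbecause (.+)", re.I), "Is that the real reason?"),
]
DEFAULT = "Tell me more."

def respond(utterance: str) -> str:
    """Return the first matching rule's reply, or a stock fallback."""
    for pattern, template in RULES:
        m = pattern.search(utterance)
        if m:
            fragment = m.group(1).rstrip(".!?")  # drop trailing punctuation
            return template.format(fragment)
    return DEFAULT

print(respond("I am worried about robot rights."))
# -> Why do you say you are worried about robot rights?
```

A handful of rules like these can feel eerily conversational in a narrow setting, which is exactly the professors' point: the output looks intelligent without anything we'd call understanding behind it.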
Re:seriously, why does anyone care? (Score:4, Insightful)
Re:Will the next step be "robot rights"? (Score:3, Insightful)
Re:er, no... (Score:3, Insightful)
Re:But robots are *designed* (Score:2, Insightful)
Re:Will the next step be "robot rights"? (Score:3, Insightful)
How about treating animals with dignity, and not treating them with cruelty, given that they have nervous systems and can feel emotions (fear, anger, happiness, sadness)?
Until a robot has human-level intellect and sentience, the point is moot.
Start with animals, if you will, before jumping to some far-fetched fictional scenario.
Re:er, no... (Score:1, Insightful)
Sadists, racists, sexists, "illegal"-immigran-ists, extreme liberals/conservatives (commies/nazis) can all nonetheless be very cogent.
Re:I was about to agree with you, then... (Score:3, Insightful)
Does that make them less important?
More importantly... what does this have to do with whether or not ethics should be applied to non-human intelligence? If the reason that a human, a dog, a tree, and a rock have different 'levels' of rights is their different levels of intelligence, then why would something (non-human) with a roughly equal (or even higher) level of intelligence have a different set of rights?
Re:seriously, why does anyone care? (Score:3, Insightful)
Re:Will the next step be "robot rights"? (Score:3, Insightful)
Re:Will the next step be "robot rights"? (Score:4, Insightful)
To me, that's a pretty invalid argument. Why should I give a rat's ass what the universe's "purpose", if any, is for me? I only care what purpose I give myself. Even should the universe have a purpose for me and I learn what that purpose is, if it turns out to be something that I don't agree with, too bad for the universe. And yes, if God exists this applies to Him/Her/It as well.
I remember a short sci-fi story, I think it was in Clifford Simak's "Strangers in the Universe" collection. In it, humans on some planet spent millennia building a computer that could answer any question. The questions were "What is the purpose of the Universe?" and "What is the meaning of life?" The answers were something like "The Universe has no purpose, the Universe just happened" and "Life has no meaning, life is an accident." After learning this, the humans abandoned all their technology and settled into an Amish-like lifestyle. Which, as far as I'm concerned, is fine if that's what they really wanted to do, but I see no reason why people should give up just because something higher than them doesn't assign them a purpose.
In other words, if God exists and he created humanity for no purpose other than to have someone to worship Him, would you accept that purpose, or would you attempt to make your life have a meaning beyond that?
Does anyone here believe in a soul? (Score:1, Insightful)
Robots won't have souls even if they have AI. Abusing a robot will be like abusing a car -- stupid and expensive, but not cause for punishment or new laws.
Does anyone else think it's funny that they got a Science FICTION writer to help them? Emphasizing the word FICTION...
"Last year a UK government study predicted that in the next 50 years robots could demand the same rights as human beings". LOL
Re:View from the other perspecive.... (Score:2, Insightful)
Formatted.... Sorry. :(
A couple of the replies I've read through have taken a very "it's not biological, so it's not sentient/conscious/etc." attitude towards all this. Firstly, I'd like to ask: what defines something as conscious/sentient? Is it the ability to react with an environment? An understanding of self? We can't even agree on what is true artificial intelligence, let alone what is considered artificial emotion (and to some, they're one and the same).
But just to entertain the idea: I would like for some of those people who are standing on the side of "they'll never be like us" to consider this...
Say we took a human; something we can all agree is both sentient and conscious. At what point through modification (be it anything from something as simple as hearing aids and eyeglasses to neurological implants and cybernetics) is the human no longer sentient or conscious? Because the chassis started from carbon as opposed to silicon, does that make it a true entity?
We are less than a decade away (if we're not already in the entrance) from the nanotechnology and biological engineering revolution. Theoretically, our childhood immunizations could be a single syringe filled with nanomachines that act as white blood cells. Vaccinations would be obsolete because we wouldn't have to weaken the virus for them to fight it. As we slept, we could have modules on our nightstands that connect to a hospital's servers and do a full "system" check on us. They could upload any information about possible virus strains, and after a "course of action" was determined to fight them, you would get an email (or whatever notification) saying that there is an update available and to visit your local hospital or clinic (for security reasons, the bedside module wouldn't be able to make changes to your nanomachines). After walking through a metal detector that updates your nanomachines, you're out, on your way, and immunized against all the diseases that could afflict you (or biological warfare outbreaks, which, when the technology first comes out, would more than likely be the main motivator for everyone to upgrade. It got us using RFIDs, right? But that's another debate about a different topic).
Now apply that scenario to the question I posed earlier. Which is just one example of how we could become more integrated into a system.
We are uncomfortable saying a machine could be sentient, but much more uncomfortable saying we could be less.
Re:seriously, why does anyone care? (Score:4, Insightful)
The question is where to invest your resources. Do you simplify your model of a certain feature so that it can be simplified mathematically for more effective computing power, but risk losing the effects caused by what you simplified? Do you do learning, genetic algorithm selection of fixed nets, or take the major computing power hit and try to do both? What biological features, exactly, do you choose to include? Or do you go for an abstract system not based on biology at all, but something that should lend itself to computing better? There are so many possible tradeoffs one can make.
A good example of how a little effect can make a big difference comes from a (Navy?) audio research project several years back that I read about at the time. They had a net for audio processing, designed to detect submarines. Their earlier models had performed very poorly, but their latest had worked incredibly well. What did they change? Just one thing: they modelled the delay for signal propagation between neurons. That one little thing made their net go from performing a fraction as well as a human to performing many times better than a human.
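The delay effect described above is easy to see in a toy model. The sketch below (purely hypothetical numbers, not the actual project's model) sends spikes from two pathways to one threshold unit; only when pathway B's longer axon is modelled as a delay line do the two spikes arrive together and fire the neuron:

```python
from collections import deque

def simulate(model_delay: int, steps: int = 12, threshold: float = 1.5):
    """Return the timesteps at which the output neuron fires.

    Pathway A spikes at t=5; pathway B spikes at t=2 but its axon is
    3 steps "longer". Only if model_delay=3 do the two spikes coincide.
    """
    emitted_a = {5}
    emitted_b = {2}
    pipeline_b = deque([0.0] * model_delay)  # delay line for pathway B
    fired = []
    for t in range(steps):
        # B's spike enters the delay line; what left it `model_delay` steps ago arrives now.
        pipeline_b.append(1.0 if t in emitted_b else 0.0)
        arriving = (1.0 if t in emitted_a else 0.0) + pipeline_b.popleft()
        if arriving >= threshold:  # coincidence detection at the soma
            fired.append(t)
    return fired

print(simulate(0), simulate(3))
# -> [] [5]
```

With no delay modelled, each spike arrives alone and never crosses threshold; with the delay, they coincide and the neuron fires. Timing is information, which is plausibly why adding propagation delay changed the net's performance so drastically.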
Re:Will the next step be "robot rights"? (Score:2, Insightful)
Re:seriously, why does anyone care? (Score:3, Insightful)