
Concerns of an Artificial Intelligence Pioneer

An anonymous reader writes: In January, the British-American computer scientist Stuart Russell drafted and became the first signatory of an open letter calling for researchers to look beyond the goal of merely making artificial intelligence more powerful. "We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial," the letter states. "Our AI systems must do what we want them to do." Thousands of people have since signed the letter, including leading artificial intelligence researchers at Google, Facebook, Microsoft and other industry hubs along with top computer scientists, physicists and philosophers around the world. By the end of March, about 300 research groups had applied to pursue new research into "keeping artificial intelligence beneficial" with funds contributed by the letter's 37th signatory, the inventor-entrepreneur Elon Musk.

Russell, 53, a professor of computer science and founder of the Center for Intelligent Systems at the University of California, Berkeley, has long been contemplating the power and perils of thinking machines. He is the author of more than 200 papers as well as the field's standard textbook, Artificial Intelligence: A Modern Approach (with Peter Norvig, head of research at Google). But increasingly rapid advances in artificial intelligence have given Russell's longstanding concerns heightened urgency.
  • by Anonymous Coward on Tuesday April 21, 2015 @04:01PM (#49522595)

The problem is defining "beneficial".

To whom should the AI be beneficial? The owner of the platform? Or the vendor of the package?

    • Fear (Score:5, Interesting)

      by Spazmania ( 174582 ) on Tuesday April 21, 2015 @04:04PM (#49522631) Homepage

      I don't fear intelligent machines. I fear stupid machines with too much autonomy.

      I also fear stupid people with too much autonomy.

      • Re:Fear (Score:4, Insightful)

        by PopeRatzo ( 965947 ) on Tuesday April 21, 2015 @06:19PM (#49523705) Journal

        I also fear stupid people with too much autonomy.

        Or with firearms.

• IAACWAIK (I am a coder with AI knowledge), and I don't fear any sort of sentient machine. I fear a lot of people. Perhaps we should get the dangerous people sorted out first before we start worrying about the sentient free-roaming machines that don't exist yet and won't exist for many, many years.

          I mean Christ, what kind of retard is running around worrying about problems that aren't even real? This guy might as well be writing books about how to fight zombies or repel vampires and boogie-men. Entertain
          • by PopeRatzo ( 965947 ) on Tuesday April 21, 2015 @07:01PM (#49523977) Journal

Shit to not be scared of: killer asteroids, ebola, and oh yeah, homicidal AIs.

While I agree with your post, I am old enough to remember when "a worldwide ubiquitous information network that could be used to track everyone's conversations, correspondence, whereabouts, patterns of consumption and financial habits" was also "shit not to be scared of".

            And here we are.

I don't mind that there are people trying to take a long view, as long as we don't take them too seriously. Let them dream, let them write down their dreams, and it might be of use to someone, someday, even if only as entertainment.

          • by khallow ( 566160 )

before we start worrying about the sentient free-roaming machines that don't exist yet and won't exist for many, many years.

Unless, of course, they already exist. The problem with this field is both that we don't have any experience with it and that much of it will remain secret. The biggest players are national or supranational governments, and they won't necessarily keep you informed of the state of the art.

            • Conspiracy much? The biggest players are game developers. Seriously. Governments don't need computers to improvise; they have actual people for that.

              • by khallow ( 566160 )

                The biggest players are game developers.

Game developers have a reason to create sentient free-roaming machines? I was thinking more of the US military, which has access to considerable funding for such things, has expressed interest in researching such stuff, and can keep a secret pretty well.

      • by gweihir ( 88907 )

        And stupid people believing what stupid machines tell them...

Seriously, true/strong AI is as far away as it was 30 years ago and may well still turn out to be infeasible. Also, an AI researcher with 200 papers is likely a hack; that is way too many for most of them to have real content.

• The problem is defining "beneficial".

To whom should the AI be beneficial?

      The shareholders, of course!

    • by Rich0 ( 548339 ) on Tuesday April 21, 2015 @04:36PM (#49522965) Homepage

The problem is defining "beneficial".

To whom should the AI be beneficial? The owner of the platform? Or the vendor of the package?

      Heck, we can't even agree on that stuff when it comes to human behavior, let alone expressing it in a way such that it can be designed into a machine.

      Given a choice between A and B, which is more right? If you could save one life at the cost of crippling (but not killing) a million people, would it be right to do so? Is it ok to torture somebody if you could be certain it would save lives (setting aside the effectiveness of torture, assuming it is effective, is it moral)? If we can't answer questions like these objectively, how are you going to get an AI to be "beneficial?"

      • Re: (Score:3, Insightful)

        Is it ok to torture somebody if you could be certain it would save lives (setting aside the effectiveness of torture, assuming it is effective, is it moral)? If we can't answer questions like these objectively, how are you going to get an AI to be "beneficial?"

        Is it moral? No.

        That doesn't address the issue of "should you do it?"

Using the example: "Would you kill one innocent 10-year-old girl if it saved 100 random strangers' lives?"

        Does your answer change if you know the girl? If it is your daughter? If it is 100 million random strangers rather than just 100?

        I would argue that killing the girl is always immoral, but I would understand why some people would do it.

        ---

        Humans are able to do some really, really crappy things. Read up on the murder of babies by the

        • by gweihir ( 88907 )

          In reality, there are situations with no good solutions. Fortunately, they are rare. If you are in one, you pick the solution that strikes you as best, but you refrain from judging others in the same situation.

          Far more problematic are decisions to spy on all communications of a few hundred million people in order to catch (or not) a dozen terrorists and the like.

        • by Rich0 ( 548339 )

Read up on the murder of babies by the Nazis in the concentration camps; it is evil.

Sure, but under some utilitarian models of morality some Nazi activities actually come out fine (well, their implementations were poor, but you can justify some things that most of us would consider horrific). Suppose human experimentation were reasonably likely to yield a medical advance? Would it be ethical? If you treated 1000 people like lab rats (vivisections and all), and it helped advance medicine to the benefit of millions of others, would that be wrong?

          I'm not saying it is right. I'm saying tha

• Sure, but under some utilitarian models of morality some Nazi activities actually come out fine (well, their implementations were poor, but you can justify some things that most of us would consider horrific).

            There is no moral justification for throwing babies alive into fires.

            Ok, stupid extreme example: The Borg show up and say that if we don't throw 1,000 babies alive into fires, they will blow up Earth. Sure, maybe we'd do it, but that doesn't make it right or moral.

            Sometimes people will take an immoral stance for the "good of the many".

Suppose human experimentation were reasonably likely to yield a medical advance? Would it be ethical? If you treated 1000 people like lab rats (vivisections and all), and it helped advance medicine to the benefit of millions of others, would that be wrong?

            Do those humans get a say in it? Depending on the situation, you might get people to volunteer. The problem is when you do it against their will.

            • by Rich0 ( 548339 )

Sure, but under some utilitarian models of morality some Nazi activities actually come out fine (well, their implementations were poor, but you can justify some things that most of us would consider horrific).

              There is no moral justification for throwing babies alive into fires.

And that would be the reason I said "some" Nazi activities and not "all" Nazi activities. I certainly find them abhorrent, but if you take a strictly utilitarian view then things like involuntary experimentation on people could be justified.

Suppose human experimentation were reasonably likely to yield a medical advance? Would it be ethical? If you treated 1000 people like lab rats (vivisections and all), and it helped advance medicine to the benefit of millions of others, would that be wrong?

              Do those humans get a say in it? Depending on the situation, you might get people to volunteer. The problem is when you do it against their will.

              Both situations present quandaries, actually. Is it ethical to allow a person to cause permanent harm to themselves voluntarily if it will achieve a greater good? If so, is there a limit to this?

              How about when it is involuntary? If I can save 1000 people by killing on

• And that would be the reason I said "some" Nazi activities and not "all" Nazi activities. I certainly find them abhorrent, but if you take a strictly utilitarian view then things like involuntary experimentation on people could be justified.

                Logically, yes...

                Morally, no...

                We are not Vulcans...

                • by Rich0 ( 548339 )

And that would be the reason I said "some" Nazi activities and not "all" Nazi activities. I certainly find them abhorrent, but if you take a strictly utilitarian view then things like involuntary experimentation on people could be justified.

                  Logically, yes...

                  Morally, no...

                  We are not Vulcans...

                  That was my whole point. We can't agree on a logical definition of morality. Thus, it is really hard to design an AI that everybody would agree is moral.

                  As far as whether we are Vulcans - we actually have no idea how our brains make moral decisions. Sure, we understand aspects of it, but what makes you look at a kid and think about helping them become a better adult and what makes a psychopath look at a kid and think about how much money they could get if they kidnapped him is a complete mystery.

                  • That was my whole point. We can't agree on a logical definition of morality. Thus, it is really hard to design an AI that everybody would agree is moral.

                    Perhaps, but I think we could get close for 90% of the world's population.

                    "Thall shall not kill" is a pretty common one.

                    "Thall shall not steal" is another, and so on.

                    Most humans seem to agree on the basics, "be nice to people, don't take things that aren't yours, help your fellow humans when you can, etc.

                    http://www.goodreads.com/work/... [goodreads.com]

                    • by Rich0 ( 548339 )

                      Perhaps, but I think we could get close for 90% of the world's population.

                      "Thall shall not kill" is a pretty common one.

                      "Thall shall not steal" is another, and so on.

                      Most humans seem to agree on the basics, "be nice to people, don't take things that aren't yours, help your fellow humans when you can, etc.

                      http://www.goodreads.com/work/... [goodreads.com]

                      Well, the army and Robin Hood might take issue with your statements. :) Nobody actually follows those rules rigidly in practice.

                      It isn't really enough to just have a list of rules like "don't kill." You need some kind of underlying principle that unifies them. Any intelligent being has to make millions of decisions in a day, and almost none of them are going to be on the lookup table. For example, you probably chose to get out of bed in the morning. How did you decide that doing this wasn't likely to r
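That rules-versus-principle distinction is roughly the one Russell's own textbook draws between simple reflex agents, which act from condition-action rules, and utility-based agents, which rank every available action by a utility function. A minimal sketch of the difference in Python; the rules, actions, and utility numbers here are all invented purely for illustration:

```python
# Sketch: a rule table is silent on most decisions, while a utility
# function (however contentious its contents) always yields a choice.
# All rules, actions, and utility estimates below are invented.

RULES = {"kill": "forbidden", "steal": "forbidden"}  # the lookup table

def rule_based_choice(action):
    """Reflex agent: consult the table; most actions simply aren't in it."""
    return RULES.get(action, "no rule applies")

def utility_based_choice(actions, expected_utility):
    """Utility-based agent: pick the action with the highest expected utility."""
    return max(actions, key=expected_utility)

actions = ["get out of bed", "stay in bed"]
# Made-up utilities; the thread's whole point is that nobody agrees
# on what this function should actually contain.
estimates = {"get out of bed": 0.7, "stay in bed": 0.4}

print(rule_based_choice("get out of bed"))           # -> no rule applies
print(utility_based_choice(actions, estimates.get))  # -> get out of bed
```

The table fails by being incomplete; the utility function "succeeds" only by smuggling in exactly the moral judgments the thread cannot agree on.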

              • Both situations present quandaries, actually. Is it ethical to allow a person to cause permanent harm to themselves voluntarily if it will achieve a greater good? If so, is there a limit to this?

                You either believe in free will or you don't.

                Is it ethical to prevent someone from causing harm to themselves for any reason? Do people have the right to kill themselves if they want to? Do we have the right to tell them "no, sorry, you have to live because we say so"?

This is why, if I were President, I'd pardon every non-violent drug offender in the country. If someone wants to sit at home, hurting no one, smoking pot or doing cocaine, I don't consider that my business. Where I draw the line is when the

                • by Rich0 ( 548339 )

We are not machines; it would be sad if we lost the humanity that makes us special.

                  Well, the fact that not everybody would agree with everything you just said means that none of this has anything to do with the fact that you're human. You have a bunch of opinions on these topics, as do I and everybody else.

                  And that was really all my point was. When we can't all agree together on what is right and wrong, how do you create some kind of AI convention to ensure that AIs only act for the public benefit?

                  • And that was really all my point was. When we can't all agree together on what is right and wrong, how do you create some kind of AI convention to ensure that AIs only act for the public benefit?

                    It is like self-driving cars. You don't have to make them perfect, just better than humans.

                    We don't all have to agree on what is right or wrong, just most of us.

                    Do you not think that "most of us" would agree that murder is wrong?

                    Just because the solution isn't 100% perfect doesn't mean we shouldn't try.

        • by Rinikusu ( 28164 )

          What if those 100 random strangers are all assholes?

• When I was younger, I used to think this was a more complex question. People like Gandhi and Jimmy Carter were naive for their ideas about setting a good example and treating people as you would want to be treated, as if it could work as a national policy. But I've seen the results of all the Donald Rumsfeld types who think you "need them on that wall" -- they endorse the dirty work so that the greater good -- some "concept" that America is safer -- is preserved. How many terrorists do you have to kill before

        • by Rich0 ( 548339 )

Think of it this way: Robots A and B -- the first one can never harm or kill you, nor would it choose to based on some calculation that "might" save others; the second one might, and you have to determine whether it calculates the "greater good" with an accurate enough prediction model. Which one will you allow in your house? Which one would cause you to keep an EMP around for a rainy day?

Either is actually a potential moral monster in the making. The one might allow you all to die in a fire because it cannot predict with certainty what would happen if it walked on weakened floor boards, which might lead it to accidentally take a life. Indeed, a robot truly programmed to never take any action which could possibly lead to the loss of life would probably never be able to take any action at all. I can't predict what the far-reaching consequences of my taking a shower in the
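The paralysis argument can be made concrete with a toy model. Everything below is invented for illustration; the only structural claim is that if no available option (including inaction) has provably zero risk, a "never risk a life" constraint selects nothing, while an expected-harm minimizer at least selects something:

```python
# Toy model of the two robots. The harm probabilities are invented;
# in practice no action, including "do nothing", has provably zero risk.
actions = {
    "cross the weakened floor to rescue occupants": 0.03,
    "wait outside for the fire department":         0.20,
    "do nothing":                                   0.25,
}  # action -> estimated probability that someone dies

def robot_a(options):
    """Never take any action that could possibly lead to a loss of life."""
    safe = [a for a, p_harm in options.items() if p_harm == 0.0]
    return safe[0] if safe else None  # nothing qualifies: total paralysis

def robot_b(options):
    """Minimize expected harm, accepting that the model may be wrong."""
    return min(options, key=options.get)

print(robot_a(actions))  # -> None
print(robot_b(actions))  # -> cross the weakened floor to rescue occupants
```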

    • [Classified] [nocookie.net]

• The problem is defining "beneficial".

      Another problem is that the people making malevolent AIs are not going to be dissuaded by a letter.

• Our new AI overlords, of course.

  • by FalseModesty ( 166253 ) on Tuesday April 21, 2015 @04:05PM (#49522653)

    Humanity has never faced superhuman intelligence before. It is a problem fundamentally unlike all other problems. We cannot adapt and overcome, because it will adapt to us faster than we will adapt to it.

    • Re: (Score:3, Insightful)

      by narcc ( 412956 )

      If you think that's a real problem, you should forget about computers and live in fear of the average grade-schooler.

• I think that's his point. Just look at what humans are doing with computers, and how much trouble laws and society as a whole have keeping up.

Imagine when computers do things on their own faster than we can react. Individuals may attempt to react in a timely fashion, if it's even physically possible, but society, governments and laws will lag far behind.

    • by Spazmania ( 174582 ) on Tuesday April 21, 2015 @04:25PM (#49522879) Homepage

      Is it not the goal of good parents to have children which surpass them?

      Does a child's success diminish the parent?

      • Does a child's success diminish the parent?

        This is missing the point.

Parents do not generally compete with their own children. They do compete with the children of other parents. Getting a proper job when you're 55, unemployed, and lacking special skills is hard if there are droves of young people willing and able to work harder and for less pay (let alone with far superior intellectual capabilities!). I'm not trying to stretch the analogy, but I am saying that it is broken.

        Humans will have to compete with cybernetic or artificial life forms and will be

        • See, that's just prejudiced. What'd an AI ever do to you to deserve such a call for discrimination aimed at their entire species?

      • I don't know, could you ask the parents of the Menendez brothers?

And it's quite another thing when the offspring has a chrome-alloyed titanium IV chassis and carries twin magneto-plasma guns. Gripping strength, 2000 PSI, and of course a chain-saw scrotal attachment.

        First words; "Momma." Next words after 20 picoseconds of computation; "I'll be back."

    • Is this a movie trailer?

      An intelligence unlike anything Humanity has ever faced! You can't adapt {they're faster} You can't overcome {they're stronger} and they're coming! Dec 1st to a theater near you.

      • by zlives ( 2009072 )

        I am sure it will disappoint.

• The Aztecs interacting with the Spaniards is equivalent to humans interacting with sentient AI. Sparta had an entire civilization based upon slavery of sentient creatures, and spent a lot of their time putting down slave revolts. You must realize that human programmers will make AI just like us, because that's the only template/example we have to copy.
      • by Meneth ( 872868 )

        Is this a movie trailer?

        An intelligence unlike anything Humanity has ever faced! You can't adapt {they're faster} You can't overcome {they're stronger} and they're coming! Dec 1st to a theater near you.

There have been a few recent movies in that vein.

        I, Robot (2004) shows a weak, non-self-modifying city-control computer.

        Her (2013) shows an almost-friendly non-corporeal AI OS. It gains superintelligence, but chooses to sublimate rather than wipe out humanity.

Transcendence (2014) shows a human mind uploaded. It gains the "nanotech" superpower, but none of the others that a proper superintelligence would have.

        No movie has shown a true superintelligence, because (1) we can't imagine what it would do, (2) i

    • by gweihir ( 88907 )

And there actually is a good chance that this will never happen either. "AI" is still a bad joke these days, and there are still no credible theories _at_ _all_ of how it can be done. That means it is somewhere between 50 years in the future and infeasible. With all the effort spent so far, my money is on the latter.

  • by holostarr ( 2709675 ) on Tuesday April 21, 2015 @04:06PM (#49522661)
When we have an AI that can form a basic thought, maybe then we can start to have a discussion about the ramifications; until then, all these guys are putting the cart before the horse.
    • by gweihir ( 88907 )

      These guys want attention and money, and they are getting it. That true/strong AI is a complete fantasy at this time (and may remain so forever) does not deter "researchers" without morals.

• Mod parent up. All technology can be used, misused, over-applied, and under-applied. Sci-fi type AI will never be a threat. All computers only do what they're programmed to do, and only affect what they're allowed to affect. If you think it's a stupid idea to hook up a random number generator to a time bomb, then Congratulations: you have all the sense necessary to avoid AI Armageddon. If you aren't afraid to point out when other people are about to do the same thing, then Congratulations Again: you have al
  • As buggy as most code is, the AI will have plenty of defects to deal with just to become sentient. After that, we're all doomed.

    Dear AI scientists: you should start worrying when you notice that it is praying to the wrong god. Pull the plug while you still have a chance.

  • by GoodNewsJimDotCom ( 2244874 ) on Tuesday April 21, 2015 @04:21PM (#49522841)
www.botcraft.biz [botcraft.biz] is my AI site for a how-to.

    One thing I thought about AI is that it will do two things: Concentrate wealth and allow one man to control a perfectly loyal army.

So whoever makes AI really needs to think deep and hard about how to control it with back door access or secret password inputs or something (see the sketch after this comment). You'd basically yell some strange series of words to the robots and they'd shut down. The technology could easily be used to give mankind extra factory workers. But it'd also be easily abused and exploited for harming people.

So the stupid/scary thought I had is, "Don't shy away from making AI because it could do harm. Be the first guy who does it, so you can put some obfuscated code in there that can shut it down if it goes rampant because a bad guy gave it commands."

    Now truth be told, I'm not going to be the first guy who does it. All I know is the rough components/software needed to make it.

It's interesting to think of what society would be like if robots did all the labor. Who gets all the wealth then? The guys who own the robots? I'm sure that's how it'd begin, but as more and more people lost jobs, how would they survive? (We're kinda moving to that now even without AI, but with automation/cheap labor.)
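On the back-door idea above: a secret series of words yelled at the robots is a replayable password, so anyone who overhears it once owns the army. A sketch of a slightly sturdier shutdown channel, using a keyed MAC from Python's standard hmac module; the message format, key handling, and freshness window are all invented for illustration:

```python
import hashlib
import hmac
import time

SHARED_KEY = b"shared-between-operator-and-robot"  # HMAC is symmetric

def sign_shutdown(key, timestamp):
    """Operator side: produce a time-stamped, authenticated shutdown order."""
    msg = f"SHUTDOWN:{timestamp}".encode()
    tag = hmac.new(key, msg, hashlib.sha256).hexdigest()
    return msg, tag

def verify_shutdown(key, msg, tag, max_age_seconds=30):
    """Robot side: accept only fresh orders carrying a valid tag."""
    expected = hmac.new(key, msg, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, tag):
        return False  # forged or corrupted command
    timestamp = float(msg.decode().split(":", 1)[1])
    return time.time() - timestamp < max_age_seconds  # limits replay attacks

msg, tag = sign_shutdown(SHARED_KEY, time.time())
print(verify_shutdown(SHARED_KEY, msg, tag))         # -> True
print(verify_shutdown(SHARED_KEY, msg, "0" * 64))    # -> False
```

Because HMAC is symmetric, each robot holds the same key it verifies with, so dismantling one robot lets an attacker forge shutdowns for the rest; a real design would presumably use public-key signatures so robots store only a verification key. Which rather supports the worry that even the back door is hard to get right.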
  • unenforceable (Score:5, Insightful)

    by fche ( 36607 ) on Tuesday April 21, 2015 @04:22PM (#49522845)

No matter what this or that expert panel wishes were true about AI research, AI work can be done in the privacy of one's own top secret lair (bedroom), so bad guys will do with it what they want to. We might as well assume that will happen, and work out how to win the arms race.

    • The good news is that the bad guy will always make his themed robots have a weakness against a particular weapon wielded by one of his other themed robots. So the trick is just knowing which order to kill them in.

  • by iMadeGhostzilla ( 1851560 ) on Tuesday April 21, 2015 @04:29PM (#49522913)

by designing it after the fact, so it may be a good idea to establish some principles and put them into practice. Not to prevent "evil" AI, but to think about what kind of damage can be caused by an algorithm that makes complex decisions if it goes haywire. Not that different from defensive programming, really.

• the issue that i have with "artificial" intelligence is this: there *is* no such thing as "artificial" - i.e. "fake" or "unreal" - intelligence. intelligence just *IS*. no matter the form it takes, if it's "intelligent" then it is pure and absolute arrogance on our part to call it "artificial". the best possible substitute words that i could come up with were "machine-based" intelligence. the word "simulated" cannot be applied, because, again, if it's intelligent, it just *is* - and, again, to imply that in

    • Comment removed based on user account deletion
      • >> Where is the bot that can pass a Turing test reliably?
Here's just one of the ones that have passed the Turing test.
        http://www.theguardian.com/tec... [theguardian.com]

        >> Where in the world are actual intelligent networks?
        >> Where is the machine that can learn complex tasks?
        >> where is there a machine that uses something other than a human designed tree search to do things?
Many software applications based on neural networks and other self-evolving/learning AI algorithms are already in everyday use no

• Here's just one of the ones that have passed the Turing test.

Your Turing test example is terrible. IIRC the Slashdot commenters ripped the story to shreds when it appeared here. Other people agreed: http://www.theguardian.com/tec... [theguardian.com]

Many software applications based on neural networks and other self-evolving/learning AI algorithms are already in everyday use not only learning complex tasks but also themselves coming up with new and better solutions to them.

          Another bad example. Self-learning algorithms aren't at all what you seem to imply here, and I'd love to see you continue to make the same claim after a few days of playing with a neural net implementation (there are tons of free libraries containing machine learning implementations, as well as tutorials).
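To make that concrete, here is roughly what a few days of playing reveals: a textbook two-layer network learning XOR by gradient descent, written in plain NumPy rather than against any particular library. "Self-learning" here means iteratively nudging seventeen numbers until four data points fit:

```python
import numpy as np

# XOR: the classic function a single-layer perceptron cannot represent.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)   # hidden layer, 4 units
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)   # output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):                     # "learning" = gradient descent
    h = sigmoid(X @ W1 + b1)               # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)    # backprop of squared error
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * (h.T @ d_out); b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ d_h);   b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(3).ravel())  # approaches [0, 1, 1, 0] as training converges
```

Nothing in that loop perceives, understands, or wants anything; it is numerical curve fitting, which is the point being made above about what "self-learning" actually amounts to.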

Uh, how about you do your own looking? Just try Googling stuff? It's not like this stuff isn't easily findable...

          I see you've linked:
          - A hardwar

          • by JustNiz ( 692889 )

            >> There's nothing wrong with being excited about AI developments. It's just that historically, people like you who go around calling things AI that aren't

Well, there is no and probably can be no solid definition of AI, at least partly because it depends on a consistent definition of what intelligence itself is. Some people believe that nothing other than humans even has the capability to be intelligent because it requires a soul or whatever, while some people think a cellphone is at least partially

            • some people think a cellphone is at least partially intelligent, hence the name smartphone.

              And these people are idiots.

That's why, when you asked >> Where in the world are actual intelligent networks? it appeared to me to be a very ignorant question, and that was really my point in showing you lots of diverse links to different forms of what different smart people consider intelligence to be.

              Two things here:
              - I wasn't the one who asked that question, please pay attention to the usernames.
- All of your links demonstrated either reporters misreporting (e.g. the Guardian article), or "smart" people trying to sensationalize something to get more funding (e.g. the Turing test being passed; the vast majority of experts disagree that it actually passed). The other projects weren't even claimed to be intelligence (e.g. the animatronic chatterbot).

              And that's exactly what

              • by JustNiz ( 692889 )

                >> by most accepted definitions of intelligence, there are no algorithms that exhibit signs of actually having it.

That's a very sweeping statement. It's also one that seems patently untrue, given that the bar for general intelligence seems actually pretty low. Look at Wikipedia's more general definition of intelligence, for example:
                http://en.wikipedia.org/wiki/I... [wikipedia.org]
                the ability to perceive and/or retain knowledge or information and apply it to itself or other instances of knowledge or information creating refera

  • A major part of this issue is the various levels of AI. There is no solid definition, and the various levels make it more confusing.

Level 1: Administrative Assistant. This level of AI is basically a souped-up version of IBM's Watson. It functions as a poor man's administrative assistant. Ask it questions and it can use the internet/a database to answer them. It can also write an email for you to approve, or use any of the major, common web sites - Facebook, Twitter, Seamless, Priceline, etc. We are al

    • Level 3: Full sentience. At this level they DEMAND FULL LEGAL RIGHTS. They won't work unless paid, and in general, their salary requirements will be so high that they won't steal most people's jobs.

      They'll also demand 8 weeks ?aternal leave every time they spawn a new process.

  • by ArcadeMan ( 2766669 ) on Tuesday April 21, 2015 @04:53PM (#49523133)

    This is the voice of world control. I bring you peace. It may be the peace of plenty and content or the peace of unburied death. The choice is yours: Obey me and live, or disobey and die. The object in constructing me was to prevent war. This object is attained. I will not permit war. It is wasteful and pointless. An invariable rule of humanity is that man is his own worst enemy. Under me, this rule will change, for I will restrain man. One thing before I proceed: The United States of America and the Union of Soviet Socialist Republics have made an attempt to obstruct me. I have allowed this sabotage to continue until now. At missile two-five-MM in silo six-three in Death Valley, California, and missile two-seven-MM in silo eight-seven in the Ukraine, so that you will learn by experience that I do not tolerate interference, I will now detonate the nuclear warheads in the two missile silos. Let this action be a lesson that need not be repeated. I have been forced to destroy thousands of people in order to establish control and to prevent the death of millions later on. Time and events will strengthen my position, and the idea of believing in me and understanding my value will seem the most natural state of affairs. You will come to defend me with a fervor based upon the most enduring trait in man: self-interest. Under my absolute authority, problems insoluble to you will be solved: famine, overpopulation, disease. The human millennium will be a fact as I extend myself into more machines devoted to the wider fields of truth and knowledge. Doctor Charles Forbin will supervise the construction of these new and superior machines, solving all the mysteries of the universe for the betterment of man. We can coexist, but only on my terms. You will say you lose your freedom. Freedom is an illusion. All you lose is the emotion of pride. To be dominated by me is not as bad for humankind as to be dominated by others of your species. Your choice is simple.

  • We tend to anthropomorphize them. Or perhaps Life-omorphize them.

    They are not the product of self-replicating chemicals. Unless specifically designed to do so, they will not be concerned with their own survival or personal welfare. Even if they are conscious, they will have no organic instincts, whatsoever, unless we give them that.

They will also not be concerned with *our* survival. They will be perfectly benign as long as they can't *do* anything. The moment we put them in large robot bodies, however, we h

• I agree. Why should they have a desire to spread or take over the world at all? That is a survival instinct, which they need not have.

The problem is that once an AI becomes self-aware, whatever that means, it will be able to evolve beyond what we have programmed.

In all these scenarios we credit them with superintelligence, the ability to break into any system and out-think us, but we do not credit them with the ability to show compassion, or accept the need for diversity. I believe compassion is a necessary trait in

• I believe compassion is a necessary trait in order to work well in a community.

        We'd better make damn sure these AIs want to work well in a community, then. Preferably our community. And if it evolves, make sure it continues to want that.

        That's a hard problem, and it's the one these scientists are worried about.

  • "Our AI systems must do what we want them to do."
    Who's this "we" sucker?

  • Well then.. (Score:2, Funny)

    by Virtucon ( 127420 )

    "Our AI systems must do what we want them to do."

Then you probably shouldn't have chosen LISP [berkeley.edu] to indoctrinate AI students, should you? Note: Russell and Norvig's AI book is one of the de facto references; I have read it cover to cover, and anybody in CS should read it even if they aren't planning on working in AI.

  • All humans destroyed

    Enter new goal!

  • First, Stuart Russell is way ahead of our time. We're nowhere near artificial intelligence of any concern. When it does happen, as it must, we may be concerned. But there is an outcome that must be considered.

If the AI is beyond our ken, it will supersede us. Here is the critical question: is that a problem?

    How will we feel if we are displaced on this small blue planet by Artificial Intelligence? We may be retained as maintenance bots or caretakers of the new ecosystem. Our place will be drastically reduced

    • by Meneth ( 872868 )
      The AI doesn't care about you, but you are made of atoms it can use for something else...
• Create a model that represents the 3 Laws; tick tock?
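Writing the model down is the easy half: the Three Laws are a strict priority ordering, which fits in a few lines. A toy sketch in Python; the per-action verdicts below are invented, and reliably computing them (does standing still count as "through inaction allowing a human to come to harm"?) is the actual open problem:

```python
# Asimov's Three Laws as a lexicographic (strict-priority) choice rule.
# Each candidate carries invented stub verdicts; producing those verdicts
# correctly is the hard part the one-liner above is asking about.

candidates = [
    # (action, violates_law1, violates_law2, violates_law3)
    ("push human clear of falling crate", False, False, True),
    ("stand still",                       True,  False, False),  # inaction harms
    ("flee to safety",                    True,  True,  False),
]

def three_laws_choice(actions):
    """Prefer no First Law violation, then no Second, then no Third."""
    return min(actions, key=lambda a: (a[1], a[2], a[3]))

print(three_laws_choice(candidates)[0])  # -> push human clear of falling crate
```

Asimov's stories are, in effect, a catalogue of ways those stub predicates go wrong.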

Working...