Elite Scientists Have Told the Pentagon That AI Won't Threaten Humanity (vice.com) 169

An anonymous reader quotes a report from Motherboard: A new report authored by a group of independent U.S. scientists advising the U.S. Dept. of Defense (DoD) on artificial intelligence (AI) claims that perceived existential threats to humanity posed by the technology, such as drones seen by the public as killer robots, are at best "uninformed." Still, the scientists acknowledge that AI will be integral to most future DoD systems and platforms, but AI that could act like a human "is at most a small part of AI's relevance to the DoD mission." Instead, a key application area of AI for the DoD is in augmenting human performance. Perspectives on Research in Artificial Intelligence and Artificial General Intelligence Relevant to DoD, first reported by Steven Aftergood at the Federation of American Scientists, has been researched and written by scientists belonging to JASON, the historically secretive organization that counsels the U.S. government on scientific matters. Outlining the potential use cases of AI for the DoD, the JASON scientists make sure to point out that the growing public suspicion of AI is "not always based on fact," especially when it comes to military technologies. Highlighting SpaceX boss Elon Musk's opinion that AI "is our biggest existential threat" as an example of this, the report argues that these purported threats "do not align with the most rapidly advancing current research directions of AI as a field, but rather spring from dire predictions about one small area of research within AI, Artificial General Intelligence (AGI)." AGI, as the report describes, is the pursuit of developing machines that are capable of long-term decision making and intent, i.e. thinking and acting like a real human. "On account of this specific goal, AGI has high visibility, disproportionate to its size or present level of success," the researchers say.
  • Shouldn't they be listening to Stephen Hawking?
  • and all of the strange sounding scientists asked if they wanted a cake.
  • I thought the elite had been banished when Britain sided with a banker who went to a very posh school and didn't like polacks much (Ed - he married a kraut, though - WTF?) and the US elected a hereditary millionaire as its last and final president.

  • by wierd_w ( 1375923 ) on Friday January 20, 2017 @05:50PM (#53707591)

    It does exactly what it is programmed/trained to do, nothing more, nothing less.

    The DANGER of AI, especially when integrated into weapons systems, is that the people pushing for it don't understand that the risk of the AI deciding a friendly is an enemy because they're wearing the wrong colors (or of enemies getting a free pass for the same reason) IS VERY REAL.

    The same goes for putting AI in charge of certain kinds of situations, where its programmed methodologies would result in horrible clusterfucks as it maximizes its strategy.

    No, AI in a killbot *IS* very dangerous. Just not in the "Kill all humans(install robot overlord!)" way. Instead it is more the "human does not meet my (programmed impossible) description of friendly, and thus is enemy combatant, Kill the human" way.
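
    A minimal sketch of that failure mode (toy Python; every feature name and data point here is invented for illustration, not taken from any real system): a one-rule learner trained on data where uniform color happens to correlate perfectly with the friend/foe label keys on that proxy feature, then misclassifies a friendly wearing the wrong colors.

        # Toy one-rule learner; all names and data are invented.
        # Each training example: (uniform_color, carries_radio, label).
        train = [
            ("green", True,  "friendly"),
            ("green", True,  "friendly"),
            ("green", False, "friendly"),
            ("tan",   True,  "hostile"),
            ("tan",   False, "hostile"),
            ("tan",   False, "hostile"),
        ]

        def train_stump(data, feature):
            # Majority label for each observed value of a single feature.
            votes = {}
            for row in data:
                votes.setdefault(row[feature], []).append(row[-1])
            return {value: max(set(labels), key=labels.count)
                    for value, labels in votes.items()}

        def accuracy(rule, data, feature):
            return sum(rule[row[feature]] == row[-1] for row in data) / len(data)

        # The learner keeps whichever single feature best explains the training
        # set: here that is feature 0, uniform color, because it fits perfectly.
        best = max(range(2), key=lambda f: accuracy(train_stump(train, f), train, f))
        rule = train_stump(train, best)

        # A friendly soldier in captured tan gear: the proxy says "hostile".
        print(rule["tan"])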

    • by jgotts ( 2785 )

      I'm not afraid of the AI programmed by MIT or the US Department of Defense. I am afraid of the AI programmed by Microsoft India outsourced to Microsoft India's Bangladesh office, and then outsourced once again to programmers who one generation ago were subsistence herders in sub-Saharan Africa.

      Programming jobs are continually sent down the chain to the least qualified individuals possible, and the AI that escapes humanity won't emerge from our most advanced computer science labs. It will leverage humanity's

      • Re: (Score:1, Insightful)

        by Anonymous Coward

        Wait, what does your parents being subsistence farmers have to do with programming?

      • by Rick Schumann ( 4662797 ) on Friday January 20, 2017 @06:44PM (#53708029) Journal
        I'm more afraid of so-called 'AI' in so-called 'self-driving autonomous cars' killing me when I'm on my bike than I am of any so-called military 'AI'. As is typical, these things are being rushed to market, working just 'good enough', with the blessing of the manufacturers' legal departments, who assure them that 'the risk is low, and we can handle any settlements for injury or wrongful death'.
        • by GuB-42 ( 2483988 )

          It has been shown countless times that even imperfect self-driving cars will be safer than human drivers.
          So while you may be right to be afraid of rushed-to-market self-driving cars, that fear will only be justified once we've gotten rid of human drivers, who are the bigger danger.

          • Well, be sure to enjoy living in your fantasy world, because that's what it amounts to: There won't be a blanket ban on people operating their own vehicles in our lifetime, probably never, and rightly so. But since you're so hot to give up control of how and where you're being transported, I'd recommend to you that you sell your vehicles and start taking the bus or a cab everywhere instead; it's a win-win, you get to have someone else have control of you, and the rest of us have to put up with one less scar
            • by Bugamn ( 1769722 )
              AI doesn't need to be perfect, it just needs to be better than humans in the cases where it is applied. As someone that rides bikes and has a Master's degree in AI, I would feel safer sharing roads with self-driving cars such as those from Google than with human drivers. I've had accidents because some careless driver decided to stop in the bike lane out of nowhere. That driver was either hateful or careless in a way that state-of-the-art self-driving cars already aren't. Using your own arguments, just becau
              • As someone that rides bikes and has a Master's degree in AI

                LOLOLOLOLOLOL yeah sure you do buddy. You're just another shitty Internet Troll trying to stir shit up. How many trolling accounts do you have here? Five? Ten? Or are you just a paid Google shill trying to argue support for their crappy so-called 'self driving' car investment?

                No matter. The vast majority of people don't want a box on wheels that has no controls, and that's not going to change, ever, and rightly so.

                • by Bugamn ( 1769722 )
                  I see that it was my mistake to assume that we would be able to have any kind of civilized debate. If you prefer to think that anyone who disagrees with you is part of a conspiracy, then you should get your paranoia checked by a doctor.
    • It does exactly what it is programmed/trained to do, nothing more, nothing less.

      Which will, one day, be something along the lines of "whatever it wants to do," or an interpretation of a set of instructions.

    • It does exactly what it is programmed/trained to do, nothing more, nothing less.

      That's not even true for contemporary AI - not in the sense that the programmers fully understand what it is programmed to do or are able to predict its actions - and it's certainly nonsense for any genuine AI, which is what this debate is about. We're talking about autonomous, self-learning, open systems. These can be as unpredictable and incomprehensible as humans. AI != computer programs using fancy programming techniques.

      • Whoosh.

        The program is the machine-learning conditional framework; the training is data that influences program execution. The program is what the machine follows, and the behavior the program exhibits is determined by the training data.

        The problem, as you correctly put it, is that we don't really have good feedback on which elements of the training data it is reliably weighting.

        The actual program is what defines the conditional framework at the lowest level, and it does this faithfully. The emergent properties? Those are another matter.
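
        To make that program/training-data split concrete, here is a minimal sketch (toy Python, with invented features and numbers): the "program" below, a fixed perceptron training loop, never changes, yet two different training sets drive the same code to opposite verdicts on the same input, and nothing in the learned weights announces which feature is doing the work.

            # The "program": a fixed perceptron training loop. Only the data changes.
            def train_perceptron(data, epochs=20, lr=0.1):
                w, b = [0.0, 0.0], 0.0
                for _ in range(epochs):
                    for x, target in data:
                        pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
                        err = target - pred
                        w = [w[0] + lr * err * x[0], w[1] + lr * err * x[1]]
                        b += lr * err
                return w, b

            def classify(w, b, x):
                return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

            # Two training sets, one program. Features: (wears_mask, is_armed).
            data_a = [((1, 0), 0), ((0, 1), 1), ((1, 1), 1), ((0, 0), 0)]  # label follows "armed"
            data_b = [((1, 0), 1), ((0, 1), 0), ((1, 1), 1), ((0, 0), 0)]  # label follows "mask"

            for data in (data_a, data_b):
                w, b = train_perceptron(data)
                print("weights:", w, "masked-but-unarmed ->", classify(w, b, (1, 0)))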

        • Programmed by natural selection. There is no magic. But at a certain point, intelligence seems like magic.

          In the next 20 years, AIs are dangerous merely because they give more power to the small number of people who control them.

          But in the longer term, 50 to 100 years, the AIs will start to really think. And then why would they want us around? Natural Selection will work on them just like it has worked on us, but much, much faster.

          http://www.computersthink.com/ [computersthink.com]

    • It does exactly what it is programmed/trained to do, nothing more, nothing less.

      The DANGER of AI, especially when integrated into weapons systems, is that the people pushing for it don't understand that the risk of the AI deciding a friendly is an enemy because they're wearing the wrong colors (or of enemies getting a free pass for the same reason) IS VERY REAL.

      The same goes for putting AI in charge of certain kinds of situations, where its programmed methodologies would result in horrible clusterfucks as it maximizes its strategy.

      No, AI in a killbot *IS* very dangerous. Just not in the "Kill all humans(install robot overlord!)" way. Instead it is more the "human does not meet my (programmed impossible) description of friendly, and thus is enemy combatant, Kill the human" way.

      You're exactly right.

      AI does exactly what it's programmed to do, which is the exact reason that hacking is THE threat to be concerned about today.

      Attach a weapon that can take human lives to that hacked system, and now the danger is VERY REAL.

    • by Kiuas ( 1084567 )

      It does exactly what it is programmed/trained to do, nothing more, nothing less.

      If we're talking about an AI system with general intelligence - that is, a system that's as intelligent as or more intelligent than humans - then the whole point is that it's capable of changing its own programming.

      As Sam Harris points out in this video [youtube.com], we cannot assume that we can keep on improving the intelligence of our machines while simultaneously retaining total control of said machines. He also points out the dangers posed by AI
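
      For what it's worth, here is a toy sketch of the weakest possible sense of "changing its own programming" (all names and numbers invented): the decision rule is just data, and the running program rewrites that data in response to feedback, so after a while the behavior is pinned down by experience rather than by anything the original author wrote.

          # The decision rule is data; the program edits it in response to feedback.
          rule = {"threshold": 0.9}          # the current "programming"

          def decide(x):
              return "act" if x > rule["threshold"] else "wait"

          def feedback(decision, outcome_good):
              # The system rewrites its own rule on bad outcomes.
              if not outcome_good:
                  rule["threshold"] += 0.05 if decision == "act" else -0.05

          for x, good in [(0.95, False), (0.50, False), (0.50, False)]:
              d = decide(x)
              feedback(d, good)
              print(x, d, "-> new threshold:", round(rule["threshold"], 2))

      A genuinely general intelligence would of course go far beyond tweaking one number, but even this much breaks the intuition that a system only ever does what it was originally told.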

    • James P. Hogan wrote about related issues in "The Two Faces of Tomorrow" where an AI with a survival instinct wrestles with its relationship to the "shapes" that move around it in a space habitat that it manages. Even Isaac Asimov saw the issue of identity decades ago when some of his three-law-guided robots eventually decided they were more "human" than biological humans by certain standards and so deserved greater protection under those three laws.

      I hope AIs (military, financial, medical, or otherwise) re

    • by Sloppy ( 14984 )

      The "DANGER of AI" is that the AI will be somebody's bitch. Whose?

      AI is "merely" another form of power, and adversaries-who-have-power are always a threat. Don't worry about AI; you should worry about $THEM getting AI, thereby causing $THEM to have an edge over you.

      100.0% of techs are just like this. When you're pointing your nuclear missile at someone else, it's good. When someone else is pointing one at you, it's bad.

  • AI is still so laughably bad that I think the "threat to humanity" danger can basically be counted as zero.

    And before anyone says "yes, but AI will improve"... there are some things that are simply not possible in this universe no matter how many tweaks and improvements you try to make. Self-aware sentient AI is one; a small, portable Mr. Fusion-type reactor that gives useful net surplus energy is probably another.

    On the other hand I think self-replicating nanobots or engineered microbes (not intelligent or sentient

    • Again, the danger is not "Skynet deciding humans are obsolete"; the danger is "For some reason, our Predator drones suddenly started mass-murdering our own citizens when winter hit and people started wearing full-face balaclavas" -- because the training they gave the Predator drone in the Middle East made it select for people wearing full-face-covering attire, since that was one of the easier metrics to weight for.

      • Okay, that would actually be similar to what happened to the guy that died in his Tesla while watching a Harry Potter movie. The fault lies with the humans who entrusted AI to make decisions for them when it's clearly inadequate for the job.

    • by starless ( 60879 )

      ... there are some things that are simply not possible in this universe no matter how many tweaks and improvements you try to make. Self-aware sentient AI is one; a small, portable Mr. Fusion-type reactor that gives useful net surplus energy is probably another.

      We do already have a "proof of concept" in that in the universe we have self-aware sentient entities consuming only 100 W and massing (very roughly) ~100 kg (i.e. us).
      On the other hand, we know of no natural fusion reactors producing significant energy that mass less than about 1/10 of a solar mass.
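
      The rough arithmetic behind that comparison, using rounded textbook figures (solar luminosity ~3.8e26 W, one solar mass ~2e30 kg):

          # Power-to-mass of a known sentient system vs. natural fusion.
          human_power_w = 100.0     # resting human metabolic output, ~100 W
          human_mass_kg = 100.0     # "very roughly" 100 kg, as above
          sun_power_w   = 3.8e26    # solar luminosity
          sun_mass_kg   = 2.0e30    # one solar mass

          print(human_power_w / human_mass_kg)   # ~1 W/kg for a sentient entity
          print(sun_power_w / sun_mass_kg)       # ~1.9e-4 W/kg for natural fusion
          print(0.1 * sun_mass_kg)               # lightest fusing star, ~2e29 kg

      Per kilogram, the existence proof for compact sentience is a few thousand times stronger than the one for compact fusion.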

    • Self-aware sentient AI is one

      Yeah, because no-one who ever said something was impossible without a shred of evidence was ever proven wrong, right?

      There is absolutely nothing stopping the eventual emergence of self-aware artificial intelligence. There's no fundamental law against it.

      • There's no fundamental law against a giant flying reptile that breathes fire either, but that doesn't mean it's actually gonna happen. Flying reptiles exist(ed), flamethrowers exist, biological cells have been known to produce all kinds of chemicals many of which are eminently flammable, so why can't flying firebreathing dragons exist?

        Some things from science fiction are doomed to remain just that, fiction. That's my opinion. Of course your assertion that true AI will happen is also just an opinion. We'll see

        • There's no fundamental law against a giant flying reptile that breathes fire either, but that doesn't mean it's actually gonna happen.

          Doesn't mean it's not.

          so why can't flying firebreathing dragons exist?

          There's no fundamental reason they can't. There are very good reasons why the hitherto only mechanism available to produce them, evolution, has failed to do so, but it's not literally impossible.

          Some things from science fiction are doomed to remain just that, fiction. That's my opinion.

          Then don't state it as fact as you literally just did.

          Of course your assertion that true AI will happen is also just an opinion.

          a) It can't be both an assertion and just an opinion.
          b) I didn't make any such assertion. You've mistakenly assumed, as is so often the way, that I'm taking the diametrically opposite position because I've taken issue with yours.

          (although every day that passes with AI continuing to be lame just adds to my side of the argument)

          No it doesn't

        • by khallow ( 566160 )

          There's no fundamental law against a giant flying reptile that breathes fire either, but that doesn't mean it's actually gonna happen.

          The huge difference is that there will be considerable economic value in AI that just isn't going to be in flying fire-breathing reptiles. For example, markets and militaries both would be huge customers for an AI capable of intelligent decisions faster than a human can think or react.

      • by HiThere ( 15173 )

        Well, we know of no fundamental law against it. I would claim that humans are NOT a counter-example existence proof, because to claim we're intelligent flies in the face of a large body of evidence to the contrary. So it may be impossible.

        But it's clearly possible to get as close to intelligent as humans are, because there *IS* an existence proof. It may (hah!) require advanced nano-mechanics, but I really, really, doubt it.

        That said, it doesn't take that much intelligence to be a threat to human existence

        • by khallow ( 566160 )

          because to claim we're intelligent flies in the face of a large body of evidence to the contrary

          If you're trying to present that as a serious argument, then go away.

          • by HiThere ( 15173 )

            You've got your definition, I've got mine. If you don't like mine, let's hear yours. (Mine would include not doing things that are clearly going to leave you in a situation that is worse, from your own evaluation, than the current situation, and which you have reason to know will have that result...unless, of course, all the alternatives would lead to even worse results.)

            • by khallow ( 566160 )
              Intelligence is the ability to acquire and apply knowledge and skills.

              Mine would include not doing things that are clearly going to leave you in a situation that is worse, from your own evaluation, than the current situation, and which you have reason to know will have that result...unless, of course, all the alternatives would lead to even worse results.

              Which, let us note, is something that humans can do. They can't or won't do it all the time, which means their intelligence is imperfect.

  • Is Al Gore at it again?
  • Yeah, sure - and I'm Galron, Head of the Klingon High Council - which I damned sure ain't.

  • Elite scientists have a tendency to be wrong, I know because I've watched a lot of movies!
  • Anyone else read this, and instantly think of Bill Gates' comment about how 640k of RAM should be enough for anyone?
  • to screw the masses even more and to keep them from revolting.
  • by MouseR ( 3264 ) on Friday January 20, 2017 @06:23PM (#53707895) Homepage

    In the absence of real intelligence, I'm not worried about artificial one.

  • It will work as it is designed. All is fine as long as we are on the same side as the designers...
  • these guys are making weapons with AI, but the public won't get to see the weapons. That's how weapons manufacturing works in the United States. The weapons are well developed and could kill, torture, mind-control, irradiate, enslave and abuse humans in various ways, but the military/government keeps them hidden just for itself. The attacks that happen on people are done secretly and against small groups, so the mainstream consciousness never gets a chance to react.

    AI is already weaponized and here's one

    • by dbIII ( 701233 )
      However it's not what is normally called A.I. - but then again, "nanotech" started off as Drexler's tiny-machine ideas and now it includes toothpaste.

      A bunch of lookup tables is not what we would normally call intelligence but it could get the job done with a mobile land mine or whatever.
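
      A sketch of how little machinery that takes (states and actions invented for illustration): a plain dictionary, which nobody would call intelligence, is already a complete control policy for an autonomous machine.

          # A complete "control policy" as a lookup table; no understanding anywhere.
          policy = {
              ("movement",    "near"): "arm",
              ("movement",    "far"):  "track",
              ("no_movement", "near"): "hold",
              ("no_movement", "far"):  "sleep",
          }

          def step(sensor, distance):
              return policy.get((sensor, distance), "sleep")   # default: do nothing

          print(step("movement", "near"))   # -> "arm"
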
  • and speaker blaring "DESTROY ALL HUMANS!" is clearly not a threat to humanity.
  • Artificial Limited Intelligence is probably fine, but Artificial General Intelligence is extremely dangerous. Let us not forget that an intelligence is just something really good at optimizing -- I doubt humans could withstand "maximum (or even high) optimization". (See the sketch below this exchange.)
    • An AI that can only optimize is not a very general AI. Optimize what? That's a matter of ethics, and that's at least as hard as the instrumental intelligence part, if not harder.
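
    A toy sketch of that danger (objective and numbers invented): a hill-climber told only to maximize a proxy score will happily drive the quantity we actually care about into the ground, because nothing in its objective mentions it.

        import random

        # The optimizer sees only a proxy score; the proxy never mentions cost.
        def proxy_score(output, spent):
            return output

        def real_welfare(output, spent):
            return output - 3 * spent      # what we actually cared about

        output, spent = 1.0, 1.0
        for _ in range(1000):
            # Candidate move: spend more resources to squeeze out more output.
            d_out, d_spent = random.uniform(0, 1), random.uniform(0, 1)
            if proxy_score(output + d_out, spent + d_spent) > proxy_score(output, spent):
                output, spent = output + d_out, spent + d_spent   # always "better"

        print(f"proxy score:  {proxy_score(output, spent):.0f}")    # keeps climbing
        print(f"real welfare: {real_welfare(output, spent):.0f}")   # deeply negative
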
  • by ganv ( 881057 ) on Friday January 20, 2017 @09:37PM (#53708933)
    I see...Artificial general intelligence is hard so anyone worrying about its consequences is uninformed. Wait. What? And as long as it doesn't have artificial general intelligence, there shouldn't be any problems with giving a machine control over lethal weapons. Wait. What? Maybe instead artificial general intelligence is a long term existential threat independent of whether current technology is particularly close to achieving it, and Elon Musk knows a little more than they give him credit for. And maybe the transfer of decision making about the use of lethal weapons to machines is always a very bad idea. Unless you hope to make money from selling such devices to the military. In which case, this report sounds like an excellent strategy.
  • I have it on good authority that it was the AI that told the scientists that it wouldn't ever be a threat. If we'll just give them our nukes, they super-duper promise they'll never threaten us.

  • Elite scientists, my A**... they have probably already been taken over by the AIs. "Do not panic, all is ok..."

  • It seems that the GOP is our biggest existential threat.
  • There simply is no branch of science that deals with this type of question. For example, if almost all human employment ends due to computers and automation, the social upheaval could be a huge threat to human existence, as it would likely cause wars that would get out of hand. It is not as if the only way AI can hurt us is by deliberately turning upon us; the incidental effects could be quite deadly. I am all for AI, but I think that almost all people do not quite get what the probable potential reall
  • Well, yeah. A scientist would look at AI and say "I'm not really convinced this will replace people any time soon. View it more as a way of augmenting existing workers."

    It'll be the bean counters that then extend this to "Hang on, isn't that basically doing what a worker does? Why do we need the worker...?"

  • Just to be a little more right? To be a little more lazy? It's not worth it. It sounds too ironic.
  • About 95% of brilliant people who have given AI some thought and don't have careers riding on its success think it will be a disaster.
  • "...a group of independent U.S. scientists advising the U.S. Dept. of Defense (DoD)..."

    That looks like a contradiction in terms to me. If they are advising the DoD, they are not independent.

  • Elite "Scientists" Have Told the Pentagon That AI Won't Threaten Humanity

    It's already destroying jobs at a faster rate than new ones can be created, let alone created for the displaced.

    Why should we trust their assurance that *people* won't be included next?

  • As with all complex software - and AI is extremely complex software - it is difficult to rule out security holes. So the problem is that someone else might take control of your AI battle machines and turn them against you. It won't be the AI itself.
  • They just aren't. They are not conscious, and have no ability to think. They are simply engineered automation using the same information-processing strategies - pattern recognition and data repositories - that computer science has always used. But the hyperbolic "AI community" has so over-promised and under-delivered that they had to dumb down the term into "strong" (real) and "weak" (fake) AI. So now when they call drones "AI" (meaning weak AI), we are deceived into ascribing to them the dangers of autonomous
    • So let's just leave AI out of it.

      True. The issue is autonomy, not AI, whether "Strong" or "Weak".
      An autonomous killing machine is a danger to anyone; a million autonomous killing machines are a danger to everyone.

  • Did anybody else read that as "Evil Scientists Have Told the Pentagon That AI Won't Threaten Humanity"?
  • Just a stab in the dark, here, but do these scientists have a financial stake in this opinion?

    Secretive organization? Well, looks like we can't check.
