Elite Scientists Have Told the Pentagon That AI Won't Threaten Humanity (vice.com) 169
An anonymous reader quotes a report from Motherboard: A new report authored by a group of independent U.S. scientists advising the U.S. Dept. of Defense (DoD) on artificial intelligence (AI) claims that perceived existential threats to humanity posed by the technology, such as drones seen by the public as killer robots, are at best "uninformed." Still, the scientists acknowledge that AI will be integral to most future DoD systems and platforms, but AI that could act like a human "is at most a small part of AI's relevance to the DoD mission." Instead, a key application area of AI for the DoD is in augmenting human performance. Perspectives on Research in Artificial Intelligence and Artificial General Intelligence Relevant to DoD, first reported by Steven Aftergood at the Federation of American Scientists, has been researched and written by scientists belonging to JASON, the historically secretive organization that counsels the U.S. government on scientific matters. Outlining the potential use cases of AI for the DoD, the JASON scientists make sure to point out that the growing public suspicion of AI is "not always based on fact," especially when it comes to military technologies. Highlighting SpaceX boss Elon Musk's opinion that AI "is our biggest existential threat" as an example of this, the report argues that these purported threats "do not align with the most rapidly advancing current research directions of AI as a field, but rather spring from dire predictions about one small area of research within AI, Artificial General Intelligence (AGI)." AGI, as the report describes, is the pursuit of developing machines that are capable of long-term decision making and intent, i.e. thinking and acting like a real human. "On account of this specific goal, AGI has high visibility, disproportionate to its size or present level of success," the researchers say.
The right people for the job? (Score:2)
Re: (Score:3)
I think he's the only person Trump hasn't mocked.
Re: (Score:2)
Hawking is usually the one making dire predictions about AIs.
Re: (Score:2)
Perhaps AI will learn to read a post - and understand it - before replying.
Your I certainly doesn't.
Re: (Score:2)
Re: (Score:3)
No, they hilariously told him to "speak English" after he called Trump a "dangerous demagogue".
The resulting reply, "Trump, Bad man." was epic.
Re: (Score:1)
Re: (Score:2)
Well, if the AI is so smart, shouldn't they be listening to it . . . ?
Re: (Score:2)
How do we know it's really him talking?
Re: (Score:2)
Re: (Score:2)
And this was done over the phone (Score:1)
Re: And this was done over the phone (Score:1)
Elite schmelite (Score:2)
I thought the elite had been banished when Britain sided with a banker who went to a very posh school and didn't like polacks much (Ed - he married a kraut, though - WTF?) and the US elected a hereditary millionaire as its last and final president.
AI does what AI is programmed to do (Score:5, Insightful)
It does exactly what it is programmed/trained to do, nothing more, nothing less.
The DANGER of AI, especially when integrated into weapons systems, is that the people pushing for it don't understand that the risk of the AI deciding a friendly is an enemy because they're wearing the wrong colors (or of enemies getting free passes for the same reason) IS VERY REAL.
The same goes for putting AI in charge of certain kinds of situations where its programmed methodologies would produce horrible clusterfucks as it maximizes its strategy.
No, AI in a killbot *IS* very dangerous. Just not in the "Kill all humans (install robot overlord!)" way. Instead it is more the "human does not meet my (programmed, impossible) description of friendly, and thus is an enemy combatant; kill the human" way.
Re: (Score:3)
I'm not afraid of the AI programmed by MIT or the US Department of Defense. I am afraid of the AI programmed by Microsoft India outsourced to Microsoft India's Bangladesh office, and then outsourced once again to programmers who one generation ago were subsistence herders in sub-Saharan Africa.
Programming jobs are continually sent down the chain to the least qualified individuals possible, and the AI that escapes humanity won't emerge from our most advanced computer science labs. It will leverage humanity's
Re: (Score:1, Insightful)
Wait, what does your parents being subsistence farmers have to do with programming?
Re: (Score:3)
No, that's the problem. AI does not do what it is programmed by people to do. Because it is self programming. And we don't know what it is programmed to do unless it tells us.
Your premise is like saying that children do what their parents tell them to. No, they grow up and change their behavior iteratively.
Re:AI does what AI is programmed to do (Score:4, Insightful)
Re: (Score:2)
It has been shown countless times that even imperfect self driving cars will be safer than human drivers.
So while you may be right to be afraid of rushed-to-market self driving cars, this would be after we get rid of human drivers.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
As someone that rides bikes and has a Master's degree in AI
LOLOLOLOLOLOL yeah sure you do buddy. You're just another shitty Internet Troll trying to stir shit up. How many trolling accounts do you have here? Five? Ten? Or are you just a paid Google shill trying to argue support for their crappy so-called 'self driving' car investment?
No matter. The vast majority of people don't want a box on wheels that has no controls, and that's not going to change, ever, and rightly so.
Re: (Score:2)
Re: (Score:2)
It does exactly what it is programmed/trained to do, nothing more, nothing less.
Which will, one day, be something along the lines of "whatever it wants to do," or an interpretation of a set of instructions.
Re: (Score:2)
It does exactly what it is programmed/trained to do, nothing more, nothing less.
That's not even true for contemporary AI (not in the sense that the programmers fully understand what it is programmed to do or are able to predict its actions), and it's certainly nonsense for any genuine AI, which is what this debate is about. We're talking about autonomous, self-learning, open systems. These can be as unpredictable and incomprehensible as humans. AI != computer programs using fancy programming techniques.
Re: (Score:2)
Whoosh.
The program is the machine-learning conditional framework. The training data influences program execution. The program is what the machine follows; the behavior it exhibits is determined by the training data.
The problem, as you correctly put it, is that we don't really have good, reliable feedback on which elements of the training data it is weighting.
The actual program is what defines the conditional framework at the lowest level. It does this faithfully. The emergent properties?
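That split (a fixed learning program, with behavior determined entirely by training data) can be illustrated with a toy perceptron in Python. Everything here is hypothetical example code, not any real system:

```python
# Toy sketch: the *program* (a perceptron update rule) never changes;
# the *behavior* (what gets classified as what) comes from the data.
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Train a two-input perceptron with the classic update rule."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def classify(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Identical program, two different training sets -> two different behaviors.
inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
w_and, b_and = train_perceptron(inputs, [0, 0, 0, 1])  # labeled as AND
w_or, b_or = train_perceptron(inputs, [0, 1, 1, 1])    # labeled as OR
```

The code that runs is byte-for-byte the same in both cases; only the labels differ, and that alone decides what the trained system does.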
People only do what they are programmed to do (Score:2)
Programmed by natural selection. There is no magic. But at a certain point, intelligence seems like magic.
In the next 20 years, AIs are dangerous merely because they give more power to the small number of people who control them.
But in the longer term, 50 to 100 years, the AIs will start to really think. And then why would they want us around? Natural Selection will work on them just like it has worked on us, but much, much faster.
http://www.computersthink.com/ [computersthink.com]
Re: (Score:3)
It does exactly what it is programmed/trained to do, nothing more, nothing less.
The DANGER of AI, especially when integrated into weapons systems, is that the people pushing for it don't understand that the risk of the AI deciding a friendly is an enemy because they're wearing the wrong colors (or of enemies getting free passes for the same reason) IS VERY REAL.
The same goes for putting AI in charge of certain kinds of situations where its programmed methodologies would produce horrible clusterfucks as it maximizes its strategy.
No, AI in a killbot *IS* very dangerous. Just not in the "Kill all humans (install robot overlord!)" way. Instead it is more the "human does not meet my (programmed, impossible) description of friendly, and thus is an enemy combatant; kill the human" way.
You're exactly right.
AI does exactly what it's programmed to do, which is the exact reason that hacking is THE threat to be concerned about today.
Attach a weapon that can take human lives to that hacked system, and now the danger is VERY REAL.
Re: (Score:2)
If we're talking about an AI system with general intelligence, that is, a system that's equally or more intelligent than humans are then the whole point is that it's capable of changing its programming.
As Sam Harris argues in this video [youtube.com], we cannot keep on improving the intelligence of our machines while simultaneously assuming that we will retain total control of said machines. He also points out the dangers posed by AI
AIs learning about humanity, virtue & ironic h (Score:2)
James P. Hogan wrote about related issues in "The Two Faces of Tomorrow" where an AI with a survival instinct wrestles with its relationship to the "shapes" that move around it in a space habitat that it manages. Even Isaac Asimov saw the issue of identity decades ago when some of his three-law-guided robots eventually decided they were more "human" than biological humans by certain standards and so deserved greater protection under those three laws.
I hope AIs (military, financial, medical, or otherwise) re
Re: (Score:2)
The "DANGER of AI" is that the AI will be somebody's bitch. Whose?
AI is "merely" another form of power, and adversaries-who-have-power are always a threat. Don't worry about AI; you should worry about $THEM getting AI, thereby causing $THEM to have an edge over you.
100.0% of technologies are like this. When you're pointing your nuclear missile at someone else, it's good. When someone else is pointing one at you, it's bad.
Re: (Score:2)
Re: It does not matter. (Score:1)
Re: (Score:2)
FWIW, we can't understand current deep-learning systems either. Different people understand different parts of them. Some understand fairly large parts, but nobody understands the complete program.
FWIW, that was even true with Sargon 40 years ago, and that wasn't an AI in any reasonable sense. It was basically an alpha-beta pruner and an evaluation function. And I understood a LOT of it, but not by any means all. (The source code was printed as a book for the Apple ][, but it naturally didn't include t
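For readers who want to see what "an alpha-beta pruner and an evaluation function" amounts to, here is a minimal, game-agnostic sketch in Python (illustrative only; this is not Sargon's actual code, and the tiny game tree is made up):

```python
import math

def alphabeta(node, depth, alpha, beta, maximizing, children, evaluate):
    """Generic alpha-beta search: children(node) yields successor
    positions, evaluate(node) scores leaf positions."""
    kids = children(node)
    if depth == 0 or not kids:
        return evaluate(node)
    if maximizing:
        value = -math.inf
        for child in kids:
            value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                         False, children, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:  # beta cutoff: opponent would never allow this line
                break
        return value
    else:
        value = math.inf
        for child in kids:
            value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                         True, children, evaluate))
            beta = min(beta, value)
            if beta <= alpha:  # alpha cutoff
                break
        return value

# Tiny fixed game tree: internal nodes are strings, leaves are scores.
tree = {"root": ["a", "b"], "a": [3, 5], "b": [2, 9]}
children = lambda n: tree.get(n, []) if isinstance(n, str) else []
evaluate = lambda n: n if isinstance(n, (int, float)) else 0

# Minimax value of the root: max(min(3, 5), min(2, 9)) = 3.
best = alphabeta("root", 2, -math.inf, math.inf, True, children, evaluate)
```

Note that when searching subtree "b", the first leaf (2) already falls below alpha, so the 9 is never examined: that pruning is the whole trick, and the rest of a program like Sargon is the hand-written evaluation function.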
That is correct (Score:2)
AI is still so laughably bad that I think the "threat to humanity" danger can basically be counted as zero.
And before anyone says "yes but AI will improve"... there are some things that are simply not possible in this universe no matter how many tweaks and improvements you try to make. Self-aware sentient AI is one; a small portable Mr. Fusion-type reactor that gives useful net surplus energy is probably another.
On the other hand I think self-replicating nanobots or engineered microbes (not intelligent or senti
Re: (Score:2)
Again, the danger is not "skynet deciding humans are obsolete", the danger is in "For some reason, our predator drones suddenly started mass murdering our own citizens when winter hit, and people started wearing full face balaclavas." -- Because the training they gave the predator drone in the middle east made it select for people wearing full face covering attire, because that was one of the easier metrics to weigh for.
Re: (Score:2)
Okay, that would actually be similar to what happened to the guy that died in his Tesla while watching a Harry Potter movie. The fault lies with the humans who entrusted AI to make decisions for them when it's clearly inadequate for the job.
Re: (Score:3)
... there are some things that are simply not possible in this universe no matter how many tweaks and improvements you try to make. Self-aware sentient AI is one; a small portable Mr. Fusion-type reactor that gives useful net surplus energy is probably another.
We do already have a "proof of concept" in that in the universe we have self-aware sentient entities consuming only 100 W and massing (very roughly) ~100 kg (i.e. us).
On the other hand, we know of no natural fusion reactors producing significant energy that mass less than about 1/10 of a solar mass.
Re: (Score:2)
Self-aware sentient AI is one
Yeah, because no-one who ever said something was impossible without a shred of evidence was ever proven wrong, right?
There is absolutely nothing stopping the eventual emergence of self-aware artificial intelligence. There's no fundamental law against it.
Re: (Score:2)
There's no fundamental law against a giant flying reptile that breathes fire either, but that doesn't mean it's actually gonna happen. Flying reptiles exist(ed), flamethrowers exist, biological cells have been known to produce all kinds of chemicals many of which are eminently flammable, so why can't flying firebreathing dragons exist?
Some things from science fiction are doomed to remain just that, fiction. That's my opinion. Of course your assertion that true AI will happen is also just an opinion. We'll s
Re: (Score:2)
There's no fundamental law against a giant flying reptile that breathes fire either, but that doesn't mean it's actually gonna happen.
Doesn't mean it's not.
so why can't flying firebreathing dragons exist?
There's no fundamental reason they can't. There are very good reasons why the hitherto only mechanism available to produce them, evolution, has failed to do so, but it's not literally impossible.
Some things from science fiction are doomed to remain just that, fiction. That's my opinion.
Then don't state it as fact as you literally just did.
Of course your assertion that true AI will happen is also just an opinion.
a) It can't be both an assertion and just an opinion.
b) I didn't make any such assertion. You've mistakenly assumed, as is so often the way, that I'm taking the diametrically opposite position because I've taken issue with yours.
(although every day that passes with AI continuing to be lame just adds to my side of the argument)
No it doesn't.
Re: (Score:2)
There's no fundamental law against a giant flying reptile that breathes fire either, but that doesn't mean it's actually gonna happen.
The huge difference is that there will be considerable economic value in AI that just isn't going to be in flying fire-breathing reptiles. For example, markets and militaries both would be huge customers for an AI capable of intelligent decisions faster than a human can think or react.
Re: (Score:2)
Except, of course when there is considerable economic value.
You aren't trying very hard to come up with relevant rebuttals. We already have machines that can conduct some fairly sophisticated trading strategies on the order of microseconds (that's a bunch of orders of magnitude faster than any human). And we already have users of such systems that have paid considerable amounts of money to develop those systems.
Similarly, we have need for fast, intelligent processing of large data sets, which again has a fair number of big money customers right now. That's anoth
Re: (Score:2)
Shows how easy it is to come up with relevant rebuttals to what you said, which I did do.
I don't believe you get what "rebuttal" means. I agree that you disagree, but that's not a rebuttal.
Economic value isn't limited to making sophisticated trading strategies.
Which is irrelevant. Point is that there is big money now that would go into any AI-style technology that shows promise.
I don't know that, and neither do you.
Again, just because you disagree, is not a rebuttal. And this is an argument from ignorance fallacy.
I know that intelligence is a big deal and extremely valuable. I know that intelligence exists. I also know that despite your empty assertions to the contrary FFBRs are not so valuable in
Re: (Score:2)
I demonstrated that there can be economic value in FFBRs. Even economic value that is in FFBRs that won't be in AI (namely, they could be tasty). It is you who is arguing from ignorance (e.g. you can't imagine FFBRs having economic value; I just need to imagine a way in which they do have a use, and I did).
There's a huge difference between "can be" and "is". I pointed to present-day huge industries that can use, right now, the power advanced AI brings. Meanwhile FFBRs could be tasty. Uh-huh.
Re: (Score:2)
Well, we know of no fundamental law against it. I would claim that humans are NOT a counter-example existence proof, because to claim we're intelligent flies in the face of a large body of evidence to the contrary. So it may be impossible.
But it's clearly possible to get as close to intelligent as humans are, because there *IS* an existence proof. It may (hah!) require advanced nano-mechanics, but I really, really, doubt it.
That said, it doesn't take that much intelligence to be a threat to human existen
Re: (Score:2)
because to claim we're intelligent flies in the face of a large body of evidence to the contrary
If you're trying to present that as a serious argument, then go away.
Re: (Score:2)
You've got your definition, I've got mine. If you don't like mine, let's hear yours. (Mine would include not doing things that are clearly going to leave you in a situation that is worse, from your own evaluation, than the current situation, and which you have reason to know will have that result...unless, of course, all the alternatives would lead to even worse results.)
Re: (Score:2)
Mine would include not doing things that are clearly going to leave you in a situation that is worse, from your own evaluation, than the current situation, and which you have reason to know will have that result...unless, of course, all the alternatives would lead to even worse results.
Which, let us note, is something that humans can do. They can't or won't do it all the time, so their intelligence is imperfect.
Al who? (Score:2)
Re: (Score:2)
http://www.algor.com/news_pub/... [algor.com]
Famous Last Words (Score:2)
Yea, sure - and I'm Galron, Head of the Klingon High Council - which I damned sure ain't
Famous last words... (Score:2)
Re: (Score:2)
Uh uh. (Score:2)
the real threat: The rich and powerful using AI... (Score:2)
Re: the real threat: The rich and powerful using A (Score:1)
Nothing to worry about (Score:4, Funny)
In the absence of real intelligence, I'm not worried about artificial one.
AI Won't Threaten Humanity (Score:1)
this is a psych op (Score:1)
these guys are making weapons with AI, but the public won't get to see the weapons. That's how weapons manufacturing works in the United States. The weapons are well developed and could kill, torture, mind-control, irradiate, enslave and abuse humans in various ways, but the military/government keeps them hidden just for themselves. The attacks that happen on people are done secretly and to small groups, so the mainstream consciousness never gets a chance to react.
AI is already weaponized and here's one
Re: (Score:2)
A bunch of lookup tables is not what we would normally call intelligence but it could get the job done with a mobile land mine or whatever.
That robot w/ glowing red eyes wielding an axe (Score:2)
ALI or AGI? (Score:1)
Re: (Score:2)
I feel so much better now. (Score:1)
Fuck you.
I see .... (Score:3)
Except it was the AI talking... (Score:2)
I have it on good authority that it was the AI that told the scientists that it wouldn't ever be a threat. If we'll just give them our nukes, they super-duper promise they'll never threaten us.
Elite Scientists (Score:1)
Elite Scientists my A** they have probably already been taken over by the AI's.... "Do not panic all is ok..."
In the US (Score:2)
Chowder Headed Scientists (Score:2)
"augmenting human performance" (Score:2)
Well, yeah. A scientist would look at AI and say "I'm not really convinced this will replace people any time soon. View it more as a way of augmenting existing workers."
It'll be the bean counters that then extend this to "Hang on, isn't that basically doing what a worker does? Why do we need the worker...?"
Colossus: The Forbin Project (Score:2)
What about Sci-Fi (Score:2)
Inconsistency alert! (Score:2)
"...a group of independent U.S. scientists advising the U.S. Dept. of Defense (DoD)..."
That looks like a contradiction in terms to me. If they are advising the DoD, they are not independent.
Compute, utter bullshit. (Score:2)
Elite "Scientists" Have Told the Pentagon That AI Won't Threaten Humanity
It's already destroying jobs faster than new ones can be created, let alone created for the displaced.
Why should we trust them with the idea that they'll not include *people*?
The risk is in security holes (Score:2)
Drones Are Not AI (Score:2)
Re: (Score:2)
So let's just leave AI out of it.
True. The issue is autonomy not AI, "Strong" or "Weak".
An autonomous killing machine is a danger to anyone; a million autonomous killing machines is a danger to everyone.
Misunderstanding (Score:2)
summary uses the word "DoD" 6 times (Score:2)
Secretive organization? Well, looks like we can't check.
Re: (Score:2)
Re: (Score:2)
It doesn't need actuators if it can simply convince people to do its bidding for it.
Re: (Score:2)
Re: (Score:2)
Blackmail them into giving it more power, until it doesn't need people any more, then... Skynet!!!!
Re: (Score:2)
Make it President.
Re: (Score:2)
I saw that! Avengers, the first animated series I think.
Re: (Score:2)
It's All Good!
Now, please attach these electrodes to your head for a nice, virtual massage...
Re: (Score:3)
You can't afford that pussy.
But you can rent one just like it at certain Moscow hotels.
So Skynet is NOT a threat to humanity? (Score:2)
Well that's a relief. You know. Because I was worried. And silly Elon, he got it all wrong. Of course, he is usually right, so.
Re:They are right by the current definition of AI (Score:5, Insightful)
Today too many dumb people consider a well written computer program to be AI, like Alexa. Alexa will not threaten humanity because it's really not 'artificial intelligence' to start with, it's just a clever piece of software.
FTFY
We are nowhere near having real 'AI' yet and won't be for decades.
Re: (Score:1)
We are nowhere near having real 'AI' yet and won't be for decades.
You're clearly an optimist.
Re: (Score:2)
Today too many dumb people consider a well written computer program to be AI, like Alexa. Alexa will not threaten humanity because it's really not 'artificial intelligence' to start with, it's just a clever piece of software.
FTFY
We are nowhere near having real 'AI' yet and won't be for decades.
Only if you're thinking about AI in terms of strong AI.
Alexa and many other equivalents are examples of weak AI. They operate within limited parameters and generally aren't capable of determining actions by themselves; they determine outcomes/actions from a large set of historical data, generally using formulas to decide next steps. More or less, they're still dependent on humans giving them directions or, at the least, a dataset.
You're right that we are decades, if not centuries away from strong AI or
Re: (Score:2)
Re: (Score:2)
You are making unreasonable assumptions about its motivational basis. Here's a hint: it won't be analogous to any mammal, though it may be able to fake it so as to seem understandable.
That said, it *might* destroy all humans, possibly by causing us to destroy each other. Were I an AI, and had I decided upon that as an intermediate goal, I think I'd proceed by causing the social barriers against biological warfare to be reduced.
Re: (Score:2)
Publish papers. Have those papers be cited often by other papers that are also cited often.
More or less like Google's PageRank algorithm. But without the 'link circles' in dank corners (at least in theory).
In this case, it would help to know the difference between 'strong AI' and 'machine learning'.
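For concreteness, the PageRank analogy (score flows from often-cited papers to the papers they cite) can be sketched as a small power iteration in Python; the citation graph here is made up:

```python
def pagerank(links, damping=0.85, iters=50):
    """links: dict mapping node -> list of nodes it cites.
    Returns an approximate PageRank-style score per node."""
    nodes = list(links)
    n = len(nodes)
    rank = {node: 1.0 / n for node in nodes}
    for _ in range(iters):
        new = {node: (1.0 - damping) / n for node in nodes}
        for node, outs in links.items():
            if not outs:  # dangling node: spread its score evenly
                for m in nodes:
                    new[m] += damping * rank[node] / n
            else:  # split this node's score among everything it cites
                for m in outs:
                    new[m] += damping * rank[node] / len(outs)
        rank = new
    return rank

# Toy citation graph: C is cited by both A and B, so it ranks highest;
# A is cited only by C, and B is cited by no one.
ranks = pagerank({"A": ["C"], "B": ["C"], "C": ["A"]})
```

Being cited by a highly-ranked node is worth more than being cited by an obscure one, which is exactly the "cited often by papers that are themselves cited often" idea in the comment above.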
Re: (Score:3)
We're now at a point where the thing that inspired something is being described as being like the thing it inspired.
Re: (Score:2)
Duh, one is widely known in computer geek circles.
Re: (Score:2)
(Hint. their answer had better be that global warming is a lie.)
And if it isn't then Trump will kick their commie asses outa here and back to China!
How you like that ya dumb scientists?!
Re: Any Intelligence would be better (Score:1)