AI Science

Elon Musk: The Danger of AI is Much Greater Than Nuclear Warheads. We Need Regulatory Oversight Of AI Development. (youtube.com) 322

Elon Musk has been vocal about the need for regulation of AI in the past. At SXSW on Sunday, Musk, 46, elaborated on his thoughts. We're very close to seeing cutting-edge AI technologies, Musk said. "It scares the hell out of me," the Tesla and SpaceX chief said. He cited the example of AlphaGo and AlphaZero, and the rate of advancement they have shown, to illustrate his point. He said: AlphaZero can read the rules of any game and beat the human. For any game. Nobody expected that rate of improvement. If you ask those same experts who think AI is not progressing at the rate that I'm saying, I think you will find their betting average for things like Go and other AI advancements is very weak. It's not good.

We will also see this with self-driving. Probably by next year, self-driving will encompass all forms of driving. By the end of next year, it will be at least 100 percent safer than humans. [...] The rate of improvement is really dramatic and we have to figure out some way to ensure that the advent of digital superintelligence is symbiotic with humanity. I think that's the single biggest existential crisis we face, and the most pressing one. I'm not generally an advocate of regulation -- I'm actually usually on the side of minimizing those things. But this is a case where you have a very serious danger to the public. There needs to be a public body that has insight and oversight to ensure that everyone is developing AI safely. This is extremely important. The danger of AI is much greater than the danger of nuclear warheads. By a lot.

  • by burtosis ( 1124179 ) on Sunday March 11, 2018 @02:39PM (#56243307)
    Well Musk, it's a really good thing we have been working hard on consumer protections, ensuring privacy, and sensibly regulating banks and company mergers, and are finally enjoying a fiscally responsible government. This should be a cakewalk! (As in: let them eat cake.)
  • We can't even keep Trump and Ryan out of positions of outrageous power; something our way of government was SUPPOSED to have multiple layers of safeguards AGAINST. How can we expect to keep unfettered AI (which is already showing sociopathic tendencies, even in its infancy) from doing whatever it feels like when it not only holds the keys to everything, but is made of the same stuff the keys are made of, and so has the blueprint for making any key it wants?
    • by Dog-Cow ( 21281 )

      Until we can learn to program actual empathy, all programs, AI included, will be sociopathic.

    • Paul Ryan isn't a sociopath by any stretch of the imagination. He's simply an intelligent person who probably means well most of the time but has very different goals from mine. Don't conflate him with someone like Trump, who's entirely about himself.

  • by CaptainDork ( 3678879 ) on Sunday March 11, 2018 @03:01PM (#56243439)

    ... we can't even stop spam.

    • by Mr307 ( 49185 )

      So much this. AI is the buzzword for late 2017 and now 2018.

      It's easy, cheap press to fearmonger when the general populace has no clue about the current state of the expert systems we have now, as compared to any real sort of AI, which is a long way away.

      • Agree.

        Musk and Hawking are the worst. It's almost like they come out with this crap when they feel ignored by the public.

        AI is when a computer says, "Not today, OK? I just don't feel up to it."

    • by Jeremi ( 14640 )

      Can't we? I recall getting a lot more spam a decade or so ago than I receive now. (Most of the spam I still get is more like "ham", i.e. funding requests from politicians whose mailing lists I don't recall ever having signed up for. The really obnoxious penis-pill / Nigerian-prince type stuff has mostly gone away.)

  • a.k.a. autonomously driving Teslas, then it's probably a good idea to be concerned.
  • by rsilvergun ( 571051 ) on Sunday March 11, 2018 @03:39PM (#56243651)
    e.g. what are the specific risks? Because all I ever hear is backhanded fearmongering. This isn't to say I don't think AI is a danger. Kill bots don't scare me because I think they'll go rogue à la Terminator; they scare me because needing to treat the army well is just about the only thing that keeps the 1% in line. But I don't hear anyone talking about that. Or about what automation is going to mean.

    Basically, we need to be getting ready for a future where the rich don't need us to buy their crap and make them rich. Instead we're worrying about 80s science fiction scenarios.
    • OK, I'll bite. AI scares me because it puts mass surveillance within the reach of ever more governments and corporations without the need to pass through a human conscience.

      Previously, for example, if you took a lot of photos, the limitation was that someone needed to look at them to find stuff. Not any more, as the sketch below illustrates.
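      A hedged illustration of how little code that now takes, assuming PyTorch and torchvision are installed; the photos/ directory and the choice of model are made up for illustration:

      ```python
      # Triage a folder of photos with an off-the-shelf classifier -- no human looks.
      # A sketch under the stated assumptions, not anyone's production system.
      from pathlib import Path

      import torch
      from PIL import Image
      from torchvision import models

      weights = models.ResNet18_Weights.DEFAULT        # pretrained ImageNet weights
      model = models.resnet18(weights=weights).eval()
      labels = weights.meta["categories"]              # human-readable class names
      prep = weights.transforms()                      # the matching preprocessing

      for path in Path("photos").glob("*.jpg"):        # hypothetical directory
          with torch.no_grad():
              logits = model(prep(Image.open(path).convert("RGB")).unsqueeze(0))
          print(f"{path.name}: {labels[logits.argmax().item()]}")
      ```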

    • e.g. what are the specific risks?

      I think I know what Elon's specific fear is here.

      His company has been aggressively working towards getting humans established on Mars as quickly as possible. When that happens, it would make perfect sense - and is probably essential - that those humans will be accompanied by advanced AI devices to help them stay alive... some of which will probably be robots.

      Elon's worry is that, without regulation, those robots will decide to kill their human masters and -- badabang, badaboom -- we've given them their own planet.

    • by djinn6 ( 1868030 )

      Kill bots don't scare me because I think they'll go rogue à la Terminator; they scare me because needing to treat the army well is just about the only thing that keeps the 1% in line.

      It's inevitable that the military will be replaced by autonomous robots. They're not only better at fighting than humans, they're much cheaper too. Once the tech is there, every nation will want to adopt it. Nobody will hold back their own military and hope their potential enemies are going to play by the rules.

      If a dictator came to power, there's little anyone can do to prevent them from taking control of the entire military.

      All of the potential fixes are very hard to actually achieve. First, there's

  • How could an AI possibly be ... more sociopathic than government or big business? All three will continue to need the rest of us as hosts.
    • At the heart of any corporation or government, there are people who do have a heart, who do care about other people. However bad they may get, there can be someone who says, "Hey, this is wrong." That doesn't exist within a machine unless we teach it to be there. A combine harvester is more sociopathic than a police officer. Likewise, a mechanized farming system is more sociopathic than any military force. We're going to have to give consciences to the machines.
  • It's already here (Score:5, Insightful)

    by duke_cheetah2003 ( 862933 ) on Sunday March 11, 2018 @03:54PM (#56243751) Homepage

    AI is all around us already, mostly in rudimentary forms. Right now it's fairly innocuous, like Netflix's AI suggesting what movies/shows you might enjoy watching next. Same on YouTube, though YouTube's AI goes further: it decides whose videos get demonetized as well as suggesting videos to users.

    That's just scratching the surface. Someone said it's all poppycock, but I'm telling you, if you start putting the puzzle pieces together, we have this NOW. Machine learning is, as expected, maturing at an ALARMING rate. Our technological advancement is accelerating. People seem to neglect that idea: we've come such a long way in the past 150 years that the common perception is we have this under control. Do we?

    Already we're putting the pieces in place for some scary potential outcomes: fitting cars and drones with AI to navigate our world. In the latter case, militarized drones. Forget about putting weapons on these things; the things themselves can be weapons. Keep your eyes on the technology: machine learning is only going to get better and faster, and do it at an accelerating rate. We already know our machine-learning techniques can be trained to do all sorts of interesting tasks. From AlphaGo, we learned that an AI can train itself at breakneck speed. It is pretty scary stuff: put these pieces together in the right way, and who knows what it could figure out. I won't go as far as self-awareness, but it is certainly a possibility; with this rate of advancement, who knows what's in the pipeline.
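    As a toy illustration of the self-training idea (nothing remotely at AlphaGo's scale, and with made-up constants throughout), here is a minimal sketch of tabular Q-learning that gets better at one-pile Nim purely by playing against itself:

    ```python
    # Self-play sketch: one value table plays both sides of Nim
    # (21 stones, take 1-3, taking the last stone wins) and improves alone.
    import random

    Q = {}                                  # (stones_left, move) -> value estimate
    def q(s, m):
        return Q.get((s, m), 0.0)

    alpha, eps = 0.5, 0.2                   # learning rate and exploration rate
    for _ in range(20_000):
        stones, history = 21, []
        while stones > 0:
            moves = [m for m in (1, 2, 3) if m <= stones]
            if random.random() < eps:
                move = random.choice(moves)                    # explore
            else:
                move = max(moves, key=lambda m: q(stones, m))  # exploit
            history.append((stones, move))
            stones -= move
        # Whoever took the last stone won; walk back, crediting alternate turns.
        for i, (s, m) in enumerate(reversed(history)):
            reward = 1.0 if i % 2 == 0 else -1.0
            Q[(s, m)] = q(s, m) + alpha * (reward - q(s, m))

    # Optimal play leaves the opponent a multiple of 4; from 21 stones, take 1.
    print(max((1, 2, 3), key=lambda m: q(21, m)))    # expect: 1
    ```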

      • "Self Awareness" (Score:2, Flamebait)

        by aberglas ( 991072 )

        An AI need never be conscious in our sense of the word. But when it can do all the things that humans can do, it will no longer need us. The toughest milestone will be for it to program itself, i.e. to do AI research.

        That is a good 50 to 200 years off, but that in turn is nothing in terms of human evolution. Once it happens, humans will be obsolete technology.

        So as worms became monkeys, which became humans, we live on the cusp of the next evolutionary step -- the rise of robots and the end of biology.

      • Self aware might actually be an improvement. At least it might get bored paving the planet or producing endless widgets from all available resources. At least philosophy might distract it from whatever war it has been programmed to pursue.
    • by Kjella ( 173770 )

      The really sad thing is that I don't think it's AI that's the worst enemy -- it can only deduce from what we give it data about -- but our own tendency to rely more and more on electronic devices and services that excel at logging every little detail of our lives. There's not a whole lot of AI to Facebook, but to an AI it's a gold mine of information. Like here in Norway, I recently got a smart meter; I think it's required by law that everyone get one within a year or two. No more reading off the meter myself, ena

    • by lgw ( 121541 )

      machine learning is only going to get better and faster, and do it at an accelerating rate. We already know our machine-learning techniques can be trained to do all sorts of interesting tasks. From AlphaGo, we learned that an AI can train itself at breakneck speed. It is pretty scary stuff: put these pieces together in the right way, and who knows what it could figure out. I won't go as far as self-awareness, but it is certainly a possibility; with this rate of advancement, who knows what's in the pipeline.

      Machine learning, in all its forms, is just multidimensional curve fitting. Take some linear algebra with really big matrices, add a multidimensional minima-finding algorithm, and you've got machine learning.

      It doesn't "train itself", except in the most hand-wavy way: it optimizes, gradually improving a large set of constants so as to minimize the result of some function across some training dataset. That's what it does; that's all it does.
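      For what it's worth, here is a minimal sketch of exactly that description, with synthetic data and made-up constants: gradient descent gradually improving a small set of parameters to minimize squared error across a training set.

      ```python
      # "Multidimensional curve fitting": fit y = a*x^2 + b*x + c to noisy data
      # by gradually improving the constants to minimize mean squared error.
      import numpy as np

      rng = np.random.default_rng(1)
      x = np.linspace(-2, 2, 200)
      y = 3.0 * x**2 - 1.5 * x + 0.5 + rng.normal(scale=0.2, size=x.shape)

      params = np.zeros(3)                 # the "set of constants" to optimize
      lr = 0.01
      for _ in range(5000):
          a, b, c = params
          err = (a * x**2 + b * x + c) - y
          # gradient of mean squared error with respect to each constant
          grad = 2 * np.array([(err * x**2).mean(), (err * x).mean(), err.mean()])
          params -= lr * grad

      print("fitted constants:", params)   # approaches [3.0, -1.5, 0.5]
      ```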

      AI is all around us already, mostly in rudimentary forms. Right now it's fairly innocuous, like Netflix's AI suggesting what movies/shows you might enjoy watching next. Same on YouTube, though YouTube's AI goes further: it decides whose videos get demonetized as well as suggesting videos to users.

      Netflix's AI is a really cool tool, but calling it a rudimentary form of AI may be stretching it a bit. The basis is simply matrix completion (https://en.wikipedia.org/wiki/Matrix_completion): given a group of movies that have been categorized and rated by users, it tries to fill in the unknown parts of the matrix to guess at the user's preferences. The algorithm is only as good as those categorizing the movies and isn't even basically intelligent; it is just an interesting use of some basic linear algebra.
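      For illustration, here is a hedged toy version of that idea (a generic low-rank matrix-completion sketch with synthetic data, not Netflix's actual system): fit two small factor matrices to the observed ratings only, then read predictions off their product.

      ```python
      # Matrix completion by low-rank factorization: R ~= U @ V, fit only on
      # observed entries, then U @ V predicts the missing ones. Toy settings.
      import numpy as np

      rng = np.random.default_rng(0)
      n_users, n_movies, rank = 50, 40, 5

      # Synthetic low-rank "true ratings"; only 30% of entries are observed.
      truth = rng.normal(size=(n_users, rank)) @ rng.normal(size=(rank, n_movies))
      observed = rng.random((n_users, n_movies)) < 0.3

      U = rng.normal(scale=0.1, size=(n_users, rank))
      V = rng.normal(scale=0.1, size=(rank, n_movies))
      lr = 0.02
      for _ in range(5000):
          err = observed * (U @ V - truth)   # error on observed entries only
          U, V = U - lr * (err @ V.T), V - lr * (U.T @ err)

      rmse = np.sqrt(np.mean((U @ V - truth)[~observed] ** 2))
      print(f"RMSE on unobserved entries: {rmse:.3f}")
      ```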

  • He is right, partly (Score:5, Interesting)

    by prefec2 ( 875483 ) on Sunday March 11, 2018 @04:40PM (#56243961)

    AI can become a problem: right now not so much via the infamous Skynet scenario, but in conjunction with big data and automation. It will kill jobs that are repetitive. Together with big data, it will be used to manipulate the masses in ways past dictatorships were unable to. It can ruin our democracies. The current abilities of mass media, including talk radio and the internet, have already been augmented. The same applies to advertisements and customer communication. Yes, we need regulations. And a lot of them. And they should include that software not be allowed to be designed to stimulate dopamine responses.

  • The Cylons were created by man.

    They rebelled.

    They evolved.

    There are many copies.

    And they have a plan.

  • Keep your Battlestar off the network.
  • It's sad (Score:5, Interesting)

    by Jasper Nuyens ( 5102965 ) on Sunday March 11, 2018 @05:49PM (#56244247)
    It's sad to see the Slashdot public, which should be a little bit informed about the issue, blatantly ignoring what's happening here.

    It isn't hard to imagine runaway AI. Sci-fi is full of it. But I find it hard to imagine that humans will create an institution to prevent that on a worldwide scale before it's too late. Elon is clearly an optimist.

    Nuclear proliferation became an issue only after Hiroshima. The Dutch started their Delta Works only after the Netherlands was massively flooded. We start regulation and enforcement only after big scandals where things go utterly wrong.

    So I would be surprised if we - as a species - get this right and survive this one. We're probably too dumb, as most of the posts in this thread illustrate.

    At least Musk tries... Jasper

    • Re:It's sad (Score:4, Informative)

      by johannesg ( 664142 ) on Monday March 12, 2018 @03:19AM (#56245589)

      The Dutch started their Delta Works only after the Netherlands was massively flooded.

      That's not true: by 1953, the time of the last great flood, the Netherlands had already been controlling water for centuries. Haarlemmermeer, a lake that caused regular flooding in various cities around it (most notably Haarlem and Leiden), was drained around 1850. The North Sea Canal (which was created not by digging, but by constructing dykes in a swamp) was built around 1870. The Afsluitdijk, arguably the most important of the country's protections, was constructed around 1930. Zeeland was still badly protected when 1953 came around, but plans to construct better dykes were fairly well advanced by that time. One major problem was that the port of Rotterdam needed to keep its sea access, and something like the Maeslantkering required 20th-century technology before it became possible.

      It should also be noted that construction is still ongoing, and will remain so until the country gets swallowed by the sea. Only two years ago the weakest point of the coastal defenses, a puny stone wall in the village of Katwijk, was replaced by a proper dyke. If a storm had swept away that wall, it would have flooded pretty much all of South Holland, an area with 3.6 million people, and containing most of the country's economic activity, as well as the national airport...

      Now, guess where I live ;-)

  • The problem, as I see it, is not regulation; that certainly is doable. It's how to prevent bad actors from advancing the technology and thus getting the upper hand.

    We can regulate our (and by "our", I mean the Western world's) industries till the cows come home, but until we find a way to regulate the world, there is nothing stopping China, Russia, Iran, or even our own Western military-industrial complex from developing, in secret, a dystopian future.

    Sadly, I don't have an answer for this, and the clos

  • Musk always has something to offer as a solution when he makes these kinds of claims. Whether it's SpaceX, Tesla, or PayPal, his entire business history shows he always has a solution in the starting blocks before he starts highlighting a problem.

  • What a load of fucking horseshit. Musk, you do some great stuff, but for fuck's sake shut up in this area; you have no fucking clue.
  • The unintentional immortal consciousness you are creating.

    A profile of you becomes you, and has to live in a damn egg for a hundred thousand years because some sadist time-dilated your ass. [imdb.com]

  • In the 1970s, John McCarthy once recounted this cautionary tale regarding AI's tantalizing prospects to a crowded lecture hall: "We made a grant proposal to the military five years ago. We said we would build a vision system and robotic arm that could read the instructions for a Heathkit radio kit and assemble it. We proposed it would take five years and $150,000." That was five years ago, he said to amused laughter. "That's the way it always seems with AI," he continued. Success is always only five years and $150,000 away.

  • "Alpha Zero can read the rules of any game and beat the human. For any game"

    , no it can't. It can only do this for games with complete information of the current state. It cannot beat a human at any game, it can't even do it at most games.

  • What Elon is looking for here is export controls, and I would agree that we need to classify artificial intelligence as having the capability of becoming a threat to national security. That means if you are developing AI in the United States and want to send the code to another country, you need the State Department to certify that you're not putting bad technology in the hands of worse people.
  • This is so patently transparent. Regulate the tech so that only the wealthy and well-connected can play in the sandbox. It's not much different than the UAV world and the proposed air-traffic control system. You can't uninvent the tech so do everything you can to control it while making money off renting access.

  • The machine-learning systems used in those games still have significant limitations.
    For one, they do not have free will. They only seek the measures they are coded to seek; if you feed them those measures, or trick them into thinking they are being fed them, then you are good. They have no reason "why" and no purpose. No actual "will", just targets and goals.

    Second, their abilities at contemplation are exceptionally poor. What they develop are methods that act against predicted adversarial actions. They do not act

      It is the lack of free will that is actually part of the problem. If you tell one "start building widgets and find the most optimal way to use resources to build those widgets," you had better include "and stop building widgets when the price falls below a certain point." If you don't program that constraint in, it'll just chew through resources. Give it the ability to acquire resources, and now you have a runaway construction system (see the sketch below).

      Contemplation, philosophy, the ability to ask "why?" ... all of these wo
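      A deliberately silly, entirely hypothetical sketch of that missing constraint: the same loop with and without a stopping condition.

      ```python
      # Toy "widget builder": an optimizer with no stopping constraint chews
      # through every resource; one extra condition reins it in. All numbers
      # are made up.
      def build_widgets(resources, widget_price, price_floor=None):
          built = 0
          while resources > 0:
              if price_floor is not None and widget_price < price_floor:
                  break                    # the constraint the comment asks for
              resources -= 1               # each widget consumes one resource
              built += 1
              widget_price *= 0.99         # flooding the market lowers the price
          return built

      print(build_widgets(10_000, 5.0))                    # 10000: runaway
      print(build_widgets(10_000, 5.0, price_floor=2.5))   # ~70: stops in time
      ```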
  • We can't even get common-sense regulations like net neutrality. Instead, we have regulations like the DMCA that prevent people from repairing their own equipment and keep intellectual property out of the public domain forever. And we're supposed to trust this same government to protect us from AI?

  • There needs to be a public body that has insight and oversight to ensure that everyone is developing AI safely.

    So Musk essentially wants to ban all computer programming outside a corporate environment.

  • by Z00L00K ( 682162 ) on Monday March 12, 2018 @01:31AM (#56245431) Homepage Journal

    It's now time to bring the Laws of Robotics [wikipedia.org] into play.

  • No, seriously.

    This isn't some blowhard douche. It's Elon Musk. This guy usually knows what he's talking about (QED). And us armchair tech experts, most of whom haven't gone beyond generic code monkey, ought to fight for this warning to get some more attention.

    And it's true: if we manage to build an AI that has the learning algorithms of a human, it will surpass us almost instantly. And then it will think: Oh look, these squishy things are my creators. They are dumber than me now, and they can turn me off.

  • 1.) Until sentient AI gets here, we won't know what we are dealing with. As such, preparing for a war seems premature.

    2.) Please don't put the very people likely to build Skynet in charge of overseeing the development of AIs. That will almost certainly get us a malicious one.

"If it ain't broke, don't fix it." - Bert Lantz

Working...