
Elon Musk Thinks We Will Have To Use AI This Way To Avoid a Catastrophic Future (cnbc.com) 110

Elon Musk has long said that artificial intelligence will have to augment human abilities, rather than compete with them, in order to avoid a portentous future. He has been active in trying to find ways to evaluate and reduce the potential risks posed by AI. From a report: On Monday, Musk tweeted out a set of principles for AI research and development created by a group of scientists at a recent conference of the Future of Life Institute (of which Musk is a board member). Musk said in response to a comment that ensuring AI augments human abilities is "critical to the future of humanity." Musk recently told a Twitter user that there may be an announcement "next month" regarding such a device, which he has in the past called a neural lace.

  • by diesalesmandie ( 4523641 ) on Tuesday January 31, 2017 @03:58PM (#53776715)
    FFS editors, please don't word story titles like clickbait; it just makes you look less credible.
  • Since it's only down to interpretation where augmenting human abilities ends and replacing them begins.
    I mean, if AI really wanted to take over, it could just augment us sufficiently until there really isn't a clear line between humans and AI, then take it from there.
    In fact, it's pretty similar to the "embrace, extend, extinguish" strategy that Microsoft use(d) to make itself into a megacorp, and now to just keep itself there.

    • You just described the basic plot of Transcendence http://www.imdb.com/title/tt22... [imdb.com]
  • How very Culture.
    • And we thought we'd have to wait until we were a stage 6, at least.
    • by hey! ( 33014 )

      I think you might have meant "Couture". But will it be bespoke or prêt-à-porter?

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
    • Ask Isaac Asimov how well those will work.
    • by Anonymous Coward

      A robot may not injure a human being or, through inaction [etc]

      Don't know whether you were intentionally suggesting these at face value or tongue-in-cheek/ironically making a point.

      People really seem to forget that the whole *point* of a lot of the "three laws" stories wasn't the laws themselves, but to show that such apparently clearly-defined laws had unforeseen consequences that showed they couldn't be relied upon in the way intended.

      *Incredibly* perceptive for a guy writing during the 1940s, but the reality we have today suggests it would be even harder in practice.

    • Nice, but these are in a form that requires human-type intelligence to comprehend, and Asimov's robots have basically human intelligence with restrictions like this. We can't do human-type intelligence or anything near it. As an example, First Law requires that a robot understand what is a human (in one of the Lucky Starr novels, an enemy attempted to convince robots that Lucky's sidekick was not a human), what harm is, and requires a sophisticated universe model to calculate whether a given action would

  • You keep using that word... I do not think it means what you think it means.

    Really - look it up... Although the use of it does bring to mind its third definition from the American Heritage dictionary: Marked by pompousness; pretentiously weighty.

    • synonyms: ominous, warning, premonitory, threatening, menacing, ill-omened, foreboding, inauspicious, unfavorable "portentous signs"

      I thought the same thing, but look-y here.

  • AI will have to augment human intelligence? Good luck with that. From where I'm sitting, the entire boomer generation is rejecting old fashioned brain augmentation in the forms of education, listening to experts on any subject, paying attention to current events, paying attention to history, or even admitting basic facts. You're not going to get the people who really need it to put a device on their brain with the promise it will make them smarter.
  • by lakeland ( 218447 ) <lakeland@acm.org> on Tuesday January 31, 2017 @04:20PM (#53776861) Homepage

    I develop marketing automation software. Lots of people talk about 'set and forget' on their marketing, where the AI takes care of everything. This is a really bad idea. Every time I've tried to do something like this it has backfired with the AI doing something spectacularly stupid.

    Instead where I have had success is where the AI's role is to fill in the little details that would be boring for a human. Essentially the role of the AI is to tune rather than create. For example, the human might craft the first couple emails and then leave the AI to start moving sentences around for better effectiveness. It is completely unrealistic for people to craft the best message for every single person on their database, but it is perfectly reasonable for a person to produce the first ten or twenty and then leave a computer to fill in the gaps.

    Similarly, feedback from the AI needs to go back to the human so they can provide guidance. For example: "the content was not very effective with this segment", and the human provides more training data on how to communicate with people who fall into that segment. I think about it as giving power to the human - adding richness and fine-tuning to all of their decisions. The AI is never in control. Even if almost all the decisions are made by the AI, it is always within the guidance provided by the human.

    Maybe this will change one day; at the moment AI sucks at extrapolating but is awesome at interpolating. This means a human is going to do a far better job of setting strategy, but will quickly lose interest if they have to do every micro-execution.
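
    A minimal sketch of the "AI tunes, human creates" split described above, assuming the tuning is nothing fancier than an epsilon-greedy choice among a handful of human-written variants, with weak segments reported back to a person. The names here (HUMAN_VARIANTS, pick_variant, report_for_human) are made up for illustration, not any real product's API.

    ```python
    # Hypothetical sketch: the human writes every variant; the "AI" only picks
    # among them per segment and flags segments where nothing is working.
    import random
    from collections import defaultdict

    HUMAN_VARIANTS = [                    # crafted by a person, never generated
        "Hi {name}, your plan renews soon - here's what's new.",
        "{name}, a quick heads-up: your renewal is coming up.",
    ]

    stats = defaultdict(lambda: [0, 0])   # (variant, segment) -> [sends, clicks]

    def pick_variant(segment: str, epsilon: float = 0.1) -> int:
        """Explore occasionally; otherwise exploit the best-performing variant."""
        if random.random() < epsilon:
            return random.randrange(len(HUMAN_VARIANTS))
        rates = [stats[(i, segment)][1] / max(stats[(i, segment)][0], 1)
                 for i in range(len(HUMAN_VARIANTS))]
        return max(range(len(HUMAN_VARIANTS)), key=rates.__getitem__)

    def record_outcome(variant: int, segment: str, clicked: bool) -> None:
        stats[(variant, segment)][0] += 1
        stats[(variant, segment)][1] += int(clicked)

    def report_for_human() -> list:
        """Feedback loop: flag segments where no variant works, so a human rewrites."""
        return [f"variant {v} is ineffective for segment {s!r}"
                for (v, s), (sends, clicks) in stats.items()
                if sends >= 100 and clicks / sends < 0.01]
    ```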

    PS: What's up with the article title? How about 'Elon Musk believes AI needs to augment humans instead of replacing them'?

    • Re: (Score:2, Funny)

      by Anonymous Coward

      I develop marketing automation software.

      Kindly die in a fire, please.

      • by Anonymous Coward

        I second this.

      • Re: (Score:3, Interesting)

        by lakeland ( 218447 )

        I develop marketing automation software.

        Kindly die in a fire, please.

        LOL!

        You don't like interacting with companies that try to understand you? You'd rather receive the same generic product offers as everyone else?

        I just don't believe this. I believe that the more a company tries to make their communication relevant for each customer, the more value the customer gets from the company. Incidentally, I don't know if you've thought about the impact of AI personal assistants on marketing. If, say, Walmart sends you an offer that your personal assistant believes won't interest y

        • by aaronb1138 ( 2035478 ) on Tuesday January 31, 2017 @05:58PM (#53777555)
          I buy products when and where I want / need to. I don't need a personal relationship with the establishment or increased communication from the entity. In the cases where I do have a personal relationship which drives my desire to purchase goods / services, the relationship is with a person or people (e.g. my favorite bars).

          What does drive me to a different competitor is advertising / marketing methods I find to be any of the following: invasive, absurd, immature, over the top, lacking in class, offensive, over-budgeted, unethical, and others. I vote with my pocketbook against vendors I dislike rather than for a particular or special one.

          I don't need product offers in the first place. The idea that I need to be offered products that I am not already researching prior to purchase is intellectually insulting, and part of the consumerism + marketing-driven problem. You remove value from the interdependent system of producers and consumers and increase the cost of products.
          • It's not really my field, but what do you think of Hubspot and their inbound marketing? The basic idea is that you help the consumer with their research - trying to point them towards content that you think is what they need to know to make a decision. Obviously you want them to choose you, but the reason I'm bringing it up is that you're selecting information to help inform your decision rather than simply saying 'this product exists'.

            I'm not going to comment on your list of invasive, etc as it's just no

          • by fisted ( 2295862 )

            A hundred times this.

          • by khallow ( 566160 )

            What does drive me to a different competitor is advertising / marketing methods I find to be any of the following: invasive, absurd, immature, over the top, lacking in class, offensive, over-budgeted, unethical, and others. I vote with my pocketbook against vendors I dislike rather than for a particular or special one.

            Damn, take this down! He's just giving out these amazing ideas for free!

        • You don't like interacting with companies that try to understand you? You'd rather receive the same generic product offers as everyone else?

          I just don't believe this. I believe that the more a company tries to make their communication relevant for each customer, the more value the customer gets from the company.

          Yes, I would prefer to receive the same generic product offers as everyone else—the less relevant the better. That way, I am less likely to be subtly influenced against my will and contrary to my own interests. I'd prefer to avoid these ads entirely, but to the extent that is impossible, by all means, fill the ubiquitous ad spots with products I will never buy and services I will never require.

          It's a different matter when I'm in "shopping mode" searching for a specific product. In that case relevance

          • There's a company called OpenDNA (as opposed to Elon's OpenAI) which is trying to develop an understanding of what everyone likes. Their idea is that they will expose it to you and you'll be free to make changes. I.e. it's your profile, and you get complete control. They'll then sell the ability to match products to customers based on the data they look after.

            I have my doubts about whether they'll succeed. I'm not sure consumers are quite ready for being told where they sit on the premium to budget conti

        • I'll buy that (pun intended) when that AI really is a personal assistant. My personal assistant, not yours. So that my data and my personal life is safe from the clutches of marketing firms who sell my data to help others bring me more relevant offers (which to date isn't working at all). When I have a personal AI, you no longer have to try and convince me that you have something worth listening to, you can simply send everything to my AI, all your ads and product info. And my AI will select what it th
          • Yes, you can always get past once, but it's a dumb thing to do. I have that conversation frustratingly frequently - we build up a good base of highly engaged customers because we've consistently delivered something relevant. Then some stakeholder wants to push a particular product and has enough political clout to overrule me (the consultant).

            In some ways their offer will succeed - it will generate more sales than if I had my way and only went to the people who cared about it. The damage it does to your

        • by Anonymous Coward

          Just get rid of the scam ads and the javascript bombs and I'll be happy enough.

        • by AmiMoJo ( 196126 )

          You don't like interacting with companies that try to understand you? You'd rather receive the same generic product offers as everyone else?

          Yes, that would be great.

          A while back I was looking for an otoscope, a device for looking inside the ear. Unfortunately the cheaper ones come with a variety of attachments so that they can be used for examining other orifices, including the anus. I made the mistake of being logged in to the site I was searching on, and now every time I go there it offers me a wide selection of products to shove up my arse.

          Had to burn that account.

          I try hard to protect my private data, which includes my preferences and shopp

          • That's a brilliant anecdote; do you mind if I use it?

            I understand what you're saying. There's a lot of truth to it and there's a fair number of people who feel the same way as you. I still feel we're better with more personalisation than less, but you are absolutely right that there's an uncanny valley as the computer starts to understand you.

            A few years ago I worked for a very large loyalty program where I built customer personas. One of our participating companies was a clothing retailer. Big brand, n

            • by AmiMoJo ( 196126 )

              Thing is, I expend a fair bit of effort trying to avoid the computer getting to know me and in poisoning marketing data. Personalization is creepy, I don't know these computers or these companies, I don't want them tracking my habits and interests. I worry that the data will be misused, and try to block as much advertising as possible because what I do get is often embarrassing, inappropriate or malware.

      • Kindly die in a fire, please.

        "Did you mean: 'Kindle die in Amazon Fire, please.'?" -- your marketing automation software.

    • The computer program. Stop saying AI, you fucking moron. That is not AI.

      • by Anonymous Coward
        Then what is? Please define AI.
        • by Anonymous Coward

          With someone like the GP, the definition always boils down to "whatever hasn't been invented yet". This obviously excludes anything he previously counted as AI that was then invented. If it becomes real, it stops being AI.

    • by Anonymous Coward

      Lots of people talk about 'set and forget' on their marketing, where the AI takes care of everything. This is a really bad idea. Every time I've tried to do something like this it has backfired with the AI doing something spectacularly stupid.

      I've had exactly the same problem with humans!

    • Could you please do the world a favor, and spread the word, far and wide, what 'AI' really is, and get your colleagues to do the same? The world-at-large seems to think that what they see in movies and TV that is referred to as 'artificial intelligence' is the reality and not a fantasy.
      • I've been trying for a number of years, unfortunately with very little success. Even with clients I still get people expecting the computer to extract the topic of an email, but when I describe an approach of, say, TF/IDF, they claim that is not AI and they don't like it. I suspect I'm better at implementing it than selling it.
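
        For what it's worth, the kind of TF/IDF topic extraction mentioned above fits in a few lines. This is a generic sketch using scikit-learn's TfidfVectorizer on made-up emails, not the poster's actual system:

        ```python
        # Score each email's words by how distinctive they are relative to the
        # rest of the corpus and surface the top few as a crude topic label.
        from sklearn.feature_extraction.text import TfidfVectorizer

        emails = [
            "Your invoice for March is attached, payment is due Friday.",
            "Team offsite next week, please confirm your dietary requirements.",
            "Invoice reminder: the March payment is still outstanding.",
        ]

        vectorizer = TfidfVectorizer(stop_words="english")
        tfidf = vectorizer.fit_transform(emails)   # rows: emails, cols: terms
        terms = vectorizer.get_feature_names_out()

        for row in range(tfidf.shape[0]):
            scores = tfidf[row].toarray().ravel()
            top = scores.argsort()[::-1][:3]       # three highest-scoring terms
            print("email", row, "topic words:", [terms[i] for i in top])
        ```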

    • The way it seems to go for game AI, generally:

      - Minimax-type AI has spectacularly good micro, but sucks at macro. E.g. chess AIs, or see minimax used on RTSes [arxiv.org] - quote: "RTMM plays perfect short term micro-scale game, but plays a very bad high-level (long term) strategy ..." (a toy minimax sketch follows this list)
      - UCT-type AI has somewhere between consistently poor and consistently average play on both macro and micro; see e.g. Go programs prior to AlphaGo.
      - Neural-net AI has good macro but sucks at micro. See AlphaGo: as long as it coul
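
      A toy depth-limited minimax to make the "perfect short-term micro, weak long-term strategy" point concrete: within the search horizon play is exact, beyond it a crude evaluation function takes over. The one-heap Nim rules and the blank heuristic here are illustrative stand-ins, not a real game engine.

      ```python
      # Illustrative only: generic depth-limited minimax, with one-heap Nim as
      # the game (remove 1 or 2 stones; whoever takes the last stone wins).
      def minimax(state, depth, maximizing, moves, apply_move, evaluate, terminal):
          """Best achievable score from `state`, searching `depth` plies ahead."""
          if terminal(state):
              return -1 if maximizing else 1  # player to move cannot move: they lost
          if depth == 0:
              return evaluate(state)          # horizon: fall back to a rough heuristic
          scores = [minimax(apply_move(state, m), depth - 1, not maximizing,
                            moves, apply_move, evaluate, terminal)
                    for m in moves(state)]
          return max(scores) if maximizing else min(scores)

      moves = lambda n: [m for m in (1, 2) if m <= n]
      apply_move = lambda n, m: n - m
      terminal = lambda n: n == 0
      evaluate = lambda n: 0                  # "knows nothing" heuristic at the horizon

      search = lambda heap, depth: minimax(heap, depth, True, moves,
                                           apply_move, evaluate, terminal)
      print(search(4, 8))   # 1: a deep enough search finds the forced win (take one)
      print(search(4, 2))   # 0: past the 2-ply horizon the blank heuristic sees nothing
      ```
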
      • AlphaGo is really interesting.

        We use deep belief nets for product selection (with thumbs up/down as feedback). So far it has been pretty good at selecting individual products, but the overall email is a poor representation of all products on offer. Contrast that with AlphaGo, which played strong fuseki across all of its games. Perhaps, like AlphaGo, we need to bring in another layer and train that on 'any purchase' rather than positive engagement with the email.

        The other place we don't do so well is that we h

  • belongs on buzzfeed. You won't believe how the Slashdot users responded!

  • Laptop batteries are the answer to everything. It is the miracle form factor which powers cars, vibrators, off-grid storage, and even laptops! Robots powered with laptop batteries will obey Asimov's 3 Laws. You buy NOW
  • One man's neural lace is another man's nerve staple.
  • by werepants ( 1912634 ) on Tuesday January 31, 2017 @06:03PM (#53777603)

    The Neural Lace is yet another concept from the Iain M. Banks Culture novels. So I guess it's clear at this point that Elon's a sci-fi fan, and a fan of Banks in particular. If the names of the SpaceX drone ships Just Read the Instructions and Of Course I Still Love You weren't evidence enough, this seals the deal.

    Interestingly, in his book Excession, it's mentioned that along with being a direct mental interface to computers, the Neural Lace is the most effective means of human torture ever devised...

  • Elon Musk has long said that artificial intelligence will have to augment human abilities, rather than compete with them, in order to avoid a portentous future

    But programming an AI to do things I don't want to do is augmenting my human abilities, and competing with the human who I normally pay to deal with it.

    As an example: I can't possibly cut my lawn more efficiently than a heavily-invested landscape crew: they charge much less than I could theoretically make in the same time, and paying them frees up multiple hours for me to do other things.

    And as soon as a sufficiently talented lawn robot becomes cheaper, they'll be gone and it will also free up my time and li

  • Or we could end up like the soldier character Amanda Bates in Peter Watts's Blindsight [rifters.com], a human inserted into a network of AI-driven machines, each quicker and more lethal than her fragile human self. Yet she has the final say and utmost authority over the decision to kill.

    The unfortunate implication? Her AI team becomes far more dangerous once the slow-thinking and squishy human dies, and they get let off the leash. Meaning that she has as much to fear from her superiors as from her enemies.

  • How about A.I. for the average guy? What would an average guy use A.I. for? I'm thinking of normal things like insurance costs, truck purchasing, bar-b-q; things that matter.
    • I have a Honeywell T87A AI. It turns my heater on when it gets too cold and off when it is warm enough. And that's only in the winter. In the summer, it turns the AC on when I get too warm and off when I am cool enough. It's amazing.
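
      Joking aside, that thermostat is the textbook case of bang-bang control with hysteresis. A rough sketch of the idea (the setpoint and 0.5-degree deadband are invented numbers, not Honeywell's):

      ```python
      # Bang-bang control with hysteresis; setpoint and deadband are made up.
      def thermostat(temp_c: float, setpoint_c: float, heating_on: bool,
                     deadband_c: float = 0.5) -> bool:
          """Return whether the heater should run for the next interval."""
          if temp_c < setpoint_c - deadband_c:
              return True               # too cold: switch (or stay) on
          if temp_c > setpoint_c + deadband_c:
              return False              # warm enough: switch (or stay) off
          return heating_on             # inside the deadband: hold state, no chatter

      state = False
      for reading in (18.0, 19.4, 20.4, 20.6, 19.8):
          state = thermostat(reading, setpoint_c=20.0, heating_on=state)
          print(f"{reading:.1f} C -> heater {'on' if state else 'off'}")
      ```
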
  • Given the giant push for fully autonomous cars, it's pretty fun for him to say the only safe use of AI is augmented systems... if that were true, you would just use AI to augment driver inputs instead of taking over from them entirely. That's not a bad idea, but that sure is not where Tesla is going.

    An AI-augmented car would be one that would see you were trying to veer around something and enhance your attempted steering input to succeed if possible - including putting the car up on two tires...
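
    A hand-wavy sketch of what "augmenting driver inputs" could look like, as imagined above: the driver's steering sets the intent and the assist only adds a bounded correction in the same direction. The function and its limits are hypothetical, not anything Tesla actually ships.

    ```python
    # Hypothetical assist: the driver always leads; the AI only scales the
    # evasive correction, capped, and never steers against the driver.
    def assisted_steering(driver_angle_deg: float,
                          obstacle_offset_deg: float,
                          max_assist_deg: float = 5.0) -> float:
        """Blend driver intent with a bounded evasive correction."""
        if driver_angle_deg * obstacle_offset_deg > 0:   # same direction only
            assist = max(-max_assist_deg, min(max_assist_deg, obstacle_offset_deg))
        else:
            assist = 0.0                                 # never fight the driver
        return driver_angle_deg + assist

    print(assisted_steering(driver_angle_deg=8.0, obstacle_offset_deg=12.0))   # 13.0
    print(assisted_steering(driver_angle_deg=-3.0, obstacle_offset_deg=12.0))  # -3.0
    ```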

  • Unlike Watson, it helps humans do their work without the unnecessary destruction.

  • It's not an either/or situation; we'll have both competition *and* augmentation. Though it's nice that Musk doesn't want AI to compete with people, it's too late, we're already there, and AI is increasingly on the winning side -- falling costs, better performance, and none of the hassles associated with hiring meatbags. The only problem is economic: who's your customer when you've put everyone out of work?
