AI Science Technology

Many Machine Learning Studies Don't Actually Show Anything Meaningful, But They Spread Fear, Uncertainty, and Doubt (theoutline.com) 98

Michael Byrne, writing for the Outline: Here's what you need to know about every way-cool and-or way-creepy machine learning study that has ever been or will ever be published: Anything that can be represented in some fashion by patterns within data -- any abstract-able thing that exists in the objective world, from online restaurant reviews to geopolitics -- can be "predicted" by machine learning models given sufficient historical data. At the heart of nearly every foaming news article starting with the words "AI knows ..." is some machine learning paper exploiting this basic realization. "AI knows if you have skin cancer." "AI beats doctors at predicting heart attacks." "AI predicts future crime." "AI knows how many calories are in that cookie." There is no real magic behind these findings. The findings themselves are often taken as profound simply for having way-cool concepts like deep learning and artificial intelligence and neural networks attached to them, rather than because they are offering some great insight or utility -- which most of the time, they are not.
Comments:

  • That's what Skynet wants you to think.
  • by nysus ( 162232 ) on Friday September 15, 2017 @11:09AM (#55203057)

    When AI can teach itself how to use a programming language from documents found on the internet or solve a long unsolved mathematical puzzle, that'll be something to talk about.

    • Re: (Score:2, Informative)

      by gweihir ( 88907 )

      This is not strong AI we are talking about here, it is weak AI. Strong AI does not exist. Weak AI cannot do anything that requires insight or understanding; all it can do is statistical classification.

      • by HiThere ( 15173 ) <charleshixsn.earthlink@net> on Friday September 15, 2017 @12:01PM (#55203527)

        Please demonstrate that a human can do something that requires actual insight as opposed to statistical calculation. Now prove that this wasn't done via statistical calculation.

        The real problem that most AIs have is lack of grounding and a weak goal structure. But if they have decent grounding and decent ability to manipulate their environment, then you'd better pray that you got the goal structure correct, whether or not they are "strong AI". Cockroaches aren't strong AI, but just try to get rid of them. And the AI will make itself useful to some powerful group of people. (Possibly the group that caused it to be created, but that depends on the goal structure.)

        • by umghhh ( 965931 )
          I just wonder whether strong AI will ever be set as a goal, and what the logic behind it would be. I mean, a real strong AI is an entity that is sentient. I wonder whether switching it off would be only a moral problem, or a legal one too, for instance. Then there is the question of utility. The real strong AI that we all want (???) to see would pose the same problem that slavery posed: why would an intelligent being do anything for the moron who owns it? Well, I guess out of fear of being switched off, possibly. But otherwise why would ...
          • by gweihir ( 88907 )

            I mean a real strong AI is an entity that is sentient.

            That is unclear at this time. Sure, we only observe "strong intelligence" in connection with self-awareness and free will, so it seems reasonable that these are aspects of the same thing, but there is actually no scientific evidence either way. And there are a few problems with both free will and self-awareness. For one thing, there is absolutely no mechanism for self-awareness in Physics. It seems to be an extra-physical thing. In Physics, the whole is not and cannot be more than the sum of its parts. There ...

        • by gweihir ( 88907 )

          You are welcome to be a mindless p-zombie (as you just basically said that you cannot have insights, you are one), but I am not one of those.

          • by ceoyoyo ( 59147 )

            You've claimed it. Lots of people do. Now prove it.

            Research has shown that our subjective assessment of our insight, use of logic, self-awareness and free will is a gross overestimate, at best. That's not to say we don't actually do those things, but there isn't really any proof that we do. And some of them are pretty problematic from what we know of physics.

        • Why should humans be the point of reference?

      • Strong AI does not exist.

        Which is to say, "something that I define as not existing" does not exist.
        Profound.

  • by hey! ( 33014 ) on Friday September 15, 2017 @11:10AM (#55203085) Homepage Journal

    The human brain sees patterns everywhere it looks, too.

    I'm retired now, but I've been doing a lot of reading and experimentation with decision-tree-based classification methods. I like these because they produce models you can examine critically, as opposed to so-called "deep learning" algorithms, which produce results you pretty much have to judge by whether they give you the answer you expect. It's not that that isn't useful in some cases, but I don't find it as interesting.
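
    (As a minimal sketch of what "examine critically" means here, assuming scikit-learn and its bundled iris data purely for illustration:)

      from sklearn.datasets import load_iris
      from sklearn.tree import DecisionTreeClassifier, export_text

      # Fit a small tree; max_depth keeps the model human-readable
      iris = load_iris()
      tree = DecisionTreeClassifier(max_depth=3, random_state=0)
      tree.fit(iris.data, iris.target)

      # The learned model prints as plain if/else rules you can critique,
      # unlike the opaque weights of a deep network
      print(export_text(tree, feature_names=list(iris.feature_names)))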

    • by swillden ( 191260 ) <shawn-ds@willden.org> on Friday September 15, 2017 @11:46AM (#55203397) Journal

      The human brain sees patterns everywhere it looks, too.

      Yep. Pattern identification system identifies patterns, news at 11.

      OTOH, the ones I find interesting are the cases where ML identifies patterns that humans might not be able to identify. Sometimes this is less interesting for the potential to use a machine to identify these patterns than for the indication that patterns exist where we might think they don't.

      The recent paper on ML "gaydar", where researchers trained a machine to identify sexual orientation from dating site photos is a potentially-fascinating example. In that one I suspect the algorithm was mostly keying on things like hairstyle and other indicators that the people in the photos might be deliberately using to advertise their orientation (this was a dating site), but there seems to be some evidence that facial structure also played a significant role. Of course, given that the most common genders and orientations (cisgendered heterosexuals) are clearly strongly correlated with certain characteristic facial structures, it's certainly reasonable to expect that the same genetic and development processes that determine gender and orientation and clearly affect face structure in the common cases should also affect face structure in the less common cases. But humans can't really see these patterns. Absent behavioral clues, people have very bad "gaydar". But that doesn't mean the patterns aren't there, just that we can't see them. If ML can see strong correlations between facial structure and orientation (and I don't think this one study proves that), that tells us something about the nature of sexual orientation and the degree to which it is expressed in the body.

    • by tomhath ( 637240 ) on Friday September 15, 2017 @11:53AM (#55203455)

      On one level, this study showed that you can correlate a conclusion (gay/not gay) with some pattern in the data. The problem is that it has only recognized a pattern in that particular data set.

      The article mentions the factors were probably grooming and how the person posed for the picture. Okay, that might be somewhat useful if you are trying to guess sexual preference from a picture on a dating site. But not necessarily, because they threw out sample pictures that didn't provide the clues they were looking for, and (I suspect) selected test pictures that did have the clues.

      • by hey! ( 33014 ) on Friday September 15, 2017 @12:12PM (#55203635) Homepage Journal

        The usual methodology for training is that you start with a big sample of data and randomly divide the records into two subsets; the first you use to train the model and the second you use to test the results of the training.

        If there is no statistical difference between your training and testing groups, better-than-random performance on the test data indicates that your algorithm actually learned something about the original universe of data. At that point you have the same problem you always have in statistics when you try to use your results: is some set of data you encounter in the wild, so to speak, really comparable to the data you built the model on?
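
        (A minimal sketch of that procedure, assuming scikit-learn and a synthetic dataset standing in for the "universe of data":)

          from sklearn.datasets import make_classification
          from sklearn.model_selection import train_test_split
          from sklearn.tree import DecisionTreeClassifier

          # Synthetic stand-in for the original universe of data
          X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

          # Randomly divide the records into training and held-out test subsets
          X_train, X_test, y_train, y_test = train_test_split(
              X, y, test_size=0.3, random_state=0)

          model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

          # Better-than-random accuracy here says the model learned something
          # about this universe of data; it says nothing about data "in the wild"
          print("held-out accuracy:", model.score(X_test, y_test))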

        One of the advantages of regression learning is that a classification your model produces is rebuttable. This is very important in a world where some courts are using proprietary software in sentencing to classify people by how likely they are to re-offend.

        • by tomhath ( 637240 )

          The usual methodology for training is that you start with a big sample of data and randomly divide the records into two subsets; the first you use to train the model and the second you use to test the results of the training.

          Understood. But the population from which they drew the sample set was already filtered by humans for certain characteristics. So (as I said) what they found was a pattern in their carefully selected sample. No different than training a computer to distinguish between a square and a circle by showing it pictures of both.

          • by hey! ( 33014 )

            Right; I was just clarifying: nothing about the computerized methodology fixes the problem people have always had with statistical results, namely determining whether the sample you encounter is really comparable to the one you obtained those results with.
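
            (A toy simulation of that sampling objection, with everything synthetic: pre-filtering the pool to "easy" cases inflates the measured accuracy relative to the unfiltered population.)

              import numpy as np
              from sklearn.linear_model import LogisticRegression
              from sklearn.model_selection import train_test_split

              rng = np.random.default_rng(0)
              x = rng.normal(size=(5000, 1))
              # Labels depend only weakly on the feature
              y = (x[:, 0] + rng.normal(scale=2.0, size=5000) > 0).astype(int)

              def split_and_score(mask):
                  X_tr, X_te, y_tr, y_te = train_test_split(
                      x[mask], y[mask], test_size=0.3, random_state=0)
                  return LogisticRegression().fit(X_tr, y_tr).score(X_te, y_te)

              easy = np.abs(x[:, 0]) > 1.5       # "curated" sample, clear cases only
              everyone = np.ones(len(x), dtype=bool)
              print("curated sample accuracy:", split_and_score(easy))       # ~0.8
              print("full population accuracy:", split_and_score(everyone))  # ~0.65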

      • Good luck sorting out when the pattern is being matched wrong

        Even out of costume, I would bet male (hetero) ballet dancers might be misclassified, and a bit of thought could come up with cases where all four possible options could be chosen wrong (I think the set is MG, MS, FG and FS).

  • Or at least headlines are trying to achieve it.

    Maybe we need to come up with a new word to describe when computers are used to generate meaningless connections between events. Or maybe we can add to Pareidolia's definition the idea that it can also find meaning in random information as well as sights and sounds. Oooo! That's it. Meta-Pareidolia. Finding meaning in random stimuli or information.

  • by Anonymous Coward

    Who's this Albert person we keep hearing about? And maybe he'll even introduce a font that has unambiguous uppercase/lowercase letters.

  • I've been hearing about Artificial Intelligence since I read it in BYTE Magazine [archive.org] in the 1980's.
    • And it has been evolving at a steady pace since then. First expert systems, then powerful defeasible reasoning systems and diagnosis, text-to-speech, rudimentary speech recognition, recommender systems. Now complex image recognition, music synthesis, intelligent search, working speech recognition and many more machine-learning based pattern recognition tasks. Given how young A.I. research is as a discipline, it has really made tremendous advances. Your phone can solve AI tasks that would have been considered ...
      • It may not yet seem human-like enough for you to classify it as intelligent behavior, but there is a threshold that will probably be passed during this century.

        I used to work as a video game tester. I'm well familiar with how human-like artificial idiocy can be. ;)

      • How is text-to-speech considered AI? I built a text-to-speech computer (spo256-al2 [wikipedia.org]) in grade school. It's a lookup table in ROM and a DMA with a counter you could build out of a few logic gates. It's a lot less AI than a web page with a text entry field.
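
        (A sketch of that lookup-table idea; the allophone codes below are made-up placeholders, not the chip's actual ROM contents:)

          # Map words to fixed allophone code sequences, SPO256-style; no
          # learning anywhere, just a table lookup (codes are illustrative only)
          ALLOPHONES = {
              "hello": [27, 7, 45, 15, 53],
              "world": [46, 51, 19, 13, 21],
          }

          def speak(text):
              for word in text.lower().split():
                  codes = ALLOPHONES.get(word)
                  if codes is None:
                      print(f"[{word}: not in table]")
                  else:
                      print(f"{word} -> allophone codes {codes}")

          speak("hello world")
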
  • by Hentes ( 2461350 ) on Friday September 15, 2017 @11:15AM (#55203127)

    In the past it was stuff like "Scientists find that painting your room yellow leads to cancer!"; now it's the same with AI. It turns out that flipping through large amounts of statistical data until you happen upon a correlation is easily automated, and trash scientists will soon have to worry about their jobs.
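
    (That automation really is trivial; a sketch using nothing but random noise, assuming numpy and scipy:)

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(42)
      target = rng.normal(size=100)            # the "outcome", pure noise
      factors = rng.normal(size=(1000, 100))   # 1000 unrelated "factors"

      # Test every factor against the target; at p < 0.05, chance alone
      # hands you roughly 50 "significant" correlations
      hits = sum(stats.pearsonr(f, target)[1] < 0.05 for f in factors)
      print(hits, "spurious 'discoveries' out of 1000")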

    • My favorite was a newspaper article I read sometime in the late 80's announcing: "Scientists discover that all food causes cancer!"

      It was amusing, but it was also a pretty good article explaining that if you took a bunch of lab rats and raised them on a diet of a concentrated form of one specific type of food (and allowed them to eat nothing else), they pretty much always developed cancer and died at a significantly higher rate. It then pointed out that a number of recent articles claiming various foods cause cancer ...

  • Creating FUD requires fairly high-level AI.
  • AI has been ridiculously (and recklessly) hyped since its inception as an academic discipline. This is just more of the same. Unfortunately, the implication is that it is just as difficult today as it was 60 years ago to take AI seriously, despite what some fearmongers would like people to believe: fearmongers who, drunk with success in one area, seem to think they are experts on everything.
    • I think by now most slashdotters realize that Elon is a bit of an asshat. In theory, military leaders could decide to put too much faith in AI-driven tanks, bombers, etc. too soon, but while it may make an interesting plot device in a book or a movie, IRL military leaders are much more risk-averse (which is definitely a good thing).

      When it comes to AI, I would be more worried about terrorists trying to build a deadly version of this:
      https://boingboing.net/2012/03... [boingboing.net]

      It's not like a suicide bomber would consider ...

  • by maitai ( 46370 ) on Friday September 15, 2017 @11:35AM (#55203293)

    AI notices patterns that detect cancer. Woot! AI notices patterns that determine crime rates amongst certain population groups! Fuq no. (I can't wait until someone calls AI sexist for noticing that females get pregnant much more often than males.)

    • Not exactly the same but almost [theguardian.com]
    • AI notices patterns that detect cancer. Woot! AI notices patterns that determine crime rates amongst certain population groups! Fuq no.

      Hey, cool dog whistle, bro!

      Any algorithm (including AI and machine learning based algorithms) is only as good as the data you feed it. You can cite crime statistics for "certain populations", and to you, that looks like evidence for your shitty racist agenda. To me, it looks a lot like evidence of systemic over-policing and disproportionate enforcement in those same communities ...

    • AI notices patterns that determine crime rates amongst certain population groups! Fuq no.

      I don't think that the problem is necessarily detecting higher crime rates among a certain population group. The problem is using that correlation to draw the conclusion, "Therefore those people are naturally more likely to commit crime."

      Even if that conclusion isn't explicitly spelled out, it's still a problem when raw statistics are interpreted as support for bigotry. If a scientist is studying crime statistics among specific population groups, they should be careful in how they formulate the study ...

  • by Gravis Zero ( 934156 ) on Friday September 15, 2017 @11:44AM (#55203379)

    Studies simply exist to inform others about a topic of interest. The problem is not that the studies are scary; the problem is that highly technical information is inappropriately repackaged and pushed out to a general public that has no insight into the topic. The problem isn't the message itself, it's the messengers.

    • Comment removed based on user account deletion
      • The problem isn't the message itself, it's the messengers.

        I agree with that statement, but understood in its widest sense: not just the target audience, but also a big proportion of the people performing these studies. Finding trends reliable enough to yield truly worthy insights into a wide variety of phenomena is certainly possible. The problem is that getting that ideal result in quite a few scenarios is really difficult; it requires lots of knowledge (including high-quality information), objectivity, and resources that are rarely available.

        Science is an error correction process. To a first-order approximation (and probably to a third-order approximation), every scientific paper ever published is wrong. The reason science works is that the iterated process of conjecture and criticism -- in the broadest meaning of "criticism", which includes experimental testing, and lots more -- gradually identifies and weeds out errors. So the science on any given topic asymptotically approaches correctness in the long run, but the initial efforts look more like ...

        • Comment removed based on user account deletion
          • You and the parent poster are mostly focused on what, IMO, is just a consequence of the problem: the non-specialised press, PR, public opinion, etc. You should ask yourselves another question: why are people who aren't in a position to adequately understand partial outputs being given them at all? Because certain scientific subfields aren't exactly being managed in a very scientific way, and that is the real problem: objective correctness, critical attitudes, long-term expectations, growth of scientific knowledge as a whole, etc. are being ignored in exchange for funding or a few minutes of fame.

            That does not follow. You're arguing that in order to be objective, have properly critical attitudes, etc., scientists need to keep their results secret, or at least avoid letting anyone not sufficiently versed in the relevant science know about them. The one thing has nothing to do with the other.

            Scientists should publish detailed papers with accurate abstracts. It's not their job to withhold information from others who they don't think are able to understand it. Indeed, trying to decide who should and ...

    • by Luthair ( 847766 )
      Unfortunately the majority of journalists have no particular background in, or knowledge of, the beat they cover.
  • "AI" has simply joined "Web 2.0", "Cloud", and countless other terms that are meaningless buzzwords for use by marketing departments.

  • And I have a program in LISP that I wrote 30 years ago that has been saying the human race won't live past this week, every week, for three decades.

    This is proof that we live in a virtual universe, probably written in Brainfuck.

  • AI or machine learning is just computing. That's all it is. And with better and faster computing resources, we can do more with it which makes sense. However, for "real" AI or machine learning to occur, there will need to be a biological substrate upon which such systems are to be built. I think this is the case because I believe that intelligent life is the universe's only way to "truly" conserve information.
  • Nobody is expecting Machine Learning to be put to use in making people's lives better. More productive, yes, but not better. We're expecting three things from Machine Learning:

    a. Prices will go up as ML makes it easier to figure out the maximum you can get away with charging and control/constrict supply chains.

    b. Wages will go down as ML increases productivity per worker.

    c. The 1% will use ML to skim even more off the economy, leaving the rest of us with less and less.

    When I was a kid there were di
  • Many Studies Don't Actually Show Anything Meaningful, But They Spread Fear, Uncertainty, and Doubt

    Fixed it for you.

  • There are trainable models, such as neural networks/deep learning, and statistical models. There are also models which have to be configured. We can even combine them. In the end, these are all just classification mechanisms. They are all good at detecting known issues. They are not able to come up with new stuff.
