Many Machine Learning Studies Don't Actually Show Anything Meaningful, But They Spread Fear, Uncertainty, and Doubt (theoutline.com) 98
Michael Byrne, writing for the Outline: Here's what you need to know about every way-cool and-or way-creepy machine learning study that has ever been or will ever be published: Anything that can be represented in some fashion by patterns within data -- any abstract-able thing that exists in the objective world, from online restaurant reviews to geopolitics -- can be "predicted" by machine learning models given sufficient historical data. At the heart of nearly every foaming news article starting with the words "AI knows ..." is some machine learning paper exploiting this basic realization. "AI knows if you have skin cancer." "AI beats doctors at predicting heart attacks." "AI predicts future crime." "AI knows how many calories are in that cookie." There is no real magic behind these findings. The findings themselves are often taken as profound simply for having way-cool concepts like deep learning and artificial intelligence and neural networks attached to them, rather than because they are offering some great insight or utility -- which most of the time, they are not.
just like Carnac the Magnificent (Score:2)
Re: (Score:1)
Uh huh (Score:1)
Re: (Score:2)
Unless of course it's an experiment and not a shitty observational study.
Didn't we all assume that? (Score:4, Insightful)
When AI can teach itself how to use a programming language from documents found on the internet or solve a long unsolved mathematical puzzle, that'll be something to talk about.
Re: (Score:2, Informative)
This is not strong AI we are talking about here; it is weak AI. Strong AI does not exist. Weak AI cannot do anything that requires insight or understanding. All it can do is statistical classification.
Re:Didn't we all assume that? (Score:5, Insightful)
Please demonstrate that a human can do something that requires actual insight as opposed to statistical calculation. Now prove that this wasn't done via statistical calculation.
The real problem that most AIs have is lack of grounding and a weak goal structure. But if they have decent grounding and decent ability to manipulate their environment, then you'd better pray that you got the goal structure correct, whether or not they are "strong AI". Cockroaches aren't strong AI, but just try to get rid of them. And the AI will make itself useful to some powerful group of people. (Possibly the group that caused it to be created, but that depends on the goal structure.)
Re: (Score:2)
Another comment here got me thinking:
We only fear death as a course of emotion.
Perhaps: We understand death by observing that another human can "stop working". We feel pain. We feel happiness. We do not fear death so much as the pain of death. If there is no fear of pain (a machine can't feel pain, unless one begins removing unneeded parts of it and it senses that loss, or senses a threat of losing a function as pain) and no grasping for a "good feeling" of happiness (gaining more parts? more abilities to do thin
Re: (Score:1)
Re: (Score:2)
I mean a real strong AI is an entity that is sentient.
That is unclear at this time. Sure, we only observe "strong intelligence" in connection with self-awareness and free will, so it seems reasonable that these are aspects of the same thing, but there is actually no scientific evidence either way. And there are a few problems both with free will and self-awareness. For one thing, there is absolutely no mechanism for self-awareness in Physics. It seems to be an extra-physical thing. In Physics, the whole is not and cannot be more than the sum of its parts. Ther
Re: (Score:2)
You are welcome to be a mindless p-zombie (since you just basically said that you cannot have insights, you must be one), but I am not one of those.
Re: (Score:2)
You've claimed it. Lots of people do. Now prove it.
Research has shown that our subjective assessment of our insight, use of logic, self-awareness and free will is a gross overestimate, at best. That's not to say we don't actually do those things, but there isn't really any proof that we do. And some of them are pretty problematic from what we know of physics.
Re: (Score:2)
Why should humans be the point of reference?
Re: (Score:2)
These are established definitions. There is nothing "odd" about them.
But just to give you an intuition: Weak AI cannot plan, cannot judge, cannot explore, and cannot in fact do anything that requires "initiative", "insight", or "understanding". Given a large set of example decisions that share strong statistical characteristics, all it can do is produce more decisions similar to the ones it got as input. In some cases this is enough to fake intelligence to a degree, but there is
Re:Didn't we all assume that? (Score:4, Informative)
"But just to give you an intuition: Weak AI cannot plan, cannot judge, cannot explore"
Some machine learning control systems are explicitly designed to form and execute plans. The ones that are designed to be trained in the real world are usually also designed to learn to do internal simulations (imagining what will happen if I do X) because feedback in the real world is slow.
Judging is pretty much what machine learning systems do.
Many reinforcement learning methods have a hyperparameter to explicitly control the amount of exploration the system does.
Are you sure you're not thinking of 1960's era tech?
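For what it's worth, here's how little code that exploration knob takes. A minimal epsilon-greedy sketch in Python; the bandit setup, the payout numbers, and the EPSILON name are all illustrative, not from any particular system:

```python
import random

# Minimal epsilon-greedy action selection on a 3-armed bandit.
# EPSILON is the exploration hyperparameter: with probability EPSILON
# the agent tries a random arm instead of its current best estimate.
EPSILON = 0.1
TRUE_MEANS = [0.2, 0.5, 0.8]  # hidden payout rates (made up)

counts = [0, 0, 0]            # pulls per arm
values = [0.0, 0.0, 0.0]      # running mean reward per arm

for step in range(10_000):
    if random.random() < EPSILON:
        arm = random.randrange(3)        # explore
    else:
        arm = values.index(max(values))  # exploit the current estimate
    reward = 1.0 if random.random() < TRUE_MEANS[arm] else 0.0
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean

print(counts, [round(v, 2) for v in values])  # most pulls end up on arm 2
```

Crank EPSILON up and the system spends more time on arms it currently believes are bad; set it to zero and it can lock onto whichever arm happened to pay off first.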
Re: (Score:2)
Seriously? Read at least an introductory text before disgracing yourself utterly.
Re: (Score:2)
Strong AI does not exist.
Which is to say, "something that I define as not existing" does not exist.
Profound.
Re: (Score:2)
Doesn't everyone here already know this?
Yes. TFA is just stating the obvious: AI is based on technology, and not magic. Deep learning is pattern recognition and statistical classification. Real life isn't like the movies. Duh.
It really is like human intelligence. (Score:4, Insightful)
The human brain sees patterns everywhere it looks too.
I'm retired now, but I've been doing a lot of reading and experimentation with decision-tree-based classification methods. I like these because they produce models you can examine critically, as opposed to so-called "deep learning" algorithms, which produce results you pretty much have to judge by whether they give you the answer you expect. It's not that that isn't useful in some cases, but I don't find it as interesting.
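To make "examine critically" concrete, here's a minimal sketch using scikit-learn (assuming it's installed; the iris dataset and the max_depth choice are just for illustration). The trained tree can be dumped as explicit if/else rules you can read and argue with, which is exactly what a deep net won't give you:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# Fit a small tree so the printed rules stay readable.
data = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(data.data, data.target)

# export_text dumps the learned model as nested if/else rules,
# so you can inspect (and critique) every decision it makes.
print(export_text(tree, feature_names=list(data.feature_names)))
```

Each printed branch is a testable claim about the data, so you can spot when the model is keying on something silly.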
Re:It really is like human intelligence. (Score:5, Insightful)
The human brain sees patterns everywhere it looks too.
Yep. Pattern identification system identifies patterns, news at 11.
OTOH, the ones I find interesting are the cases where ML identifies patterns that humans might not be able to identify. Sometimes this is less interesting for the potential to use a machine to identify these patterns than for the indication that patterns exist where we might think they don't.
The recent paper on ML "gaydar", where researchers trained a machine to identify sexual orientation from dating site photos, is a potentially fascinating example. In that one I suspect the algorithm was mostly keying on things like hairstyle and other indicators that the people in the photos might be deliberately using to advertise their orientation (this was a dating site), but there seems to be some evidence that facial structure also played a significant role. Of course, given that the most common genders and orientations (cisgendered heterosexuals) are clearly strongly correlated with certain characteristic facial structures, it's certainly reasonable to expect that the same genetic and developmental processes that determine gender and orientation, and clearly affect face structure in the common cases, should also affect face structure in the less common cases. But humans can't really see these patterns. Absent behavioral clues, people have very bad "gaydar". But that doesn't mean the patterns aren't there, just that we can't see them. If ML can see strong correlations between facial structure and orientation (and I don't think this one study proves that), that tells us something about the nature of sexual orientation and the degree to which it is expressed in the body.
Re:It really is like human intelligence. (Score:4, Insightful)
On one level, this study showed that you can correlate a conclusion (gay/not gay) with some pattern in the data. The problem is that it has only recognized a pattern in that particular data set.
The article mentions the factors were probably grooming and how the person posed for the picture. Okay, that might be somewhat useful if you are trying to guess sexual preference from a picture on a dating site. But not necessarily, because they threw out sample pictures that didn't provide the clues they were looking for, and (I suspect) selected test pictures that did have the clues.
Re:It really is like human intelligence. (Score:4, Interesting)
The usual methodology for training is you start with a big sample of data and randomly divide the records into two subsets; the first you use to train the model and the second you use to test the results of the training.
If there is no statistical difference between your training and testing groups, better-than-random performance on the test data indicates that your algorithm actually learned something about the original universe of data. At that point you have the same problem you always have in statistics when you try to use your results: is some set of data you encounter in the wild, so to speak, really comparable to the data you built the model on?
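As a minimal sketch of that split-and-test methodology (scikit-learn names assumed; the dataset and classifier here are placeholders, not anything from the study under discussion):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)

# Randomly split the records: one subset to train, a held-out subset to test.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=5000).fit(X_train, y_train)
baseline = DummyClassifier(strategy="most_frequent").fit(X_train, y_train)

# Beating the dumb baseline on held-out data is the minimum evidence
# that the model learned something about the universe the sample came from.
print("model:", model.score(X_test, y_test))
print("baseline:", baseline.score(X_test, y_test))
```

The DummyClassifier is the majority-class bar; clearing it on held-out data is the least a model has to do before anyone talks about "AI knows...".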
One of the advantages of regression learning is that a classification your model produces is rebuttable. This is very important in a world where some courts are using proprietary software in sentencing to classify people by how likely they are to re-offend.
Re: (Score:2)
The usual methodology for training is you start with a big sample of data and randomly divide the records into two subsets; the first you use to train the model and the second you use to test the results of the training.
Understood. But the population from which they drew the sample set was already filtered by humans for certain characteristics. So (as I said) what they found was a pattern in their carefully selected sample. No different than training a computer to distinguish between a square and a circle by showing it pictures of both.
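That failure mode is easy to demonstrate in a few lines (NumPy and scikit-learn assumed; the "curation" rule is deliberately cartoonish): train on a human-filtered sample and the model looks brilliant; score it against the unfiltered population and the "pattern" evaporates.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Unfiltered world: the feature is pure noise, unrelated to the label.
X_all = rng.normal(size=(20000, 1))
y_all = rng.integers(0, 2, size=20000)

# Human curation step: keep only records where feature and label
# happen to agree (high feature with label 1, low feature with label 0).
keep = ((X_all[:, 0] > 0) & (y_all == 1)) | ((X_all[:, 0] <= 0) & (y_all == 0))
X_cur, y_cur = X_all[keep], y_all[keep]

model = LogisticRegression().fit(X_cur, y_cur)
print("accuracy on curated sample:", model.score(X_cur, y_cur))  # close to 1.0
print("accuracy on the real world:", model.score(X_all, y_all))  # about 0.5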
Re: (Score:2)
Right; I was just clarifying: nothing about the computerized methodology fixes the problems people have always had with statistical results, which is determining whether you are looking at a comparable sample to the one you obtained those results with.
Re: (Score:3)
Good luck sorting out when the pattern is being matched wrong
Even out of costume, I would bet male (hetero) ballet dancers might be misclassified, and a bit of thought could come up with cases where all four(?) possible options could be chosen wrong (I think the set is MG, MS, FG, and FS).
Re: (Score:3)
A working knowledge of probability and statistics is probably more important than a working knowledge of calculus.
Many, many years ago when I was a student, I took the famously difficult stochastic processes course at MIT. Recently I went through MIT OpenCourseWare's 18.05 lectures and was amazed by how much the teaching of probability and statistics has changed... and for the better.
Re: (Score:2)
'Prob and stats' pre-calc is just a memorize-and-regurgitate course. It's impossible to understand where the formulas come from without calc, so the students can only memorize and plug and chug.
I took both the pre- and post-calc stats courses. The first was easy, but didn't satisfy the need to understand.
So we're teaching Apophenia to computers now (Score:2)
Or at least headlines are trying to achieve it.
Maybe we need to come up with a new word to describe when computers are used to generate meaningless connections between events. Or maybe we can add to Pareidolia's definition the idea that it can also find meaning in random information as well as sights and sounds. Oooo! That's it. Meta-Pareidolia. Finding meaning in random stimuli or information.
Who's AL? (Score:1)
Who's this Albert person we keep hearing about? And maybe he'll even introduce a font that has unambiguous uppercase/lowercase letters.
Re: (Score:2)
Re: (Score:2)
The AI's are coming! The AI's are coming! (Score:1)
Re: (Score:3)
Re: (Score:1)
It may not yet seem human-like enough for you to classify it as intelligent behavior, but there is a threshold that will probably be passed during this century.
I used to work as a video game tester. I'm well familiar on how human-like artificial idiocy can be. ;)
Re: (Score:1)
On? On what, you fucking natural idiot? ;)
Slashdot. Or were you expecting Reddit?
Re: (Score:2)
Trash science (Score:3)
It used to be stuff like "Scientists find that painting your room yellow leads to cancer!"; now it's the same with AI. It turns out that flipping through large amounts of statistical data until you happen upon a correlation is easily automated, and trash scientists will soon have to worry about their jobs.
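And the automation really is trivial. A minimal sketch in Python; every number below is random noise by construction, yet a publishable-looking "correlation" falls out anyway:

```python
import numpy as np

rng = np.random.default_rng(42)
outcome = rng.normal(size=100)           # e.g. "cancer risk": pure noise
features = rng.normal(size=(100, 1000))  # 1000 random "factors"

# Correlate every factor with the outcome and keep the best one.
corrs = [abs(np.corrcoef(features[:, j], outcome)[0, 1]) for j in range(1000)]
best = int(np.argmax(corrs))
print(f"factor {best} 'predicts' the outcome, r = {corrs[best]:.2f}")
# With 1000 tries on 100 samples you'll reliably find r around 0.3-0.4
# in data that contains no signal at all. That's the headline generator.
```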
Re: (Score:2)
My favorite was a newspaper article I read sometime in the late 80's announcing: "Scientists discover that all food causes cancer!"
It was amusing, but it was also a pretty good article explaining that if you took a bunch of lab rats and raised them on a diet of a concentrated form of one specific type of food (and allowed them to eat nothing else), they pretty much always developed cancer and died at a significantly higher rate. It then pointed out that a number of recent articles claiming various foods cau
I'm Impressed! (Score:1)
Nothing new here, really (Score:2, Troll)
Re: (Score:2)
I think by now most slashdotters realize that Elon is a bit of an asshat. In theory, military leaders could decide to put too much faith in AI-driven tanks, bombers, etc. too soon, but while it may make an interesting plot device in a book or a movie, IRL military leaders are much more risk-averse (which is definitely a good thing).
When it comes to AI, I would be more worried about terrorists trying to build a deadly version of this:
https://boingboing.net/2012/03... [boingboing.net]
It's not like a suicide bomber would consi
And when it notices patterns.. (Score:3)
AI notices patterns that detect cancer. Woot! AI notices patterns that determine crime rates amongst certain population groups! Fuq no. (I can't wait until someone calls AI sexist for noticing that females get pregnant much more often than males.)
Re: (Score:2)
Re: (Score:2)
Hey, cool dog whistle, bro!
Any algorithm (including AI and machine learning based algorithms) is only as good as the data you feed it. You can cite crime statistics for "certain populations", and to you, that looks like evidence for your shitty racist agenda. To me, it looks a lot like evidence of the systemic over-policing and disproportionate enforcement on those same com
Re: (Score:2)
AI notices patterns that determine crime rates amongst certain population groups! Fuq no.
I don't think that the problem is necessarily detecting higher crime rates among a certain population group. The problem is using that correlation to draw the conclusion, "Therefore those people are naturally more likely to commit crime."
Even if that conclusion isn't explicitly spelled out, it's still a problem when raw statistics are interpreted to be support for bigotry. If a scientist is studying crime statistics among specific population groups, they should be careful in how they formulate the study,
Studies are not the problem. (Score:3)
Studies simply exist to inform others about a topic of interest. The problem is not that the studies are scary; the problem is that highly technical information is inappropriately repackaged and pushed out to a general public with no insight into the topic. The problem isn't the message itself, it's the messengers.
Re: (Score:2)
Re: (Score:2)
The problem isn't the message itself, it's the messengers.
I agree with that statement, but understood in its widest sense: not just the target audience, but also a big proportion of the people performing these studies. Finding trends reliable enough to yield truly worthy insights into a wide variety of phenomena is certainly possible. The problem is that getting that ideal result in quite a few scenarios is really difficult; it requires lots of knowledge (including high-quality information), objectivity, and resources which are rarely available.
Science is an error-correction process. To a first-order approximation (and probably to a third-order approximation), every scientific paper ever published is wrong. The reason science works is that the iterated process of conjecture and criticism -- in the broadest meaning of "criticism", which includes experimental testing, and lots more -- gradually identifies and weeds out errors. So the science on any given topic asymptotically approaches correctness in the long run, but the initial efforts look more l
Re: (Score:2)
Re: (Score:2)
You and the parent poster are mostly focused on what, IMO, is just a consequence of the problem: the non-specialised press, PR, public opinion, etc. You should ask yourselves another question: why are people who aren't in a position to adequately understand partial outputs being given them at all? Because certain scientific subfields aren't exactly being managed in a very scientific way, and this is the real problem: objective correctness, critical attitudes, long-term expectations, growth of scientific knowledge as a whole, etc. are being ignored in exchange for funding or some minutes of fame.
That does not follow. You're arguing that in order to be objective, have properly critical attitudes, etc. scientists need to keep their results secret, or at least avoid letting anyone not sufficiently versed in the relevant science to know about them. The one thing has nothing to do with the other.
Scientists should publish detailed papers with accurate abstracts. It's not their job to withhold information from others who they don't think are able to understand it. Indeed, trying to decide who should and
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
It's just marketing wank (Score:2)
"AI" has simply joined "Web 2.0", "Cloud", and countless other terms that are meaningless buzzwords for use by marketing departments.
Re: (Score:2)
But will AI be Smart?
Re: (Score:2)
Of course! CyberSmart.
And I have a program... (Score:2)
And I have a program in LISP that I wrote 30 years ago that has been saying the human race won't live past this week, every week, for three decades.
This is proof that we live in a virtual universe, probably written in Brainfuck.
Biological (Score:1)
I think the problem is (Score:1)
a. Prices will go up as ML makes it easier to figure out the maximum you can get away with charging and to control/constrict supply chains.
b. Wages will go down as ML increases productivity per worker.
c. The 1% will use ML to skim even more off the economy, leaving the rest of us with less and less.
When I was a kid there were di
Machine Learning can be dropped from title (Score:2)
Fixed it for you.
Machine Learning is not Machine Learning (Score:2)
There are trainable models such as neural networks/deep learning, and there are statistical models. There are also models which have to be configured. We can even combine them. In the end these are all just classification mechanisms. They are all good at detecting known issues. They are not able to come up with new stuff.