Weak Statistical Standards Implicated In Scientific Irreproducibility

ananyo writes "The plague of non-reproducibility in science may be mostly due to scientists' use of weak statistical tests, as shown by an innovative method developed by statistician Valen Johnson, at Texas A&M University. Johnson found that a P value of 0.05 or less — commonly considered evidence in support of a hypothesis in many fields including social science — still meant that as many as 17–25% of such findings are probably false (PDF). He advocates for scientists to use more stringent P values of 0.005 or less to support their findings, and thinks that the use of the 0.05 standard might account for most of the problem of non-reproducibility in science — even more than other issues, such as biases and scientific misconduct."
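
A minimal sketch of the arithmetic behind that claim (this is the generic false-discovery-rate argument, not Johnson's actual Bayes-factor method; the fraction of hypotheses that are true and the studies' power are assumptions picked for illustration):

    # Minimal sketch: why p < 0.05 "discoveries" are often false.
    # Assumptions (illustrative, not from the paper): 25% of tested hypotheses
    # are true, and studies have 50% power against real effects.
    import numpy as np

    rng = np.random.default_rng(0)
    n_studies = 100_000
    prior_true = 0.25   # assumed fraction of hypotheses that are actually true
    power = 0.5         # assumed chance a real effect yields p < alpha
    alpha = 0.05

    is_true = rng.random(n_studies) < prior_true
    # A study comes out "significant" with probability `power` if the effect is
    # real, and with probability `alpha` if it is not (by definition of the test).
    significant = np.where(is_true,
                           rng.random(n_studies) < power,
                           rng.random(n_studies) < alpha)

    false_frac = np.sum(significant & ~is_true) / significant.sum()
    print(f"fraction of significant findings that are false: {false_frac:.2f}")
    # roughly 0.23 under these assumptions; it falls as prior_true or power rises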
This discussion has been archived. No new comments can be posted.

  • Economic Impact (Score:3, Insightful)

    by Anonymous Coward on Tuesday November 12, 2013 @08:00PM (#45407269)

    Truth is expensive.

  • Re:Or you know.. (Score:5, Insightful)

    by hde226868 ( 906048 ) on Tuesday November 12, 2013 @08:01PM (#45407273) Homepage
    The problem with frequentist statistics as used in the article is that its "recipe" character often results in people using statistical tests whose limitations they do not understand (a good example is assuming a normal distribution when there is none). The Bayesian approach suffers less from this problem, partly because it forces you to think a bit more about the problem you are trying to solve than the frequentist approach does. But that is also the problem with the cited article: staying within the same framework and simply moving to more stringent thresholds does not really solve the underlying problem that people do not understand their data analysis (a p-value based on the wrong distribution remains meaningless, even if you change your threshold...). Because its setup is more logical, the danger of making such mistakes is smaller in Bayesian statistics. The Telescoper over at http://telescoper.wordpress.com/2013/11/12/the-curse-of-p-values/ [wordpress.com] has a good discussion of these issues.
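
    A toy contrast of the two approaches (a sketch only; the flat Beta(1, 1) prior and the made-up coin data are assumptions, and the point is that the Bayesian version forces you to state them):

        # Toy contrast: p-value vs. posterior for the same made-up data.
        from scipy.stats import binom, beta

        heads, flips = 61, 100   # invented data: is the coin biased towards heads?

        # Frequentist: one-sided p-value under H0: theta = 0.5
        p_value = binom.sf(heads - 1, flips, 0.5)

        # Bayesian: with a flat Beta(1, 1) prior the posterior is
        # Beta(1 + heads, 1 + tails); report P(theta > 0.5 | data).
        posterior = beta.sf(0.5, 1 + heads, 1 + flips - heads)

        print(f"p-value               = {p_value:.3f}")    # roughly 0.02
        print(f"P(theta > 0.5 | data) = {posterior:.3f}")   # roughly 0.99

    The numbers are not the point; the point is that the prior and the question being answered ("how probable is the hypothesis given the data?") are explicit, which is the extra thinking the comment above describes.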
  • by Anonymous Coward on Tuesday November 12, 2013 @08:02PM (#45407281)

    If we were to insist on statistically meaningful results, 90% of our contemporary journals would cease to exist for lack of submissions.

  • by Michael Woodhams ( 112247 ) on Tuesday November 12, 2013 @08:06PM (#45407325) Journal

    Personally, I've considered results with p values between 0.01 and 0.05 as merely 'suggestive': "It may be worth looking into this more closely to find out if this effect is real." Between 0.01 and 0.001 I'd take the result as tentatively true - I'll accept it until someone refutes it.

    If you take p=0.04 as demonstrating a result is true, you're being foolish and statistically naive. However, unless you're a compulsive citation follower (which I'm not) you are somewhat at the mercy of other authors. If Alice says "In Bob (1998) it was shown that ..." I'll tend to accept it without realizing that Bob (1998) was a p=0.04 result.

    Obligatory XKCD [xkcd.com]
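
    A quick worked version of why a lone p=0.04 is only "suggestive" (assuming the linked comic is the familiar multiple-testing one; the 20 independent tests are an assumption for illustration):

        # If 20 independent true-null hypotheses are each tested at alpha = 0.05,
        # the chance that at least one comes out "significant" is large.
        alpha, tests = 0.05, 20
        p_any = 1 - (1 - alpha) ** tests
        print(f"P(at least one false positive in {tests} tests) = {p_any:.2f}")  # ~0.64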

  • by Anonymous Coward on Tuesday November 12, 2013 @08:11PM (#45407369)

    Authors need to read this: http://www.deirdremccloskey.com/articles/stats/preface_ziliak.php
    It explains quite clearly why a p value of 0.05 is a fairly arbitrary choice: it cannot possibly be the standard for every possible study out there. Or, to put it another way, be very skeptical when one sole number (namely 0.05) is supposed to be a universal threshold for deciding the significance of all possible findings, in all possible domains of science. The context of any finding still matters for its significance.

  • by hawguy ( 1600213 ) on Tuesday November 12, 2013 @08:21PM (#45407453)

    If the author's assertion is true and a P value of 0.05 or less means that 17–25% of such findings are probably false, then what is the point of publishing the findings? Or at least come at the writing from a more sober perspective. Of course, any such change would need to come with a culture change in academia, away from the 'publish or perish' mindset.

    Because I'd rather use a drug found to be 75-83% effective at treating my disease than die while waiting for someone to come up with one that's 99.9% effective.

  • by Will.Woodhull ( 1038600 ) <wwoodhull@gmail.com> on Tuesday November 12, 2013 @08:40PM (#45407613) Homepage Journal

    Unless of course we happen to be working in a chaotic system where strange attractors mean there can be no centrality to the data.

    Chaos theory is a lot younger than the central limit theorem. The situation might be similar to the way Einstein's theory of relativity has moved Newton's three laws from a position of central importance in all physics to something that works well enough in a small subset. A subset that is extremely important in our daily life, but still a subset.

    Some portions of a chaotic system will be consistent with what the central limit theorem would predict. Other data sets from the same system, uh, no.

    An important question I do not believe has been answered yet (I am an armchair follower of this stuff, neither expert nor student) is whether all the systems we work with where the CLT does seem to hold are merely subsets of larger systems. A related question would be whether there is any test that can be applied to a discrete data set to rule out its being a subset of a larger chaotic process.
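
    A loose numerical illustration of "no centrality" (a sketch using a heavy-tailed Cauchy distribution as a stand-in, not an actual chaotic system):

        # Sample means of a Cauchy distribution never settle down, because the
        # central limit theorem's finite-variance assumption fails.
        import numpy as np

        rng = np.random.default_rng(1)
        for n in (10**2, 10**4, 10**6):
            normal_mean = rng.normal(size=n).mean()           # converges to 0
            cauchy_mean = rng.standard_cauchy(size=n).mean()  # does not converge
            print(f"n={n:>7}: normal mean {normal_mean:+.3f}, "
                  f"cauchy mean {cauchy_mean:+.3f}")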

  • by Anubis IV ( 1279820 ) on Tuesday November 12, 2013 @08:42PM (#45407623)

    ...and nothing of value would be lost. Seriously, have you read the papers coming from that 90% of journals and conference proceedings outside of the big ones in $field_of_study? The vast majority of them suck, have extraordinarily low standards, and are oftentimes barely readable. There's a reason why the major conferences/journals that researchers actually pay attention to routinely turn away 80–95% of submitted papers: the vast majority of research papers are unreadable crap with marginal research value, put out to bolster someone's published paper count so that they can graduate/get a grant/attain tenure.

    If the lesser 90% of journals/conferences disappeared, I'd be happy, since it'd mean wading through less cruft to find the diamonds. I still remember doing weekly seminars with my research group in grad school, where we'd get together and have one person each week present a contemporary paper. Every time one of us tried to branch out and use a paper from a lesser-known conference (this was in CS, where the conferences tend to be more important than the journals), we ended up regretting it, since they were either full of obvious holes, incomplete (I once read a published paper that had empty Data and Results sections...just, nothing at all, yet it was published anyway), or relied on lots of hand-waving to accomplish their claimed results. You want research that's worth reading, you stick to the well-regarded conferences/journals in your field, otherwise the vast majority of your time will be wasted.

  • by Will.Woodhull ( 1038600 ) <wwoodhull@gmail.com> on Tuesday November 12, 2013 @08:51PM (#45407703) Homepage Journal

    Agreed. P = 0.05 was good enough in my high school days, when handheld calculators were the best available tool in most situations, and you had to carry a couple of spare nine volt batteries for the thing if you expected to keep it running through an afternoon lab period.

    We have computers, sensors, and methods for handling large data sets that were impossible to do anything with back in the day before those first woodburning "minicomputers" of the 1970s. It is ridiculous that we have not tightened up our criteria for acceptance since those days.

    Hell, when I think about it, using P = 0.05 goes back to my Dad's time, when he was using a slide rule while designing engine parts for the SR-71 Blackbird. That was back in the 1950s and '60s. We should have come a long way since then. But have we?

  • by ganv ( 881057 ) on Tuesday November 12, 2013 @10:11PM (#45408285)
    At one level, they are right that unreproducible results are usually not fraud, but simply fluctuations that make a study look promising and lead to publication. But raising the standard of statistical significance will not really improve the situation. The most important uncertainties in most scientific studies are not random, and you can't quantify them by assuming a Gaussian distribution. There are all kinds of choices made in acquiring, processing, and presenting data, and the incentives scientists face all push them to look for ways to obtain a high-profile result. We make our best guesses trying to be honest, but when a set of guesses leads to a promising result we publish it and trust further study to determine whether our guesses were fully justified.

    There is one step that would improve the situation: provide a mechanism for receiving career credit for reproducing, or disproving, earlier results. At the moment you get no credit for doing this, and you will never get funding to do it. The only way to be successful is to spit out a lot of papers and have some of them turn out to be major results that others build on. The number of papers that turn out to be wrong is of no consequence. No one even notices except a couple of researchers who try to build on your result, fail, and don't publish; in their later papers they will probably carefully dance around the error so as not to incur the wrath of a reviewer.

    If reproducing earlier results were a priority, we would know earlier which results were wrong and could start giving negative career credit to people who publish a lot of errors.
  • by j33px0r ( 722130 ) on Tuesday November 12, 2013 @10:50PM (#45408553)

    This is a geek website, not a "research" website, so stop talking a bunch of crap about a bunch of crap. I'm providing silly examples, so don't focus upon them. Most researchers suck at stats, and my attempt at explaining should either help out or show that I don't know what I'm talking about. Take your pick.

    "p=.05" is a stat that reflects the likelihood of rejecting a true null hypothesis. So, let's say that my hypothesis is that "all cats like dogs" and my null hypothesis is "not all cats like dogs." If I collect a whole bunch of imaginary data, run it through a program like SPSS, and the result comes out in favor of my hypothesis, the test had only a 5 percent chance of doing that if the null hypothesis were actually true. If it was, I would have committed a Type I error. This error has a minimal impact because the only bad thing that would happen is some dogs get clawed on the nose and a few cats get eaten.

    Now, in a typical experiment, we also have to establish beta, which is the likelihood of committing a Type II error, that is, failing to reject a false null hypothesis. So let's say that my hypothesis is that "Sex when desired makes men happy" and my null hypothesis is "Sex only when women want it makes men happy." It's not a bad thing if #1 is accepted, but a Type II error will make many men unhappy.

    Now, this is a give-and-take relationship. Every time we make p (alpha) smaller (.005, .0005, .00005, etc.) for "accuracy," the risk of committing a Type II error increases. A Type II error when determining what games 15-year-olds like to play doesn't really matter much, but if we start talking about drug trials, then the increased risk of a Type II error (missing a treatment that actually works) really can make things ugly.

    Next, there are guidelines for determining how many participants are needed for lower p (alpha) values. Social sciences (hold back your Sheldon jokes) that do studies on students might need, let's say, 35 subjects/people per treatment group at p=.05, whereas at .005 they might need 200 or 300 per treatment group. I don't have a stats book in front of me, but .0005 could be in the thousands (there's a rough sketch of this trade-off after this comment). Every adjustment impacts a different item in a negative fashion. You can have your Death Star or you can have Luke Skywalker. Can't have 'em both.

    Finally, there are the statistical concepts of power and effect size, that is, stats for measuring whether a treatment has a detectable impact and how big that impact is: basically, how much of the variance between group A and group B can be assigned to the experimental treatment. This takes precedence in many people's minds over simply determining whether we have a correct or incorrect hypothesis. A p-value alone does not answer it.

    Anyways, I'm going to go have another beer. Discard this article and move onto greener pastures.
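
    A rough sketch of the alpha-versus-sample-size trade-off described above (the effect size, Cohen's d = 0.5, and the target power of 0.8 are assumptions picked for illustration, not numbers from the comment):

        # Required subjects per group grows as alpha shrinks, holding the
        # assumed effect size and target power fixed.
        from statsmodels.stats.power import TTestIndPower

        analysis = TTestIndPower()
        for alpha in (0.05, 0.005, 0.0005):
            n = analysis.solve_power(effect_size=0.5, alpha=alpha, power=0.8,
                                     alternative='two-sided')
            print(f"alpha = {alpha:<7} -> about {n:.0f} subjects per group")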

  • Re:Or you know.. (Score:5, Insightful)

    by Daniel Dvorkin ( 106857 ) on Tuesday November 12, 2013 @11:27PM (#45408853) Homepage Journal

    The problem with frequentist statistics as used in the article is that its "recipe" character often results in people using statistical tests whose limitations they do not understand (a good example is assuming a normal distribution when there is none). The Bayesian approach suffers less from this problem, partly because it forces you to think a bit more about the problem you are trying to solve than the frequentist approach does.

    If only. The number of people who think "sprinkle a little Bayes on it" is the solution to everything is frighteningly large, and growing exponentially AFAICT. There's now a Bayesian recipe counterpart to just about every non-Bayesian recipe, and the only difference between them, as a practical matter, is that the people using the former think they're doing something special and better. One might say that their prior is on the order of P(correct|Bayes) = 1, which makes it very hard to convince them otherwise ...

  • by Anonymous Coward on Tuesday November 12, 2013 @11:29PM (#45408871)

    Agreed. P = 0.05 was good enough in my high school days, when handheld calculators were the best available tool in most situations

    Um, the issue is not that it is difficult to calculate P-values less than 0.05. Obtaining a low p-value requires either a better signal-to-noise ratio in the effect you're attempting to observe, or more data. Improving the signal-to-noise ratio is done by improving experimental design and controlling sources of error such as rater unreliability, measurement noise, uncontrolled covariates, etc. It should be done to the extent feasible, but you can't wave a magic wand and say "computers" to fix it. Likewise, data collection is expensive, and if you have to have an order of magnitude more subjects, it will substantially raise the cost of doing research.

    There does exist a tradeoff between research output and research quality. It may be (I think so at least) that we ought to push the bar a bit toward quality over quantity, but there is a cost. In the extreme, we might miss out on many discoveries because we could only afford the time and cost of going after a handful of sure things.

  • by umafuckit ( 2980809 ) on Wednesday November 13, 2013 @12:31AM (#45409273)

    We have computers, sensors, and methods for handling large data sets that were impossible to do anything with back in the day before those first woodburning "minicomputers" of the 1970s. It is ridiculous that we have not tightened up our criteria for acceptance since those days.

    But that stuff isn't the limiting factor. The limiting factor is usually getting enough high-quality data. In certain fields that's very hard because measurements are difficult or expensive to make and the signal-to-noise ratio is poor. So you do the best you can. This is why criteria aren't tighter now than before: stuff at the cutting edge is often hard to do.

"Floggings will continue until morale improves." -- anonymous flyer being distributed at Exxon USA

Working...