Weak Statistical Standards Implicated In Scientific Irreproducibility 182
ananyo writes "The plague of non-reproducibility in science may be mostly due to scientists' use of weak statistical tests, as suggested by an innovative method developed by statistician Valen Johnson at Texas A&M University. Johnson found that a P value of 0.05 or less — commonly considered evidence in support of a hypothesis in many fields, including social science — still means that as many as 17–25% of such findings are probably false (PDF). He advocates that scientists use more stringent P values of 0.005 or less to support their findings, and thinks that the use of the 0.05 standard might account for most of the problem of non-reproducibility in science — even more than other issues, such as biases and scientific misconduct."
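For a rough feel for the claim, here is a minimal simulation sketch (not Johnson's Bayesian method from the paper) with made-up inputs: an assumed fraction of hypotheses that are actually true, an assumed effect size, and an assumed sample size. It simply counts how many of the findings that clear p < 0.05 came from null effects.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_studies = 10_000        # hypothetical number of studies
frac_true_effects = 0.3   # assumed fraction of tested hypotheses that are really true
effect_size = 0.4         # assumed standardized effect size when there is a real effect
n_per_group = 50          # assumed sample size per group
alpha = 0.05

true_sig = false_sig = 0
for _ in range(n_studies):
    has_effect = rng.random() < frac_true_effects
    shift = effect_size if has_effect else 0.0
    a = rng.normal(0.0, 1.0, n_per_group)
    b = rng.normal(shift, 1.0, n_per_group)
    if stats.ttest_ind(a, b).pvalue < alpha:
        if has_effect:
            true_sig += 1
        else:
            false_sig += 1

total_sig = true_sig + false_sig
print(f"findings that cleared p < {alpha}: {total_sig}")
print(f"fraction of those that are actually false: {false_sig / total_sig:.1%}")

With these particular (arbitrary) inputs the false fraction lands near 20%; different assumed base rates and power give different numbers, which is the point of the debate below.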
Five Sigma or Bust (Score:2)
Re:Five Sigma or Bust (Score:4, Interesting)
Five sigma is the standard of proof in Physics. The probability of a background fluctuation is a p-value of something like 0.0000006.
Of proof yes... that makes sense.
Other fields should probably use a threshold of 0.005 or 0.001.
If they move to five sigma... 2013 might be the last year that scientists get to keep their jobs.
What are you supposed to do if no research in any field is admissible because the bar is so high that no one can meet it, even with meaningful research?
Re:Five Sigma or Bust (Score:4, Insightful)
Agreed. P = 0.05 was good enough in my high school days, when handheld calculators were the best available tool in most situations, and you had to carry a couple of spare nine volt batteries for the thing if you expected to keep it running through an afternoon lab period.
We have computers, sensors, and methods for handling large data sets that were impossible to do anything with back in the day before those first woodburning "minicomputers" of the 1970s. It is ridiculous that we have not tightened up our criteria for acceptance since those days.
Hell, when I think about it, using P = 0.05 goes back to my Dad's time, when he was using a slide rule while designing engine parts for the SR-71 Blackbird. That was back in the 1950s and '60s. We should have come a long way since then. But have we?
Re: (Score:2)
Hell, when I think about it, using P = 0.05 goes back to my Dad's time, when he was using a slide rule while designing engine parts for the SR-71 Blackbird. That was back in the 1950s and '60s. We should have come a long way since then. But have we?
In engineering? Yes [rt.com]. Science? Well...
Re: (Score:2, Insightful)
Agreed. P = 0.05 was good enough in my high school days, when handheld calculators were the best available tool in most situations
Um, the issue is not that it is difficult to calculate P-values less than 0.05. Obtaining a low p-value requires either a better signal-to-noise ratio in the effect you're attempting to observe, or more data. Improving the signal-to-noise ratio is done by improving experimental design and removing sources of measurement error (poor rater reliability, measurement noise, uncontrolled covariates, etc.). It should be done to the extent feasible, but you can't wave a magic wand and say "computers" to fix it. Likewise, data collect
Re:Five Sigma or Bust (Score:4, Insightful)
We have computers, sensors, and methods for handling large data sets that were impossible to do anything with back in the day before those first woodburning "minicomputers" of the 1970s. It is ridiculous that we have not tightened up our criteria for acceptance since those days.
But that stuff isn't the limiting factor. The limiting factor is usually getting enough high-quality data. In certain fields that's very hard, because measurements are hard or expensive to make and the signal-to-noise ratio is poor. So you do the best you can. This is why criteria aren't tighter now than before: the stuff at the cutting edge is often hard to do.
Re: (Score:2)
As others have said, the problem in many cases is not computational power, but the expense or difficulty or even ethics of getting a large data set. What about a medical trial -- do you want to necessitate giving some experimental medicine to 10,000 people before assessing whether it's a good idea or not?
Re: (Score:2)
do you want to necessitate giving some experimental medicine to 10,000 people before assessing whether it's a good idea or not?
We have 40,000 patent attorneys in the US alone, so there's one great sample for experimental medicine. Not very heterogeneous, but will do in a pinch.
Re: (Score:3)
do you want to necessitate giving some experimental medicine to 10,000 people before assessing whether it's a good idea or not?
If the drug is going to be prescribed to millions of people a year: yes, probably. If not during a phase III trial, then during a phase IV trial that begins as soon as the drug goes on the market. The reason is that while efficacy can be extrapolated from smaller trials, safety is all about the outliers. A few excess deaths in a trial of several thousand could easily mean that the drug causes more harm than good overall, or that an identifiable patient subgroup can't tolerate the drug.
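To put rough numbers on why safety is about outliers, here is a back-of-the-envelope sketch; the adverse-event rates and trial sizes are hypothetical.

# Chance that a trial of n patients sees at least one case of a rare adverse
# event, for a few hypothetical per-patient risks. Illustrative numbers only.
for n in (200, 3_000, 10_000):
    for rate in (1 / 1_000, 1 / 10_000, 1 / 100_000):
        p_at_least_one = 1 - (1 - rate) ** n
        print(f"n={n:>6}  risk=1/{round(1 / rate):>6}  P(>=1 case) = {p_at_least_one:.1%}")

A 200-person trial is very unlikely to see even one 1-in-10,000 event, which is why rare harms only show up in large phase III/IV samples.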
Well yes, actually (Score:2)
> do you want to necessitate giving some experimental medicine to 10,000 people before assessing whether it's a good idea or not?
Yes. Before giving it to a million people, we should run statistical calculations on the first 10,000 to better assess safety and efficacy.
Oh, you meant as opposed to a trial with 200 people. But that's a false dichotomy. You run stats on the first 200 to see whether or not it's likely safe, then run stats on 10,000 to confirm it. Which is to say, you'd wait until you managed
Re: (Score:2)
"In the meantime, with a P of 0.05, you'd label it as a tentative conclusion, a likely theory."
Great, so we agree: Getting a P of less than 0.05 with a sample of 200 gets you published and other machinery acting on that signal.
Re: (Score:2)
"What are you supposed to do; if no research in any field is admissable, because the bar is so high noone can meet it, even with meaningful research?"
James Cameron could reach the bar.
Re: (Score:2)
James Cameron could reach the bar.
Hm... James Cameron is a deep-sea explorer and film director... he directed Titanic.
In what way does that make him a researcher who could be sure of meeting five sigma in all his research, even when truly massive, infeasible datasets would be required?
Re: (Score:2)
If I'm publishing that drug X does not increase the incidence of spontaneous human combustion, there ought to be a lot of zeroes in that P value. If I'm publishing that "As expected, Protein X does job Y in endangered species Z, which is not surprising given that protein X does job Y in every other species tested, and why the hell did we even do this experiment" then maybe you don't need such a high standard.
Re:Five Sigma or Bust (Score:4, Interesting)
I work in this field and usually see power calculations recommending samples of non-viable size.
I can see recruiting hundreds of subjects as being feasible in the US or a large European country, but in smaller countries one simply has to state clearly in a paper's limitations section that any findings must be interpreted in light of the available sample.
Re: (Score:2)
Scarcely productive (Score:5, Interesting)
Such an admonishment is fine for the computational fields, where a few more permutations can net you a p-value of 0.0005 (assuming that you aren't crunching on a 4-month cluster problem). However, biological lab work is often very expensive and time-consuming. Furthermore, additional tests are not always possible, since it can be damn hard to reproduce specific mutations or knockout sequences without altering the surrounding interactive factors.
So, should we go for a better p-value for the experiment and scrap any complicated endeavour, or should we allow for difficult experiments and take it with a grain of salt?
Re:Scarcely productive (Score:5, Insightful)
If the author's assertion is true and a P value of 0.05 or less means that 17–25% of such findings are probably false, then what is the point of publishing the findings? Or at least, the write-up should come from a more sober perspective. Of course, any such change would need to come with a culture change in academia away from the 'publish or perish' mindset.
Because I'd rather use a drug found to be 75-83% effective at treating my disease than die while waiting for someone to come up with one that's 99.9% effective.
Re: (Score:3, Informative)
This is a fallacious understanding of p-value.
Something closer to (but still not quite) correct would be: that there is a 75-83% chance that the claimed efficacy of the drug is within the stated error bars. For example, there may be a 75-83% chance that the drug is between 15% and 45% effective at treating your disease.
That's much worse, isn't it?
Re: (Score:2)
More fun: a 75-83% chance the drug caused a 15 to 45% change in a proxy, i.e., a biomarker, which is itself imperfectly correlated with your disease. Data on whether the drug actually improves lifespan/quality of life for your cohort of chronic disease patients should be back in another decade or so ...
No wonder people hate evidence based medicine and would rather have someone just Reiki it away.
Re:Scarcely productive (Score:4, Interesting)
And more importantly, a 17-25% chance that it's completely ineffective, no better than a placebo.
My sister went through 4 different drugs before she found one that made her condition better. One made her (much) worse.
Yet she likely wouldn't be alive today if none of those 4 drugs worked.
Re: (Score:2)
My sister went through 4 different drugs before she found one that made her condition better. One made her (much) worse.
Yet she likely wouldn't be alive today if none of those 4 drugs worked.
And some people die before they try the 4th drug, because the first 3 weren't tested properly. I understand you love your sister, but seriously, one data point is not a good way to understand statistics.
But if doctors weren't allowed to publish results where there's a relatively weak correlation to a positive outcome, how would her doctors have known, "Hey, look, this other drug is effective for some people, let's give it a try"? They already tried the 2 drugs that were the most promising; the 3rd (the one that put her in the ICU for a week) was a relative longshot, as was the 4th. Yet if the literature hadn't shown that all of those drugs were used with some moderate success, how would they have known to try th
Re: (Score:2)
Re: (Score:2)
The problem comes when you're treating a non-life-threatening ailment with a drug that turns out to:
1) not help at all, ever, or
2) have other, life-threatening side effects.
Re: (Score:2)
If the chance of any result being found by chance is 20%, it stands to reason that about 20% of all results were found by chance and, therefore, not to be expected to be reproducible.
Statistical significant levels only tell us about the chance of a study producing a false positive, they say nothing about the chance of a study producing a true positive.
So if the chance of a true positive is low, then the false positives could easily outnumber the true positives.
Re: (Score:2)
It's not an assertion, it's basic math--0.05 is 20%
0.05 is 2%, not 20%
oops, my bad (Score:2)
Ahhh!! it's 1/20, not two percent. Of course, it's 5%.
Economic Impact (Score:3, Insightful)
Truth is expensive.
Re: (Score:2)
But producing shitty studies pays just as well as producing good studies, especially if the study is about something of no consequence at all to anyone.
Not going to happen (Score:4, Insightful)
If we were to insist on statistically meaningful results 90% of our contemporary journals would cease to exist for lack of submissions.
Re:Not going to happen (Score:4, Insightful)
...and nothing of value would be lost. Seriously, have you read the papers coming from that 90% of journals and conference proceedings outside of the big ones in $field_of_study? The vast majority of them suck, have extraordinarily low standards, and are oftentimes barely readable. There's a reason why the major conferences/journals that researchers actually pay attention to routinely turn away between 80-95% of papers being submitted: it's because the vast majority of research papers are unreadable crap with marginal research value being put out to bolster someone's published paper count so that they can graduate/get a grant/attain tenure.
If the lesser 90% of journals/conferences disappeared, I'd be happy, since it'd mean wading through less cruft to find the diamonds. I still remember doing weekly seminars with my research group in grad school, where we'd get together and have one person each week present a contemporary paper. Every time one of us tried to branch out and use a paper from a lesser-known conference (this was in CS, where the conferences tend to be more important than the journals), we ended up regretting it, since they were either full of obvious holes, incomplete (I once read a published paper that had empty Data and Results sections...just, nothing at all, yet it was published anyway), or relied on lots of hand-waving to accomplish their claimed results. You want research that's worth reading, you stick to the well-regarded conferences/journals in your field, otherwise the vast majority of your time will be wasted.
Interpretation of the 0.05 threshold (Score:5, Insightful)
Personally, I've considered results with p values between 0.01 and 0.05 as merely 'suggestive': "It may be worth looking into this more closely to find out if this effect is real." Between 0.01 and 0.001 I'd take the result as tentatively true - I'll accept it until someone refutes it.
If you take p=0.04 as demonstrating a result is true, you're being foolish and statistically naive. However, unless you're a compulsive citation follower (which I'm not) you are somewhat at the mercy of other authors. If Alice says "In Bob (1998) it was shown that ..." I'll tend to accept it without realizing that Bob (1998) was a p=0.04 result.
Obligatory XKCD [xkcd.com]
Re: (Score:2)
Obligatory XKCD [xkcd.com]
FWIW, tests like the Tukey HSD ("Honestly Significant Difference") are designed to avoid that problem.
I suspect that's how the much-discussed "Jupiter Effect" for astrology came about: Throw in a big pile of names and birth signs, turn the crank, and watch a bogus correlation pop out.
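For anyone who wants to see the xkcd 882 effect in numbers, here is a minimal simulation with entirely made-up data: twenty comparisons where nothing is going on, repeated many times.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_tests, n, reps = 20, 30, 1_000   # 20 comparisons, 30 subjects per group

experiments_with_spurious_hit = 0
for _ in range(reps):
    control = rng.normal(0.0, 1.0, n)
    p_values = [stats.ttest_ind(control, rng.normal(0.0, 1.0, n)).pvalue
                for _ in range(n_tests)]          # no group has a real effect
    experiments_with_spurious_hit += any(p < 0.05 for p in p_values)

# With 20 null comparisons, roughly 1 - 0.95**20, i.e. about 64%, of experiments
# produce at least one "significant" result purely by chance.
print(f"experiments with at least one spurious hit: "
      f"{experiments_with_spurious_hit / reps:.1%}")

Corrections like Tukey's HSD or Bonferroni exist precisely to keep that family-wise error rate near 5% rather than 64%.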
Re: (Score:2)
Re: (Score:2)
Say 1000 hypotheses are tested, and that 10% are true - that is, 100 true hypotheses, and 900 false ones. If the false positive rate is 5%, then 45 of the 900 will nevertheless test positive. Further, let's say there's a false negative rate of 10%, so of the 100 true hypotheses, 90 are detected. That gives 135 positive results, of which 45 - exactly one third - are false.
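Spelling out the arithmetic of that example (the rates are the ones assumed above, not real data):

# Working through the numbers assumed in the post above (not real data).
n_true, n_false = 100, 900   # 1000 hypotheses, 10% of them true
alpha, beta = 0.05, 0.10     # false positive and false negative rates

false_positives = n_false * alpha       # 45
true_positives = n_true * (1 - beta)    # 90
total_positives = false_positives + true_positives

print(f"positive results: {total_positives:.0f}")
print(f"false among them: {false_positives:.0f} "
      f"({false_positives / total_positives:.0%})")   # one third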
Obligatory XKCD (Score:2, Funny)
http://xkcd.com/882/
A universal standard for significance... (Score:3, Insightful)
Authors need to read this: http://www.deirdremccloskey.com/articles/stats/preface_ziliak.php
It explains quite clearly why a p value of 0.05 is a fairly arbitrary choice, as it cannot possibly be the standard for every possible study out there. Or, to put it another way, be very skeptical when one single number (namely 0.05) is supposed to be a universal threshold for deciding the significance of all possible findings, in all possible domains of science. The context of any finding still matters for its significance.
The Economist just had an article on this (Score:3)
Unreliable research
Trouble at the lab
Scientists like to think of science as self-correcting. To an alarming degree, it is not
Oct 19th 2013 |From the print edition
The Economist
First, the statistics, which if perhaps off-putting are quite crucial. Scientists divide errors into two classes. A type I error is the mistake of thinking something is true when it is not (also known as a “false positive”). A type II error is thinking something is not true when in fact it is (a “false negative”). When testing a specific hypothesis, scientists run statistical checks to work out how likely it would be for data which seem to support the idea to have come about simply by chance. If the likelihood of such a false-positive conclusion is less than 5%, they deem the evidence that the hypothesis is true “statistically significant”. They are thus accepting that one result in 20 will be falsely positive—but one in 20 seems a satisfactorily low rate.
In 2005 John Ioannidis, an epidemiologist from Stanford University, caused a stir with a paper showing why, as a matter of statistical logic, the idea that only one such paper in 20 gives a false-positive result was hugely optimistic. Instead, he argued, “most published research findings are probably false.” As he told the quadrennial International Congress on Peer Review and Biomedical Publication, held this September in Chicago, the problem has not gone away.
Dr Ioannidis draws his stark conclusion on the basis that the customary approach to statistical significance ignores three things: the “statistical power” of the study (a measure of its ability to avoid type II errors, false negatives in which a real signal is missed in the noise); the unlikeliness of the hypothesis being tested; and the pervasive bias favouring the publication of claims to have found something new.
http://www.economist.com/news/briefing/21588057-scientists-think-science-self-correcting-alarming-degree-it-not-trouble [economist.com]
What is a p Value? (Score:2)
A significant problem is that many of the people who quote p values do it without understanding what a p value actually means. Getting p = 0.05 does not mean that there is only a 5% chance that the model is wrong. That is one of the fundamental misunderstandings in statistics, and I suspect that it is behind a lot of the cases of scientific irreproducibility.
I defer to Feynman (Score:2)
The real issue (Score:5, Interesting)
Okay, here's the real problem with scientific studies.
All science is data compression, and all studies are intended to compress data so that we can make future predictions. If you want to predict the trajectory of a cannonball, you don't need an almanac cross-referencing cannonball weights, powder loads, and cannon angles - you can calculate the arc to any desired accuracy with a set of equations that fit on half a page. The half page compresses the record of all prior experience with cannonball arcs, and allows us to predict future arcs.
Soft-science studies typically make a set of observations relating two measurable quantities. When plotted, the data points suggest a line or curve, and we accept the regression (line or polynomial) as the best approximation of the data. The theory is that the underlying mechanism is the regression, and unrelated noise in the environment or measurement system causes random deviations in the observations.
This is the wrong method. Regression is based on minimizing squared error, which was chosen by Laplace for no other reason than that it is easy to calculate. There are lots of "rationalization" explanations of why it works and why it's "just the best possible thing to do," but there is no fundamental logic that can be used to deduce least squares from fundamental assumptions.
Least squares introduces several problems:
1) Outliers will skew the values, and there is no computable way to detect or deal with outliers (source [wikipedia.org]).
2) There is no computable way to determine whether the data represent a line or a curve - it's done by "eye" and justified with statistical tests.
3) The resultant function frequently looks "off" to the human eye; humans can frequently draw better-matching curves, meaning curves which better predict future data points.
4) There is no way to measure the predictive value of the results. Linear regression will always return the best line to fit the data, even when the data is random.
The right way is to show how much the observation data is compressed. If the regression function plus data (represented as offsets from the function) take fewer bits than the data alone, then you can say that the conclusions are valid. Further, you can tell how relevant the conclusions are, and rank and sort different conclusions (linear, curved) by their compression factor and choose the best one.
Scientific studies should have a threshold of "compresses data by N bits", rather than "1-in-20 of all studies are due to random chance".
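For what it's worth, here is a crude sketch of that kind of comparison: a simple two-part code (essentially a BIC-style approximation, not a full MDL treatment) measuring how many bits a least-squares line saves relative to coding the raw values around their mean. The data are synthetic and the coding scheme is just one possible choice.

import numpy as np

def description_length_bits(residuals, n_params, n):
    # Very rough two-part code: a Gaussian code for the residuals plus a
    # BIC-style 0.5*log2(n) bits per fitted parameter. The constant
    # quantization term cancels when two models are compared on the same data.
    return (0.5 * n * np.log2(2 * np.pi * np.e * residuals.var())
            + 0.5 * n_params * np.log2(n))

rng = np.random.default_rng(0)
n = 200
x = np.linspace(0.0, 10.0, n)

for label, y in {
    "linear trend + noise": 2.0 * x + 1.0 + rng.normal(0.0, 1.0, n),
    "pure noise":           rng.normal(0.0, 3.0, n),
}.items():
    baseline = description_length_bits(y - y.mean(), 1, n)   # "no model": mean only
    coeffs = np.polyfit(x, y, 1)                             # least-squares line
    residuals = y - np.polyval(coeffs, x)
    with_line = description_length_bits(residuals, 2, n)     # slope + intercept
    print(f"{label:22s} baseline {baseline:7.0f} bits, with line {with_line:7.0f} bits,"
          f" saved {baseline - with_line:6.0f}")

The line saves hundreds of bits on the data with a real trend, and roughly nothing (or slightly negative, because of the parameter penalty) on pure noise, which is the kind of threshold the parent is proposing.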
Re: (Score:2)
1) Outliers will skew the values, and there is no computable way to detect or deal with outliers (source [wikipedia.org])
Do outliers skew the results? If the outliers are biased, then that may tell us something about the underlying population. If they aren't biased, then their effects cancel.
4) There is no way to measure the predictive value of the results. Linear regression will always return the best line to fit the data, even when the data is random.
But random data would generate statistically insignificant correlation coefficients. Also, the 95% confidence intervals used to predict values are wider for random data.
Re: (Score:2)
Do outliers skew the results? If the outliers are biased, then that may tell us something about the underlying population. If they aren't biased, then their effects cancel.
There's no algorithm that will identify the outliers in this example [dropbox.com].
But random data would generate statistically insignificant correlation coefficients. Also, the 95% confidence intervals used to predict values are wider for random data.
What value of correlation coefficient distinguishes pattern data from random data in this image [wikimedia.org]?
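I don't know exactly which image that is, but assuming it's the usual gallery of scatterplots labeled with their correlation coefficients, the point can be illustrated with made-up data: a strong nonlinear pattern and pure noise can both have r near zero.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 500
x = rng.uniform(-1.0, 1.0, n)

datasets = {
    "pure noise":           rng.normal(0.0, 1.0, n),
    "strong pattern (x^2)": x**2 + rng.normal(0.0, 0.05, n),
}
for label, y in datasets.items():
    r, p = stats.pearsonr(x, y)
    print(f"{label:22s} r = {r:+.3f}  p = {p:.3f}")
# Both r values are near zero, even though one dataset is almost perfectly
# predictable from x; the correlation coefficient alone cannot tell them apart.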
Re: (Score:2)
There's no algorithm that will identify the outliers in this example [dropbox.com].
So there's no algorithm for comparing observed values to modeled (predicted) values? The absolute value of the difference between the two can't be calculated? Hmm. . .
What value of correlation coefficient distinguishes pattern data from random data in this image [wikimedia.org]?
Are the data in that image random? Also, the data without the four points at the bottom would have a higher correlation coefficient.
Re: (Score:2)
Also, you may want to account for the difference between the x coordinate of the point and the average of the xs, as having an x coordinate far from the mean contributes to being farther away from the regression line.
Re: (Score:2)
Oops, I missed the second image. But the correlation coefficients are there. The sets of data that more closely approximate a line have such values close to 1 or -1. The ones that don't have values close to 0.
Re: (Score:2)
"If they aren't biased [outliers], then their effects cancel."
Oh, god no. The book I teach out of says that if outliers exist, it's required to do the regression both with and without the outliers and compare. Frequently there will be a big difference. (Weiss, Introductory Statistics, Sec. 14.2)
Re: (Score:2)
Outliers are often so extreme and rare that despite being statistically unbiased, they nevertheless severely skew statistics which aren't robust to them.
If outliers are unbiased, they can affect the results, but how can they skew the results? Also, if they're rare, how much effect can they have?
A set of random data with a significant correlation coefficient is indistinguishable from a genuine correlation.
Not on a scatterplot. It's pretty clear how close the data are to the line. Also, how probable is it tha
Re: (Score:2)
Variance of error is not what we want (Score:2)
Actually, there is a really good reason to use least-squares regression. A model that minimizes squared error is guaranteed to minimize the variance of error, obviously.
This is the wrong place for an argument (you want room 12-A [youtube.com]) and I don't want to get into a contest, but for illustration here is the problem with this explanation.
A rule learned from experience should minimize the error, not the variance of error.
It's a valid conclusion from the mathematics, but based on a faulty assumption.
Re: (Score:2)
That's brilliant - thanks! (Score:2)
The problem I have with least squares is that I don't like the definition of the "error". If you have two things that are correlated, one isn't necessarily a function of the other that includes some variability. If you flip the X and Y axes - plot height against weight, rather than weight against height - then the least squares regression gives a different line. If the two errors are both minimised, but different, then neither of them is the "real" error.
Wow - brilliant insight! Thanks for that - things like this are why I come to Slashdot.
Re: (Score:2)
I replied to your post rather than the parent, where it was perhaps more relevant, as it was you I wis
Hey - pure mathematician! (Score:2)
Can I discuss some ideas with you offline? thon dot 9 dot okianwarrior at spamgourmet dot com
Re: (Score:2)
Thanks (Score:2)
It seems like you found a lot of problems with what is taught in a stat 101 class. This is good; there are many. However, there are also solutions to these problems which you would find if you took a higher level course.
That brought a smile to my face. Thanks.
Re: (Score:2)
It clearly can't be based on floating-point binary representations. These are not scale-free. What I mean is, under one scale, you might have an error of 0.5 and you might say that needs 2 bits to store. Under another scale the same error would be 0.3333333 which takes infinite bits to store, under the same storage scheme. Even if you solve that, you'd get that a small but complex error (e.g. 1/pi^100) is worse than a huge but simple one (e.g. 2^64). That's just ridiculous.
So we have to use a metric where larger errors give a larger metric. A monotonic function, at least. So let's choose log2(error), right?
Ah, damn! If only you weren't anonymous, I'd love to continue this conversation offline!
That level of insight! I didn't expect anyone to be familiar enough with the foundations of statistics to make this observation.
You were correct up to the point where you chose log2(error) as a metric. There's a function which can be deduced from first principles, and which solves all the above-mentioned problems. Can you find it?
Back-edit the statistical methods and use the new error function instead of least squares. T
An example (Score:3)
Having quickly skimmed the paper, I'll give an example of the problem.
I couldn't quickly find a real data set that was easy to interpret, so I'm going to make up some data.
Chance to die before reaching this age:
Age   Woman   Man
80    .54     .65
85    .74     .83
90    .88     .96
95    .94     .98
We have a person who is 90 years old. Taking the null hypothesis to be that this person is a man, we can reject the hypothesis that this is a man with greater than 95 percent confidence (p=0.04). However, if we do a Bayesian analysis assuming prior probabilities of 50 percent for the person being a man or a woman, we find that there is a 25 percent chance that the person is a man after all (as women are 3 times more likely to reach age 90 than men are.)
(Having 11 percent signs in my post seems to have given /. indigestion so I've had to edit them out.)
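For reference, the same Bayesian update in a few lines, using the made-up table above and the assumed 50/50 prior:

# Redoing the made-up numbers from the table above.
p_reach_90_man = 1 - 0.96     # 0.04 -> the "p-value" for rejecting "man"
p_reach_90_woman = 1 - 0.88   # 0.12 -> women are 3x as likely to reach 90

prior_man = prior_woman = 0.5  # assumed 50/50 prior

posterior_man = (p_reach_90_man * prior_man) / (
    p_reach_90_man * prior_man + p_reach_90_woman * prior_woman)
print(f"P(man | reached age 90) = {posterior_man:.2f}")   # 0.25, despite p = 0.04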
Well, duh. (Score:2)
Johnson found that a P value of 0.05 or less — commonly considered evidence in support of a hypothesis in many fields including social science — still meant that as many as 17–25% of such findings are probably false (PDF).
Found? Was he unaware that using a threshold of 0.05 means a 20% probability that a finding is a chance result - by definition ?
More interesting, IMO, is that statistical significance doesn't tell you what the scale of an effect is. There can be a trivial difference between A and B even if the difference is statistically significant. People publish it anyway.
Re: (Score:2)
There can be a trivial difference between A and B even if the difference is statistically significant. People publish it anyway.
This is especially prevalent in medicine (especially drug advertising). If you look at medical journals, they are replete with ads touting that Drug A is 'statistically better' than Drug B. Even looking at the 'best case' data (the pretty graph in the advert) you quickly see that the lines very nearly converge. Statistically significant. Clinically insignificant.
Lies, Damned Lies and Statistics
Re: (Score:2)
A P-value [wikipedia.org] of 0.05 means by definition that there is a 0.05, or 5%, or 1 in 20, probability that the result could be obtained by chance even though there's no actual relationship.
Re: (Score:2)
Johnson found that a P value of 0.05 or less — commonly considered evidence in support of a hypothesis in many fields including social science — still meant that as many as 17–25% of such findings are probably false (PDF).
Found? Was he unaware that using a threshold of 0.05 means a 20% probability that a finding is a chance result - by definition?
More interesting, IMO, is that statistical significance doesn't tell you what the scale of an effect is. There can be a trivial difference between A and B even if the difference is statistically significant. People publish it anyway.
Of course it was found. The 20% is not true by definition; it's a function of the percentage of studies done on correct vs. incorrect H1s. You could have 0% if you only did studies on correct H1s.
Re: (Score:2)
Re: (Score:2)
Found? Was he unaware that using a threshold of 0.05 means a 20% probability that a finding is a chance result - by definition ?
1 in 20 != 20%.
If p=0.2, then you'd be right.
Integrating to x sigma (Score:2)
Re: (Score:2)
A surprising number of senior scientists are not aware of the problems introduced by ending an experiment based on achieving a certain significance level.
I think the vast majority are aware of at least some of the problems. The issue is the ones who are willing to publish their results without addressing those problems honestly in the writeup.
not the real problem (Score:4, Insightful)
The bigger problem (Score:2)
The bigger problem is the habit of confusing correlation with causation.
Let's get something straight, you non-statisticians (Score:4, Insightful)
This is a geek website, not a "research" website so stop talking a bunch of crap about a bunch of crap. I'm providing silly examples so don't focus upon them. Most researchers suck at stats and my attempt at explaining should either help out or show that I don't know what I'm talking about. Take your pick.
"p=.05" is a stat that reflects the likelihood of rejecting a true null hypothesis. So, lets say that my hypothesis is that "all cats like dogs" and my null hypothesis is "not all cats like dogs." If I collect a whole bunch of imaginary data, run it through a program like SPSS, and the results turn out that my hypothesis is correct then I have a .05 percent chance that the software is wrong. In that particular imaginary case, I would have committed a Type I Error. This error has a minimal impact because the only bad thing that would happen is some dogs get clawed on the nose and a few cats get eaten.
Now, on a typical experiment, we also have to establish beta which is the likelihood of committing a type II error, that is, accepting a false null hypothesis. So let's say that my hypothesis is that "Sex when desired makes men happy" and my null hypothesis is "Sex only when women want it makes men happy." It's not a bad thing if #1 is accepted but the type II error will make many men unhappy.
Now, this is a give-and-take relationship. Every time we make p smaller (.005, .0005, .00005, etc.) for "accuracy," the risk of committing a type II error increases. A type II error when determining what games 15-year-olds like to play doesn't really matter if we are wrong, but if we start talking about drugs, then the increased risk of a type II error really can make things ugly.
Next, there are guidelines for determining how many participants are needed at lower p (alpha) values. Social sciences (hold back your Sheldon jokes) that do studies on students might need, let's say, 35 subjects per treatment group at p=.05, whereas at .005 they might need 200 or 300 per treatment group. I don't have a stats book in front of me, but .0005 could be in the thousands. Every adjustment impacts a different item in a negative fashion. You can have your Death Star or you can have Luke Skywalker. Can't have 'em both.
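For a ballpark sense of those numbers, here is a minimal sketch using the standard normal-approximation sample-size formula for comparing two means; the effect sizes below are assumptions, not anything from the post.

from scipy.stats import norm

def n_per_group(effect_size, alpha, power=0.8):
    # Normal-approximation sample size for a two-sided, two-sample comparison
    # of means with standardized effect size (Cohen's d) and equal groups.
    z_alpha = norm.ppf(1 - alpha / 2)
    z_power = norm.ppf(power)
    return 2 * ((z_alpha + z_power) / effect_size) ** 2

for d in (0.5, 0.3):                       # assumed effect sizes
    for alpha in (0.05, 0.005, 0.0005):
        print(f"d={d}  alpha={alpha:<7}  n per group ~ {n_per_group(d, alpha):.0f}")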
Finally, there is the statistical concept of effect size, that is, stats for measuring the impact of a treatment: basically, how much of the variance between group A and group B can be assigned to the experimental treatment. This takes precedence in many people's minds over simply determining whether we have a correct or incorrect hypothesis. A p value does not answer this.
Anyways, I'm going to go have another beer. Discard this article and move onto greener pastures.
And has been so for, oh, 50 years? (Score:2)
Back when I was a wee researchling, this [berkeley.edu] is literally one of the first papers I was told to read and internalise (published 20 years ago, and not even particularly groundbreaking at the time).
There is absolutely no need for new evidence or further discussion of the limitations of statisti
Nice method he's developed there... (Score:2)
... but is it reproducible? :p
Re:Or you know.. (Score:5, Informative)
This would have the same problems, maybe even worse. The problem with statistics is usually that the model is wrong, and Bayesian stats offers two chances to fuck that up: in the prior, and in the generative model (=likelihood). Bayesian statistics still requires models (yes, you can do non-parametric Bayes, but you can do non-parametric frequentist stats also).
Contrary to the hype and buzzwords, Bayesian statistics is not some magical solution. It is incredibly useful when done right, of course.
Re:Or you know.. (Score:5, Insightful)
Re:Or you know.. (Score:5, Interesting)
Yes, I agree. If a p-value of 0.05 actually "means" 0.20 when evaluated, then any sane frequentist will tell you that things are fucked, since the limiting probability does not match the nominal probability (this is the definition of frequentism).
The power of Bayesian stats is largely in being able to easily represent hierarchical models, which are very powerful for modeling dependence in the data through latent variables. But it's not the Bayesianism per se that fixes things, it's the breadth of models it allows. A mediocre modeler using Bayesian statistics will still create mediocre models, and if they use a bad prior, then things will be worse than they would be for a frequentist.
Consider that if Bayesian statisticians are doing a better job than frequentists at the moment, it may be because Bayesian stats hasn't yet been drilled into the minds of the mediocre, as frequentist stats has been for decades. People doing Bayesian stats tend to be better modelers to begin with.
Re: (Score:2)
P(H0|significant) != P(significant|H0)
Re:Or you know.. (Score:4, Informative)
First of all, recommending that hypothesis tests be conducted with smaller tolerances for Type I error almost invariably implies a large decrease in power. There is no free lunch. There are many experimental designs for which the consequences of making a positive inference (i.e., accepting the alternative hypothesis) are so great that you do need to set the alpha level very small. But if the test is to have any power, that means the data you gather must be much, much more extensive. So, to simply say "alpha = 0.05 is too large because it admits too many irreproducible claims by random chance" sort of misses the basic point. A test conducted at such a level still has at most a 1 in 20 chance of observing a test statistic that would reject the null hypothesis even if the null is true. So a p-value of 0.04, for example, would merit further investigation. That's not so much a flaw of the frequentist methods as it is a flaw in interpretation, due to the natural tendency of investigators or clinicians to want a straightforward "yes/no" answer.
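A quick hypothetical illustration of that power loss, using the usual normal-approximation power formula for a two-sample comparison of means (the sample size and effect size below are assumptions):

from scipy.stats import norm

def power_two_sample(effect_size, n_per_group, alpha):
    # Approximate power of a two-sided, two-sample z-test of means with equal
    # group sizes and a standardized effect size.
    z_alpha = norm.ppf(1 - alpha / 2)
    noncentrality = effect_size * (n_per_group / 2) ** 0.5
    return norm.cdf(noncentrality - z_alpha) + norm.cdf(-noncentrality - z_alpha)

n, d = 50, 0.4   # assumed per-group sample size and effect size
for alpha in (0.05, 0.005, 0.0005):
    print(f"alpha={alpha:<7}  power ~ {power_two_sample(d, n, alpha):.2f}")

Holding the data fixed, tightening alpha from 0.05 to 0.0005 here drops power from roughly one half to well under one tenth, which is the "no free lunch" point.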
Bayesian methods, then, don't really offer intrinsically more meaning than frequentist methods. The main difference is that Bayesian methods, by their construction, force the investigator to draw an inference that is not characterized by a "yes/no" answer--in fact, it becomes a bit of a contrivance (e.g., Bayes factors and the calculation of cumulative posterior distributions) to try to interpret Bayesian analyses in this way. Don't get me wrong, that is an appealing and advantageous characteristic, whereas more care is needed to interpret the frequentist approach. But Bayesian methods also suffer from their own problems, many of which arise from the necessity of imposing some kind of prior distribution (so, for instance, Bayes factors are not monotone in the hypothesis).
The takeaway here is that in statistics, there is no magic bullet, no single approach that is supposed to be the "best" or "optimal" for inferential purposes. It is the role of scientists and investigators to perform the necessary follow-up analyses and meta-analyses to improve the credibility of a claim. So in a sense, the state of statistical methods in scientific research is NOT broken. It is working as intended, where people find enough evidence to stimulate further investigation, and it is through this process that previous claims are tested further. The only part that concerns me is how policymakers lacking in sufficient statistical background might put too much credence in a particular analysis--this idea that "oh, we found significance so this MUST be true"--or how the non-statistically informed public or media all too often distort the meaning of an analysis to the point of absurdity. But I argue that this is not a weakness of statistics. It is a deficiency in understanding brought about by the human desire to act upon perceived certainties in a fundamentally uncertain world.
Re:Or you know.. (Score:5, Insightful)
The problem with frequentist statistics as used in the article is that its "recipe" character often results in people using statistics who do not understand their limitations (a good example is assuming a normal distribution when there is none). The Bayesian approach does not suffer from this problem, also because it forces you to think a little bit more about the problem you are trying to solve compared to the frequentist approach.
If only. The number of people who think "sprinkle a little Bayes on it" is the solution to everything is frighteningly large, and growing exponentially AFAICT. There's now a Bayesian recipe counterpart to just about every non-Bayesian recipe, and the only difference between them, as a practical matter, is that the people using the former think they're doing something special and better. One might say that their prior is on the order of P(correct|Bayes) = 1, which makes it very hard to convince them otherwise ...
Re: (Score:2)
And don't mention that you had to try 30 different tests to finally find one that gives a "significant" result.
you sound like you know what you're talking about (Score:2)
It sounds like you have a clue about statistics. Do you know of a good forum to ask a fairly involved statistics question? I have a set of measured variables A-E which all tend to indicate the likelihood of X. The relationships are a bit complex and unknown, though, so I need help with how I should analyze the historical data in order to come up with parameters to use in the future for making "predictions" of X based on known values of A-E.
Re:That book about the bell curve (Score:4, Informative)
That is because of the central limit theorem (http://en.wikipedia.org/wiki/Central_limit_theorem), which indicates that for a large number of independent samples, it doesn't matter what the original distribution was, and we can reliably use the normal distribution. It is NOT unfounded.
Re: (Score:3, Insightful)
Unless of course we happen to be working in a chaotic system where strange attractors mean there can be no centrality to the data.
Chaos theory is a lot younger than the central limit theorem. The situation might be similar to the way Einstein's theory of relativity has moved Newton's three laws from a position of central importance in all physics to something that works well enough in a small subset. A subset that is extremely important in our daily life, but still a subset.
Some portions of a chaotic syst
Re: (Score:2)
Chaos also occurs in the dynamic evolution of a system, so it's hard to see the connection you're implying with statistics.
When I turn that around, it seems to say that statistics is only of value in systems that have fully matured. Which sounds like most of the time statistics have no value.
Is that correct? Or is there some other way to reverse the quotation?
Re: (Score:2)
you're spouting a lot of pseudo-science here
I agree that there IS a lot pseudo-science here, and that I have fallen into a nasty trap.
What can I say? This is not the first time an AC troll has gotten me good, and it probably will not be the last.
Now get thee back under that dark, damp, cobwebby bridge where thou belongest! Or I shall sprinkle thee with Troll-B-Gone powder and there will be nothing left around here but some grins and giggles.
Yes and no (Score:5, Interesting)
So it gives you a very valid excuse to assume that the value distribution of some quantity occurring in nature will follow a Normal distribution when you know nothing else about it.
But there's the crux: it remains an assumption; a hypothesis, and fortunately it's usually a *testable* hypothesis. It's the responsibility of a researcher to check if it holds, and to see how problematic it is when it doesn't.
If something has a normal distribution, its square or its square root (or another power) doesn't have a normal distribution. Take, for example, the diameter, surface area, and volume of berries: the diameter goes with the radius r, the surface area with r^2, and the volume with r^3. They cannot all be normally distributed at the same time, so assuming any of them is starts you out on a shaky foundation.
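A quick hypothetical check of that point: simulate roughly normal berry diameters, cube them, and run a normality test on both (the 10 mm / 1.5 mm numbers are made up).

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
diameter = rng.normal(10.0, 1.5, 500)        # hypothetical berry diameters, mm
volume = (np.pi / 6.0) * diameter**3         # sphere volume goes with d^3

for name, sample in (("diameter", diameter), ("volume", volume)):
    stat, p = stats.normaltest(sample)       # D'Agostino-Pearson normality test
    print(f"{name:8s} skew = {stats.skew(sample):+.2f}   normality p = {p:.3g}")
# The diameters should look roughly normal; the cubed quantity is right-skewed
# and will usually be rejected, so both cannot be treated as normal at once.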
Re: (Score:2)
Maybe none of them is normally distributed, but if we take distributions of sample means of 50 berries, then those distributions might all be close to the normal distribution.
Yes, and? (Score:2)
What you're talking about is the distribution of the sample means of r1, r2, r3 respectively. Those are asymptotically normally distributed, but that's not what we're talking about here.
What we were talking about is whether: r1, r2, and r3 can all be normally distributed. The reason being that people investigating the size, weight, and surface area of berries may *assume* (appealing to the Central Limit Theorem) th
Re: (Score:2)
What we were talking about is whether: r1, r2, and r3 can all be normally distributed. The reason being that people investigating the size, weight, and surface area of berries may *assume* (appealing to the Central Limit Theorem) that the quantity they're investigating can be modeled adequately through a normal distribution,
But the Central Limit Theorem is a claim about the distribution of sample means as the sample size gets larger.
Re: No (Score:2)
Re: (Score:2)
As you say, there is the Central Limit Theorem (a whole bunch of them actually) that says that the Normal distribution is the asymptotic limit that describes unbelievably many averaging processes.
So it gives you a very valid excuse to assume that the value distribution of some quantity occurring in nature will follow a Normal distribution when you know nothing else about it.
If your sample distribution is non-normal and you're using tests that assume normality then you're fucked regardless of the central limit theorem. Anyway, the central limit theorem tells you that the means of repeated samples will be normally distributed, but this isn't usually what you're applying your test to. You're usually applying your test to a single sample and that may well be very non-normal (which is the point of the central limit theorem).
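To illustrate the distinction, a small sketch with made-up exponential data: the raw sample stays heavily skewed, while the distribution of sample means is much closer to normal.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# A heavily skewed population (exponential). One raw sample of 50 values is far
# from normal, but the distribution of the MEANS of many such samples is much
# closer to normal: skewness shrinks roughly as 1/sqrt(n).
one_sample = rng.exponential(1.0, 50)
sample_means = rng.exponential(1.0, (10_000, 50)).mean(axis=1)

print("skewness of one raw sample:", round(stats.skew(one_sample), 2))    # near the population value of 2
print("skewness of 10,000 means  :", round(stats.skew(sample_means), 2))  # near 2/sqrt(50), about 0.28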
Re: (Score:2)
"You're usually applying your test to a single sample and that may well be very non-normal."
No, usually you're applying your test to the average of a single sample, and since possible averages are always normally distributed (for a fair sample size), you can indeed use that to assess the likelihood that your one sample average is usefully close to the population average.
Re: (Score:2)
No, usually you're applying your test to the average of a single sample, and since possible averages are always normally distributed (for a fair sample size), you can indeed use that to assess the likelihood that your one sample average is usefully close to the population average.
But only if your sample is not biased because you determine the average based upon your sample. If the sample is badly distributed or biased in some way then your estimate of the population mean will be similarly biased and you will not be able to make meaningful inferences regarding the population mean. Problems are worse with small samples.
The same thing happens with parametric stats tests. e.g. with a t-test, the p-value it returns is only meaningful if the sample meets the assumptions of the test. If
Re: (Score:2)
"But only if your sample is not biased because you determine the average based upon your sample. If the sample is badly distributed or biased in some way then your estimate of the population mean will be similarly biased and you will not be able to make meaningful inferences regarding the population mean. Problems are worse with small samples."
You are almost terrifyingly confused; what you wrote above is nonsense. What I wrote earlier, "the likelihood that your one sample average is usefully close to the po
Re: (Score:2)
The diameter (goes with the radius, r), the surface area (goes with r^2), and the volume of berries (goes with r^3). They cannot all be Normally distributed at the same time, so assuming any of them is starts you out on shaky foundation.
So what? People are only trying to fit a normal distribution to one thing at a time. No one is trying to tell you what the best berry is. You can obviously fit a normal distribution to each one of those - the radius, the area, and the volume. I don't see the problem with that.
Re: (Score:3)
Argggh, you guys are all missing the point that the Central Limit Theorem is about the sampling distribution of the sample mean, i.e., the sample space for possible averages that you get as a result of your sampling process. (Or a proportion, equivalent to an average on booleans 0 or 1.) So you can always use this knowledge, for a fair sample size, to assess how likely it is that your sample mean is usefully close to the population mean.
What are some things that definitely have an approximately normal dis
Re: (Score:3)
The CLT is one of the most elegant and powerful results in all of mathematics, and can be used, quite appropriately, to justify normal models for all sorts of measurements. That being said, its usefulness has led to the dumbed-down idea of "the bell curve" being the appropriate model for all sorts of things where it's clearly not--I don't know how many times I've seen a normal curve superimposed on a histogram or kernel density estimation of data that are clearly non-normal. As another poster pointed out
Re: (Score:2)
That is because of the central limit theorem, (http://en.wikipedia.org/wiki/Central_limit_theorem), which indicated that for a large number of independent samples, it doesn't matter what the original distribution was, and we certainly can reliably use the normal distribution. It is NOT unfounded.
Emphasis is mine.
Actually, you are misstating the CLT, which does not work at all for distributions without finite mean or variance (which may well be the case for real-world experiments). And even if the variable we are measuring does have finite mean and variance, the speed of convergence is only possible to quantify in certain cases. So the shape you get from samples of size 1000 may look good to you because you are impressed by a bunch of zeroes, and may even work OK near the estimated mean, but when
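A small sketch of that failure mode, comparing a finite-variance case with the Cauchy distribution (which has no finite mean or variance); the sample sizes are arbitrary.

import numpy as np

rng = np.random.default_rng(0)

def spread_of_means(draw, n, reps=1_000):
    # 95% spread of the sample mean over many repeated samples of size n.
    means = draw((reps, n)).mean(axis=1)
    return np.percentile(means, 97.5) - np.percentile(means, 2.5)

for n in (10, 100, 10_000):
    normal_spread = spread_of_means(lambda size: rng.normal(0.0, 1.0, size), n)
    cauchy_spread = spread_of_means(rng.standard_cauchy, n)
    print(f"n={n:>6}  normal-mean spread ~ {normal_spread:7.3f}   "
          f"cauchy-mean spread ~ {cauchy_spread:7.1f}")
# The normal spread shrinks like 1/sqrt(n); the Cauchy spread does not shrink at
# all, because the finite-variance assumption behind the CLT is violated.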
Re: (Score:2, Informative)
Statistics does not, by any means, make that assumption. If it did, the entire field of statistics would have been completed by 1810.
Mediocre (actually, sub-mediocre) practitioners of statistics make that assumption.
It is true that many estimators tend to a normal distribution as the sample size gets large, but this is not the same as assuming that the data itself comes from the normal distribution.
Re: (Score:3)
No, statisticians certainly do not assume that. If everything in my field were normally distributed then my life would be a lot easier, but it's not, and we're aware that it's not.
Re: (Score:2)
The bad news is that it is getting harder and harder to sort the science reported in journals from the papers whose purpose is to generate or preserve revenue streams for the researchers (or the corporations for which they are agents).
Re: (Score:2)
Climate models are currently, at best, when treated as an ensemble (if you buy that as legitimate), skirting along the p = 0.05 level of significance in the validation period.
Pointing this out is considered trolling -- it probably offends some religious sensibilities.
Tightening the threshold as the article suggests would mean the model results are not "significant" (i.e., not reasonably distinguishable from natural variation -- note that I am not a "denier" and that I do accept that CO2 is a greenhouse gas e