Scientists Propose To Raise the Standards For Statistical Significance In Research Studies (sciencemag.org) 137
sciencehabit shares a report from Science Magazine: A megateam of reproducibility-minded scientists is renewing a controversial proposal to raise the standard for statistical significance in research studies. They want researchers to dump the long-standing use of a probability value (p-value) of less than 0.05 as the gold standard for significant results, and replace it with the much stiffer p-value threshold of 0.005. Backers of the change, which has been floated before, say it could dramatically reduce the reporting of false-positive results -- studies that claim to find an effect when there is none -- and so make more studies reproducible. And they note that researchers in some fields, including genome analysis, have already made a similar switch with beneficial results.
"If we're going to be in a world where the research community expects some strict cutoff ... it's better that that threshold be .005 than .05. That's an improvement over the status quo," says behavioral economist Daniel Benjamin of the University of Southern California in Los Angeles, first author on the new paper, which was posted 22 July as a preprint article on PsyArXiv and is slated for an upcoming issue of Nature Human Behavior. "It seemed like this was something that was doable and easy, and had worked in other fields."
"If we're going to be in a world where the research community expects some strict cutoff ... it's better that that threshold be .005 than .05. That's an improvement over the status quo," says behavioral economist Daniel Benjamin of the University of Southern California in Los Angeles, first author on the new paper, which was posted 22 July as a preprint article on PsyArXiv and is slated for an upcoming issue of Nature Human Behavior. "It seemed like this was something that was doable and easy, and had worked in other fields."
Re:But only 56% of scientists agree with this (Score:5, Insightful)
P < 0.05 is used everywhere because that simply won't happen. Scientists who aren't statisticians care passionately only about their own topic, and that topic isn't statistics. If anyone tries to use something else, everyone, including reviewers, will demand they use what everyone else uses anyway.
Re: (Score:3)
And then people wonder why the credibility of published science has been called into question so much recently. :-(
No, we don't wonder, it's because of a lack of reproducibility. Well, that and political agendas. The problem is understood and agreed upon generally. Agreeing on a solution is where things go off the rails. Saying "This magic number is no good, we should use THIS MAGIC NUMBER" is what I have a problem with.
Re: (Score:3)
No, we don't wonder, it's because of a lack of reproducibility.
Well, of course, but part of that is because people unwisely equate statistical significance with a hypothesis being true or false. Suppose you run an experiment twice and you get similar raw data in both cases, but in one case your result is marginally significant at whatever level and in the other it falls just short. The situation didn't fundamentally change between the two experiments, but a regrettable number of people who don't understand that the whole statistical analysis is built around probabiliti
Shining some light (Score:5, Informative)
I'm a biologist, I don't understand P values [...]
Here's some light for that subject.
Suppose you make 20 measurements of rats in a maze and discover that 15 out of the 20 times they turn left on their first corridor junction. Is that significant?
We know that if the decisions were random we'd expect 10 out of 20, but we also know that there is variation in that number. Exactly 10 out of 20 is the single most probable outcome, but it's even *more* probable that some count other than exactly 10 out of 20 will occur.
So to see if the 15 out of 20 is significant, we can compare this outcome to random chance.
We can simulate 20 coin flips in a computer and then write down the number of heads versus tails. Then we do it again and write down the new results, and then do it again and again for a million rounds.
Tallying the results, we can then find the *probability* that 20 random coin tosses will equal 15 or more heads, and this will give us a way to compare the rat data with random chance. What percent of random tosses yield 15 or more heads?
This is the P-value in a nutshell: it's the probability that your measurements could be the result of chance.
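A minimal sketch of that simulation in Python (not from the original comment; it assumes a fair-coin null model and the 15-of-20 figure above):

    # Estimate the chance of >= 15 "lefts" in 20 trials if the rats were
    # really choosing at random (a fair coin as the null model).
    import random

    n_rounds = 1_000_000
    hits = 0
    for _ in range(n_rounds):
        heads = sum(random.random() < 0.5 for _ in range(20))
        if heads >= 15:
            hits += 1

    print("Estimated one-sided p-value:", hits / n_rounds)  # roughly 0.02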
Note that we can never be *certain* that the results are significant, only that they are unlikely to have arisen by chance. The significance threshold is chosen by convention depending on the outcome risks. For ordinary scientific studies, it's 5% (P < 0.05). If you're studying a new medicine, you might want to tighten that to 1% (P < 0.01) for safety. If you're exploring subatomic physics, where experiments are very difficult to reproduce, you might want something closer to P < 0.0000001 to be relatively certain.
The conventional value of 5% is often incorrectly attributed to Pearson. He said the 5% value makes the results worthy of further study, not that the 5% value makes the results significant.
Also of note: if everyone runs studies at P < 0.05, then on average 1 out of every 20 tests of an effect that isn't real will still clear the threshold by random chance, which means a substantial share of published "significant" findings are reporting random noise.
And of course, if your degree requires you to publish, or your tenure is based on your publishing history, there are ways to adjust the results to make the significance more likely.
(For example, you can record 8 different measurements of your rats. There are 8*7 = 76 possible pairs of measurements, so on average about 3 of those pairs will correlate to within 5%. If you want to publish a paper, this is one way to do it.)
Very few recent scientific papers are ever verified by independent replication, and many of those that have been re-examined turned out to be unreproducible.
This is leading people to lose faith in the scientific method.
Re: (Score:3, Funny)
Extra credit: The numbers 8 and 7 are multiplied together twenty separate times. In nineteen out of the twenty calculations the result is 56. However, one calculation gives a result of 76. What's the p-value for this trial?
Re: (Score:2)
Re: (Score:2)
What's the p-value for this trial?
Pentium?
Re: (Score:1)
This is the P-value in a nutshell: it's the probability that your measurements could be the result of chance.
This is why real scientists use Bayes' theorem and provide confidence intervals [xkcd.com].
Re: (Score:2)
[face palm] on mixing Bayes with confidence intervals.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
My understanding, gleaned from the p-value debate and other things like it, is that the trouble is you can't expect there to be a magic formula that checks for Scientific Goodness.
I used to want to understand things like p-values better; now that I do, I've concluded there's no substitute for just eyeballing the scatter. If you're worried you're fooling yourself, you could do it double-blind: hire someone who doesn't eve
Re: (Score:1)
My old biometry professor, a former graduate student of R.A. Fisher, gave this pearl of wisdom at the conclusion of the course's final lecture. It was billed as the magic equation that should be heeded by all experimental biologists:
C^1B(4).
In other words, see a statistician before performing the experiment.
p-value in a simpler nutshell (Score:2)
Another way to put it in simpler terms.
If you replicated this same study (rats in the labyrinth), then:
If the result (the 15 out of 20 rats) were only due to random chance, a p-value of 0.05 means such a result would occur in only 5% of attempts to replicate the experiment, i.e. only 1 in 20 attempts to replicate the rat-in-the-labyrinth study would give results as skewed as 15 rats out of 20 by pure luck.
For the proposed threshold p-value of 0.005, such results could happen by random chance in only 0.5% of attempts to replicate.
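For this particular example the tail probability can be computed exactly rather than simulated; a small sketch (assuming the fair-coin null and the 15-of-20 figure from the thread, and that scipy is available):

    # Exact one-sided tail probability of >= 15 lefts out of 20 under a fair-coin null.
    from scipy.stats import binom

    p_value = binom.sf(14, 20, 0.5)   # P(X >= 15) = 1 - P(X <= 14)
    print(p_value)                    # about 0.021: significant at 0.05, not at 0.005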
Re: (Score:2)
This is the P-value in a nutshell: it's the probability that your measurements could be the result of chance.
Quoted from http://fivethirtyeight.com/fea... [fivethirtyeight.com]
A common misconception among nonstatisticians is that p-values can tell you the probability that a result occurred by chance. This interpretation is dead wrong, but you see it again and again and again and again.
Re: (Score:2)
The difference between "How likely is this result assuming the Null Hypothesis is true" and "This result is X% likely to be random chance" is very, very slight when it comes to simply deciding whether or not a study's results merit further inquiry. So yes it's a misconception, but it's a misconception that's so microscopically close to the actual truth that it's alright for the layman or average undergrad to rely on it.
The real problem is that we're using P-values all over the place for bullshit like the so
Re: (Score:1)
People aren't losing faith in the method, they have lost faith in those purporting to follow the method.
Re: (Score:2)
In your example, you are assuming the coin lands exactly evenly on both sides (50% heads, 50% tails). If, in reality, a coin flips 51% heads and 49% tails (due to, say, the extra weight of the head side), then your p-value would not be accurate.
Re: (Score:2)
This is leading people to lose faith in the scientific method.
This is leading people to lose faith in scientists.
Also the constant pushing of political agendas.
Scientists have only themselves to blame. What makes people angry is that those same scientists go around acting like they're the oracles of Truth and intelligence just because they put on a stereotypical white lab coat once or published a paper.
Having your results reproduced several times by several other research groups should be the gold standard.
Using your results to produce something should be the go
Re: (Score:2)
Even if everyone is completely honest and knowledgeable, because of the publication bias, the significance is way off. Just keep doing experiments and eventually you will get a "significant" result. If we knew the number of experiments done, we could control for this problem, but in most fields this is impossible. Imagine a
Re: (Score:2)
While I appreciate an intuitive explanation, I don't think it's as easy as you suggest, especially for someone practicing in the field as opposed to someone doing homework.
So the null hypothesis is a 50/50 coin flip with IID Bernoulli trials. Let's say the result of the experiment is R L L R L L L L L L R R L L L R L L R L? What is the probability of this data given the null hypothesis is true? Well the probabil
Re: But only 56% of scientists agree with this (Score:1)
Get a copy of "Statistical Power Analysis for the behavioral Sciences" by Jacob Cohen.
He spent his career trying to understand how to reliably measure an "effect" on something. Understanding effect sizes (dimensionless) and the number of samples you need to reliably test for an effect of a particular size is what this book is all about.
Re: (Score:1)
Yes and no.
Yes, in principle you are right: p-values should ideally be tailored to the application. The higher the negative consequences of a false result, the lower the p-value you need.
On the other hand, and more importantly, no: the main problem is that most researchers in various non-mathematical fields seem to have no clue how probability and statistics work, so they are not able to make that judgement call. Setting a higher standard should improve the quality of results overall - yes, it will lead to fewer re
Re: (Score:2)
Re: (Score:2)
Glad I checked first, I was going to link to another 538 article:
http://fivethirtyeight.com/fea... [fivethirtyeight.com]
Re: (Score:2)
Are you sure you're a biologist and not a gardener?
P-values are undergraduate level statistics (Score:2)
I'm a biologist, I don't understand P values, but I am aware that they shouldn't be the gold standard
It worries me when I read about people doing science who don't understand basic statistics. This is undergraduate level stuff and it's not terribly difficult to wrap your brain around. Anyone smart enough to be a professional biologist should be able to handle P-values without difficulty.
Scientists who aren't statisticians care passionately about only their topic and it isn't statistics.
Whether you are passionate about statistics or not is irrelevant. You are advocating mathematical illiteracy because some people aren't "passionate" about math? I'm not passionate about grammar but I recognize its impor
Re: (Score:2)
Ideally scientists in all the different fields would use the statistics that make the most sense for their specific study
I think astronomy is a great example of this. You can spend years trying to find similar enough observations to compare, and often they're just not close enough to draw statistically significant (low-p-value) conclusions from. It doesn't mean the observations are worthless - they may well hint at some new concept. But it takes decades to really pin things down, because the universe is a very large place, and our technology still needs time to improve. The alternative would be to never publish astronomical research, because it doesn't me
Re: (Score:2)
Ideally scientists in all the different fields would use the statistics that make the most sense for their specific study
How about: researchers should engage a staff statistician to look at their study, develop a plan for what statistics to use, and sign off that the statistical bar of significance used is the right one for this kind of experiment and data?
Either that, or create a rubric that uses a bunch of Yes/No questions and a few numerical ranges to decide how statistical significance shal
Re: (Score:1)
Re: (Score:1)
People who didn't publicly support Hillary were ostracized. People who supported someone else were brutally beaten in the streets, some to the point of being hospitalized. How many people REALLY supported her?
Re: (Score:1)
Re: (Score:2)
Re: But only 56% of scientists agree with this (Score:1)
This. We need to ignore facts.
Six-sigma! (Score:4, Funny)
Re:Six-sigma! (Score:5, Insightful)
Make it Six Sigma
That would eliminate many false positives, as well as eliminating nearly all true positives. Of course, this will do nothing to reduce flawed studies caused by reasons other than statistics, such as non-representative sampling (e.g.: most mouse studies use only male mice), poor experiment design, shoddy data gathering, sponsorship bias, and outright fraud.
But, the cost of clinical studies would only increase by an order of magnitude, so what do we have to lose?
Re: (Score:2)
I bet they're white & middle class too.
Re: (Score:2)
Maybe they're not asking for directions in the maze.
Re:Must be tenured (Score:5, Insightful)
Nah. He just wants to eliminate 95% of the competition.
This won't fix anything (Score:5, Insightful)
There's a trade-off between sensitivity and specificity. If you increase the threshold for "significance", you reduce the power to discover a significant effect when it truly does exist.
And a major part of the problem with scientific studies is that they are already underpowered. According to conventional wisdom, ideally, scientists should strive for a power of about 80% (i.e., an 80% chance of detecting an effect if it truly exists), but very few studies actually achieve power of this level. In many fields, the power is less than 50% and sometimes much less.
Underpowered studies result in two major problems:
1) Most obviously, an underpowered study results in a greater number of FALSE NEGATIVES: you fail to find a true effect. Either you publish your incorrect result of no effect (and why should we consider published false positives to be any worse than published false negatives?), or you don't publish the study at all because you couldn't reach significance. The latter exacerbates the "file-drawer effect" and also results in wasted research dollars because the results are never published.
2) Somewhat counterintuitively, underpowered studies are often also more likely to result in FALSE POSITIVES. This is because, when your power to detect a true effect is low, and if you test a large number of effects that are unlikely to be null, most of the hypotheses that you say are "significantly" non-null will actually be false positives. We would say that the "false discovery rate" tends to be very high when the power is low.
Reducing the level of significance will do little to address these problems, and in some cases may even exacerbate the problem.
The key is *to move away from the binary concept of "significance" altogether*. It's obviously artificial to have an arbitrary numerical cutoff for "matters" vs. "doesn't matter", and this is not what Ronald Fisher intended when he popularized the p-value or developed the concept of "significance".
What we should be doing is measuring and reporting effect sizes along with their credible intervals, while using priors that are based on our real state of knowledge. In other words, we should be doing Bayesian statistics.
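As a rough numeric illustration of point 2 above (the numbers are made up for the sake of the example, not taken from any study): when only a small fraction of tested effects are real and power is low, most results that cross the threshold are false discoveries.

    # Hypothetical false-discovery-rate arithmetic: of all results that cross the
    # significance threshold, what fraction are false positives?
    def false_discovery_rate(alpha, power, prior_true):
        false_pos = alpha * (1 - prior_true)   # truly-null hypotheses that pass
        true_pos = power * prior_true          # real effects that pass
        return false_pos / (false_pos + true_pos)

    # Example: 10% of tested effects are real.
    print(false_discovery_rate(alpha=0.05, power=0.8, prior_true=0.1))  # ~0.36
    print(false_discovery_rate(alpha=0.05, power=0.2, prior_true=0.1))  # ~0.69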
Re: (Score:2)
A large number of results that are likely to be null, I mean.
Re: (Score:2)
I think you answered your own question in (1) - Negative results rarely get published, and so false negatives rarely propagate beyond the researchers involved. False positives are more of a problem specifically because they are far more likely to misguide others.
Now, the fact that negative results rarely get published is a whole different problem...
As for (2) - I guess I just don't see how it's counterintuitive - isn't the entire problem with an underpowered study the fact that the real results are likely
Re: (Score:2)
Actually, I don't think that example undermines the probability of false negatives/positives themselves - just their impact on the results. Rather, it's demonstrating the related statistical property... whose name I forget... that the accuracy of the test alone doesn't directly correspond to the likelihood that the results are accurate; you also have to consider the actual prevalence within the population.
e.g. - if a medical test for X is 99% accurate (for both positives and negatives), but only 1 in ten th
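Completing that arithmetic under the assumptions stated in the comment (99% accuracy in both directions; the truncated prevalence is taken here as 1 in 10,000, which is an assumption):

    # Base-rate example: a 99%-accurate test for a condition that only
    # 1 in 10,000 people actually have.
    prevalence = 1 / 10_000
    sensitivity = 0.99   # P(test positive | has condition)
    specificity = 0.99   # P(test negative | no condition)

    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    print(true_pos / (true_pos + false_pos))   # ~0.0098: under 1% of positives are real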
Re: (Score:2)
Bayesian statistics is all fine and good, but prior selection is an art. Most biologists have trouble with the frequentist approach (beyond what some software tool produces automagically). Do you think they will suddenly be able to specify a proper prior? I would guess that 99% of papers will use Jeffreys (uninformative) priors. And the vast majority of the remainder will be junk, violating, oh I don't know, causality? Good luck with that proposal.
Re: (Score:2)
How does one control this in practice? Let's say I want to compare means. Do I need to assume the difference is greater than X to lower bound my power (modulo a bunch of assumptions.) I don't really see this in practice.
What this guy said, times 1000. (Score:4, Funny)
What this guy said, times 1000.
According to conventional wisdom, ideally, scientists should strive for a power of about 80000% (i.e., an 80000% chance of detecting an effect if it truly exists), but very few studies actually achieve power of this level. In many fields, the power is less than 50000% and sometimes much less.
What this guy said, times 1000. (Score:5, Funny)
What this guy said, times 1000.
I think we should standardize around "What this guy said, times 10,000" to make sure the effect is truly significant.
Re: (Score:2)
Is this just scientists arguing over math (Score:1)
Re: Is this just scientists arguing over math (Score:1)
I'm not sure what climate science has to do with this. It was published in a Psychology journal which is firmly in the realm of the social/biological sciences. Climate scientists have their own basket of statistical issues they have to deal with.
Not convinced this will help (Score:5, Interesting)
I'm not convinced this will help. There are a couple of issues here. Often, the experimental design can be changed, like how certain variables are controlled for, to get a p-value that's below the threshold. The other problem is that the p-value is sensitive to the sample size: if you want a lower p-value, increase the sample size. In many cases, p-values aren't a good way to show whether a result is useful or not.
I'm a meteorologist and I research severe thunderstorms. Let's say that I want to test whether a particular variable is useful in discriminating between tornadic and non-tornadic supercells. One approach might be to calculate the mean of that variable for tornadic supercells and the mean for non-tornadic supercells. The null hypothesis is that the means of the two samples are the same, and I calculate a p-value. If the sample size is large enough, that is, if I've included enough supercells, I can make even very small differences in the means appear statistically significant.
A better approach is to use that variable as a predictor and have two data sets -- a training data set and a testing data set. I then fit a function on the training data set that classifies storms, using the variable as a predictor of whether a storm will be tornadic or not. Then I evaluate it on the testing data set, and the metric of success is how accurately the variable predicts (in terms of hits, misses, and false alarms) whether a storm will be tornadic or not. This is better because simply increasing the sample size isn't going to manufacture a statistically significant result.
Normally, some kind of baseline is chosen, and you want to show that your method performs better than the baseline. Of course, the problem is that you have a lot of flexibility in how to choose this baseline, and reviewers still need to be careful in how they evaluate work. For example, let's say that I cite a paper saying that climatologically, 20% of supercells are tornadic. I could randomly guess whether a supercell is tornadic based on that 20% probability and use that as my baseline. If my work is useful, hopefully I outperform random guessing based on climatology.
This isn't the best way, though, because we know of several variables that are useful in predicting whether supercells will be tornadic or not. A better baseline would be to include variables that are known to be useful and then test whether the additional variable adds skill or not. It also helps to have some physical explanation why a particular variable would affect whether a supercell is tornadic or not.
There are cases where p-values are useful, but it's also very easy to abuse them. There's no substitute for vigilant reviewers who can spot misuses of statistics. There's nothing magical about a p-value of 0.05 or 0.005. I have no problem with p-values being presented, but I think a better approach would be to require that papers include more than p-values to demonstrate that a result is significant. I've described one such approach above that I use in my own research.
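A minimal sketch of the train/test evaluation described above, with entirely made-up placeholder data and hypothetical variable names (it is not the poster's actual method or code):

    # Evaluate a candidate storm variable as a predictor of tornadic vs.
    # non-tornadic supercells on a held-out test set.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import confusion_matrix

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 1))             # placeholder predictor values
    y = (rng.random(500) < 0.2).astype(int)   # ~20% tornadic, as in the example

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
    model = LogisticRegression().fit(X_train, y_train)
    pred = model.predict(X_test)

    # With random placeholder data the predictor has no skill, so expect mostly misses.
    tn, fp, fn, tp = confusion_matrix(y_test, pred).ravel()
    print("hits:", tp, "misses:", fn, "false alarms:", fp)

With real data, these counts (or a skill score built from them) would be compared against the climatological or multi-variable baseline the poster describes.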
Re: (Score:2)
Personally, I prefer the "successful predictions" measure of validity.
Re: (Score:2)
A truly scientific post on slashdot.
Re: (Score:2)
I couldn't agree more. The p-value is a very specific thing, and by itself isn't something that can really say whether a result is valid or not. I've done my share of analysis for folks, and seen the results of others' analyses, and I know that in many cases you can statistically prove just about anything you want to prove, and in many cases there is an agenda that is more akin to "we're trying to prove X," where X is the desired outcome.
It is all about the data and the methodology, more than the statistical math. Cherry p
Re: (Score:2)
Small differences in the mean can be statistically significant. And yes you need large sample sizes to show this. Not sure what you mean by appear...
Re: (Score:1)
Averages do not comprise the whole of statistics.
Increased costs for drugs (Score:2)
This will mean that big pharma will have to run an order of magnitude more studies until they can find the one study which can be published because it shows a positive correlation.
[yes, I know statistics don't really work that way]
Re:Increased costs for drugs (Score:5, Interesting)
This will mean that big pharma will have to run an order of magnitude more studies until they can find the one study which can be published because it shows a positive correlation.
[yes, I know statistics don't really work that way]
Actually they kind of do!
A tactic that Pharma companies have pulled many times in the past is to try to keep generic drugs off the market by showing that they are not equivalent to the proprietary product. They do this by running a couple dozen animal studies, with the animals being given the two different products and various physiological parameters being monitored. When one of these parameters is found to differ between the two drugs at p < 0.05, they submit the result to the FDA declaring that the two drugs are not equivalent in their effects (the parameter, of course, has nothing to do with the drug's actual pharmacological effect).
Now, with this standard, they will have to run 200 or so tests to find one that reaches p < 0.005.
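Rough arithmetic behind those figures (a sketch assuming independent tests of parameters with no real difference; the per-test rates are just the nominal thresholds):

    # Chance of at least one spurious "significant" difference among n independent
    # tests of parameters that don't actually differ, plus the expected number of
    # tests needed per false positive.
    def p_any_false_positive(n, alpha):
        return 1 - (1 - alpha) ** n

    print(p_any_false_positive(24, 0.05))    # ~0.71 for a couple dozen tests at 0.05
    print(p_any_false_positive(200, 0.005))  # ~0.63 for ~200 tests at 0.005
    print(1 / 0.005)                         # expected tests per false positive: 200.0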
Fisher's comment (Score:5, Informative)
If you want reproducibility, well then, require reproducibility.
Fisher, the inventor of p-values, said this about p-values:
>>"[We] thereby admit that no isolated experiment, however significant in itself, can suffice for the experimental demonstration of any natural phenomenon. In order to assert that a natural phenomenon is experimentally demonstrable we need, not an isolated record, but a reliable method of procedure. In relation to the test of significance, we may say that a phenomenon is experimentally demonstrable when we know how to conduct an experiment which will rarely fail to give us a statistically significant result" (Fisher, 1960, pp. 13-14).
Let's raise the standard for online journalism (Score:2)
How many is a "mega-team"? A million, or one million, forty-eight thousand, five hundred and eighty six?
It appears to be seventy-two.
Re: (Score:2)
It appears to be seventy-two.
Is this a statistically valid sample size for P < 0.005? Have they repeated the study with different study teams to see if the results are repeatable?
Unlikely to change much (Score:3)
This would just provide a new target for the p-hackers [wikipedia.org].
Show us the proof! (Score:2)
"It seemed like this was something that was doable and easy, and had worked in other fields."
So where are the studies that "prove" this? Oh, and they'd better have a significance of 0.005 or better.
lowering the P-value won't help. (Score:5, Informative)
The problem with current research in semi-soft sciences like biology and medicine is that the scientists use this p-value wrong.
If you suspect a glass of wine a day will lower the chances of heart disease, take 1000 volunteers, roll a die, and tell half of them not to have that wine a day and the other half to please drink one glass of wine a day. Then you wait two years and evaluate the incidence of heart problems in the two groups. That's where a 0.05 p-value is acceptable. (In practice, telling people to suddenly stop or start drinking is not going to go well.)
Things become problematic when you suspect "something we can measure may be related to this disease" (e.g. sarcoidosis), take 200 patients and 200 healthy people, and then measure 200 parameters in each of the 400 blood samples... Even if there is no real difference to find, you'll see about 1/20th of the parameters -- roughly 10 of them -- that DO seem to be different (p=0.05) between the two groups.
In the case at hand, one or two measurable parameters ARE different in the patient group, so you'll have a better than 95% chance of finding those. Of the other 198 parameters, about 1/20th will turn up as false positives, for a total of almost 12 publishable results.
Should you want to increase your chances of finding these publishable results, the sample size needs to be relatively small. The group of 200 patients and 200 healthy people might already be too big to get enough spurious results. Even if they don't do this consciously, the scientists will quickly be able to optimize their sample size to find publishable results.
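A rough simulation of that scenario (made-up numbers; here all 200 parameters are pure noise, so every "hit" is a false positive):

    # 200 patients vs. 200 controls, 200 blood parameters with no real group
    # difference. Count how many t-tests come out "significant" anyway.
    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(1)
    patients = rng.normal(size=(200, 200))   # rows: people, columns: parameters
    controls = rng.normal(size=(200, 200))

    pvals = ttest_ind(patients, controls, axis=0).pvalue
    print("parameters 'significant' at 0.05: ", int((pvals < 0.05).sum()))   # ~10
    print("parameters 'significant' at 0.005:", int((pvals < 0.005).sum()))  # ~1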
When I was a freshman in 1985, some guy asked me to help him put his research in the computer. He had formulated 50 or so questions and predicted boys would answer differently than girls. So he went into a classroom, interviewed 30 boys and girls and put his results in the computer. Of course the computer told him there were several significant differences between boys and girls. Some of them real (do you like to play with trains? Dolls?) some of them not (I don't remember the example).
The other example is more recent. A Dutch doctor got her PhD with (among other things) the described sarcoidosis research. But my run-ins with the subject are very limited simply because I don't move in those circles. This is way more widespread than just the few examples that I encounter personally.
Then people try to "fix" this by proposing the wrong solutions.
The research: "can we find a parameter that allows us to differentiate between the two groups" is very important as well. But you have to do your research in the right way. Take 100 patients and 100 healthy people and find the parameters that seem to make a difference. NOW you go into the second half of the research with a hypothesis: "this parameter is important" and verify your claim. Now the p=0.05 is acceptable. (a 5% chance that you're wrong, as opposed to a 95% chance your'e full of shit).
The current problem with the degree factory (Score:5, Insightful)
There are a few tensions here that I think may be causing this: (a) publish or perish - if it looks reasonable enough, publish, because that's where your next job comes from; (b) poor statistical training - on both the authors' and reviewers' sides; (c) unwillingness to fund or publish work that reproduces previous results - a publisher-created publication bias; (d) the generally high cost of patient-centred biomedical research, meaning you usually have low sample numbers; (e) the unwillingness in some disciplines to get formal statistical input.
What are the potential solutions? If there were an unrestricted money pool you could recruit adequately (n > 10,000) for each study, but the money is not there, and there are some very rare diseases around. Better statistical training would be ideal, and there has been a push towards Bayesian analysis; I would think that, as with most statistical tools, someone will eventually find a way to use them inappropriately. Self-publishing could be an option: I've seen some horrifically bad peer-reviewed articles (& predatory journals! [wikipedia.org]), but there is an ethical tension between publishing without review, which could just flood the literature with absolute garbage that is difficult to sort through, and actual proper peer review. Maybe something like Arxiv [arxiv.org] for biomedical science, although there would be a lot of resistance to it, I suspect.
I don't hold out much hope for a quick solution to this, as there are a lot of vested interests, and people keep using the best newfangled statistical methods they've learned. I even reviewed a paper recently, with multiple authors from a big university, where I just shook my head at the amount of statistical fudging that took place: the authors had imputed [wikipedia.org] about 80% of their primary predictor variable for an outcome, and then came up with a conclusion based on the imputed data. I was amazed that this is actually allowed nowadays. While this article is good, some of the authors have been banging on about the issue for some time without much change.
policing good intentions (Score:3)
Most of the comments I skimmed are missing the point.
The real problem is that even scientists with the best training and the best intentions wind up committing a certain amount of p-hacking subconsciously. Just a simple data exploration to decide post hoc whether any collected data is corrupted or implausible, and you've already slithered one toe across the p-hacking line.
When p-values gate publication, and publication gates promotion, you create a severe moral hazard where many of the scientists you end up promoting lie on the bottom half of the curve in self-policing their accidental p-hacking. The guy with the penchant to do slightly more irregular experiments, which require slightly more data cleanup, seems to get slightly more published results. Ba da boom.
p=0.005 would put a pretty big crimp in this effect.
Of course it doesn't solve the larger problem. But good golly, first things first.
We also know from replication efforts that p=0.05 is allowing far too much crap to float over the gate. p=0.005 probably gets us closer to the crap level we naively assumed we'd get when we originally rallied around p=0.05.
Probably the increased use of computers hasn't helped matters: even accidental p-hacking with pencil and paper is hard work.
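A small simulation of that "one toe across the line" effect (a hedged sketch, not modeled on any particular study): for each null dataset the analyst chooses, after looking at the data, between the raw analysis and one with the most extreme point in each group dropped, and keeps whichever p-value is smaller.

    # How much does one innocent-looking post-hoc "data cleanup" choice inflate
    # the false-positive rate when there is no real effect at all?
    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(2)
    n_experiments, n_per_group = 20_000, 30
    hits = 0
    for _ in range(n_experiments):
        a = rng.normal(size=n_per_group)
        b = rng.normal(size=n_per_group)
        p_raw = ttest_ind(a, b).pvalue
        # "cleanup": drop the most extreme observation from each group, then retest
        a_clean = np.delete(a, np.argmax(np.abs(a)))
        b_clean = np.delete(b, np.argmax(np.abs(b)))
        p_clean = ttest_ind(a_clean, b_clean).pvalue
        if min(p_raw, p_clean) < 0.05:
            hits += 1

    print(hits / n_experiments)   # comes out above the nominal 0.05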
Re: (Score:2)
very good points.
Significant Paper (Score:1)
Ioannidis gives formula that result is correct, PPV (Score:1)
The real answer... (Score:2)
... is not in changing the epsilon value of P
The real answer is in *requiring* 2 things:
Think about what 0.005 means (Score:2)
Re: (Score:2)
In the 50s there were places where going outside without a gas mask was dangerous. Beijing is a place where it's dangerous to go outside without a gas mask. They were right.
Re: (Score:2)
Citizen. It appears that you are using an older version of the Newspeak Dictionary. Off to Room 101 with you.
Re: (Score:2)
You rarely have sample sizes large enough in the biological or social sciences to do anything at a "six sigma" level.
Crying about how unfair a standard is to your chosen methods isn't much of a persuasive argument. Some might argue that the standard should be inconvenient for the lowest common denominators. Some might even argue that the standard should be inconvenient for everybody.
Re: (Score:2)
"Six sigma has nothing to do with science and research"
Apparently you know nothing about this, so let's educate you on this a bit.
Six-Sigma is used at CERN. So fucking YES, it is most certainly used in science.
Perhaps you should get a job in an actual scientific field.