Stats Science

Why P-values Cannot Tell You If a Hypothesis Is Correct 124

ananyo writes "P values, the 'gold standard' of statistical validity, are not as reliable as many scientists assume. Critically, they cannot tell you the odds that a hypothesis is correct. A feature in Nature looks at why, if a result looks too good to be true, it probably is, despite an impressive-seeming P value."
  • Oblig XKCD (Score:5, Informative)

    by c++0xFF ( 1758032 ) on Wednesday February 12, 2014 @04:55PM (#46232767)

    http://xkcd.com/882/ [xkcd.com]

    Even the example of p=0.01 from the article is subject to the same problem. That's why the LHC waited for something like 6 sigma before declaring the Higgs boson to be discovered. Even then, there's always the chance, however remote, that statistics fooled them.
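
    To make the xkcd point concrete, here is a quick toy simulation (my own, not from TFA or the LHC analysis): run 20 independent t-tests per "experiment" on pure noise at alpha = 0.05 and count how often at least one of them comes up "significant". The group sizes and repeat counts are arbitrary.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_experiments, n_tests, alpha = 2_000, 20, 0.05

    false_alarms = 0
    for _ in range(n_experiments):
        # 20 two-sample t-tests in which the null hypothesis is true by construction
        pvals = [stats.ttest_ind(rng.normal(size=30), rng.normal(size=30)).pvalue
                 for _ in range(n_tests)]
        false_alarms += any(p < alpha for p in pvals)

    print(f"at least one 'significant' result in {false_alarms / n_experiments:.0%} of experiments "
          f"(theory: {1 - (1 - alpha) ** n_tests:.0%})")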

    • P-hacking. (Score:4, Insightful)

      by khasim ( 1285 ) <brandioch.conner@gmail.com> on Wednesday February 12, 2014 @05:15PM (#46232981)

      From TFA:

      Perhaps the worst fallacy is the kind of self-deception for which psychologist Uri Simonsohn of the University of Pennsylvania and his colleagues have popularized the term P-hacking; it is also known as data-dredging, snooping, fishing, significance-chasing and double-dipping. "P-hacking," says Simonsohn, "is trying multiple things until you get the desired result" - even unconsciously.

    • Re:Oblig XKCD (Score:5, Informative)

      by xQx ( 5744 ) on Wednesday February 12, 2014 @06:20PM (#46233537)

      While I agree with the article's headline/conclusion, they aren't innocent of playing games themselves:

      Take their sentence: "meeting online nudged the divorce rate from 7.67% down to 5.96%, and barely budged happiness from 5.48 to 5.64 on a 7-point scale" ... Isn't that intentionally misleading? Sure, 0.16 points doesn't sound like much... but it's on a seven point scale. If we change that to a 3 point scale it's only 0.06 points! Amazingly small! ... but wait, if I change that to a 900,000 point scale, well, then that's a whole 20,571 points difference. HUGE NUMBERS!

      But I think they missed a really important point - SPSS (one of the very popular data analysis packages) offers you a huge range of correlation tests, and you are _supposed_ to choose the one that best matches the data. Each has its own assumptions, and will only give the correct p-value if the data satisfy those assumptions.

      For example, many of the tests require that the data follow a bell-shaped (normal) curve, and you are supposed to first test your data to ensure that it is normally distributed before using any of the correlation tests that assume normally distributed data. If you don't, you risk over-stating the correlation.

      If you have data from a Likert scale, you should treat it as ordinal (ranked) data, not numerical (i.e. the difference between "Totally Disagree" and "Somewhat Disagree" should not be assumed to be the same as the difference between "Somewhat Disagree" and "Totally Agree") - however, if you aren't getting to the magic p<0.05 treating it as ordinal data, you can usually get it over the line by treating it as numerical data and running a different correlation test (sketched below).

      Lecturers are measured on how many papers they publish, and most peer reviewers don't know the subtle differences between these tests, so as long as they see 'SPSS said p<0.05' and they don't disagree with any of the content of your paper, yay, you get published.

      Finally, many of the tests have a minimum sample size below which they should not be used. If you only have a study of 300 people, there's a whole range of popular correlation tests that you are not supposed to use. But you do, because SPSS makes it easy, because it gets better results, and because you forgot what the minimum size was and can't be arsed looking it up (if it's a real problem the reviewers will point it out).

      (Evidence to support these statements can be found in the "Survey Researcher's SPSS Cookbook" by Mark Manning and Don Munro. Obviously, it doesn't go into how you can choose an incorrect test to 'hack the p value', to prove that I recommend you download a copy of SPSS and take a short-term position as a lecturer's assistant)
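
      As a rough illustration of the parent's point (the data here are invented, not taken from the Manning and Munro book): generate two skewed 5-point Likert items, check normality before reaching for Pearson, and compare the p-values that Pearson (interval-scale, normality-assuming) and Spearman (rank-based, fine for ordinal data) give for the same pair of items.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(42)
      n = 300
      latent = rng.normal(size=n)
      # two coarsely binned Likert items loosely driven by the same latent trait
      item_a = np.digitize(latent + rng.normal(scale=1.5, size=n), [-1.5, -0.5, 0.5, 1.5]) + 1
      item_b = np.digitize(0.3 * latent + rng.normal(scale=1.5, size=n), [-1.5, -0.5, 0.5, 1.5]) + 1

      print("Shapiro-Wilk normality p-value for item_a:", stats.shapiro(item_a).pvalue)
      print("Pearson :", stats.pearsonr(item_a, item_b))    # assumes interval-scaled, normal-ish data
      print("Spearman:", stats.spearmanr(item_a, item_b))   # rank-based, appropriate for ordinal data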

      • Lecturers are measured on how many papers they publish, and most peer reviewers don't know the subtle differences between these tests

        Nobody really "knows the subtle difference between the tests" because nobody really knows what the actual distribution of the data is. In addition, the same way people shop for statistical tests, they shop for experimental procedures, samples, and all other aspects of an experiment, until they get the result they want.

    • by ceoyoyo ( 59147 )

      That's a different problem. Like this one, it's not a problem with p-values, it's a problem with people who don't know what a p-value is. The examples in the comic are NOT p-values for the experiment that was done. Properly calculated p-values do not have this problem because they are corrected for multiple comparisons.
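
      For the record, a minimal sketch of what such a correction looks like (my own toy numbers, nothing from the comic): twenty raw p-values from t-tests on pure noise, adjusted with Bonferroni and with Holm so the family-wise error rate stays near 5%.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)
      m = 20
      raw_p = np.sort([stats.ttest_ind(rng.normal(size=30), rng.normal(size=30)).pvalue
                       for _ in range(m)])

      bonferroni = np.minimum(raw_p * m, 1.0)                                     # multiply by the number of tests
      holm = np.minimum(np.maximum.accumulate(raw_p * np.arange(m, 0, -1)), 1.0)  # step-down variant

      print("smallest raw p  :", raw_p[:3].round(3))
      print("after Bonferroni:", bonferroni[:3].round(3))
      print("after Holm      :", holm[:3].round(3))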

      • by Rich0 ( 548339 )

        That's a different problem. Like this one, it's not a problem with p-values, it's a problem with people who don't know what a p-value is. The examples in the comic are NOT p-values for the experiment that was done. Properly calculated p-values do not have this problem because they are corrected for multiple comparisons.

        Agree completely, but the problem is that to an outside observer it is impossible to know how many comparisons were actually done.

        If you design an experiment to handle 20 comparisons and perform 20 comparisons you'll get meaningful data. However, that design will probably tell you that you need to collect 50x as many data points as you have money to pay for. So, instead you design an experiment that can handle one comparison, then you still perform 20 comparisons, and then you publish the one that showed

        • by ceoyoyo ( 59147 )

          It's also impossible to tell if the other guy made the whole thing up. Fraud is detected via replication. Generally though, it's pretty easy to detect people doing it inadvertently - they publish all their "p-values." I suppose if people actually get better at doing stats some of the inadvertent stuff will turn into harder to detect fraud.

          Clinical trials are actually required to be pre-registered with one of a few tracking agencies if they are to be accepted by the FDA and other similar agencies. There

          • by Rich0 ( 548339 )

            Clinical trials are actually required to be pre-registered with one of a few tracking agencies if they are to be accepted by the FDA and other similar agencies. There are a few problems, but it's much better than it used to be.

            My concern isn't with the trials that are pre-submitted, and then the results are submitted to the FDA. My concern is with the trials that are pre-submitted and then the results are never published.

            If you can do that, then there really is no benefit of pre-submission. Just pre-submit 100 trials, then take the 5 good ones and publish them.

            Granted, I don't know how many of those trials that don't get published actually pertain to drugs that get marketed. If a company abandons a drug entirely during R&D

            • by ceoyoyo ( 59147 )

              Human trials are incredibly expensive. You don't just do 100 and take the best one. Also, various people, including regulatory agencies, are wise to that. If you have a bunch of registered trials without results it's going to be looked at with suspicion.

              There have been cases of abuse, but pharma generally doesn't want to outright fake results. Developing drugs is expensive, but getting sued into oblivion is too.

        • Agree completely, but the problem is that to an outside observer it is impossible to know how many comparisons were actually done.

          And even if the researchers are being honest about the number of comparisons THEY did it is very likely that multiple people will be independently working on the same problem. If all of those people individually only publish positive results then you get the same problem.

      • Properly calculated p-values do not have this problem because they are corrected for multiple comparisons.

        You can't correct for multiple comparisons because you don't know about all the experiments other people have been doing; you'd have to know that in order to do the correction properly. It's very hard even to count the number of comparisons you have been doing yourself, because it's not just the number of times you've run the test.

        • by ceoyoyo ( 59147 )

          You don't need to correct for experiments done on other datasets. If it's multiple experiments on the same dataset, whoever is in charge of that dataset should be keeping track. Even then, you only need to correct for multiple tests of the same, or similar, hypothesis. That's mostly a problem when your hypothesis is "something happened," which it should never be, but it is all too often.

          • by Sique ( 173459 )
            But let's say you are a statistician working with hundreds of datasets and mining them for some interesting data. If you are looking for p < 0.05, about every 20th attempt will yield something "significant", even if all datasets are white noise, and you don't use any dataset twice.
            • by ceoyoyo ( 59147 )

              Then you're not a statistician. There's a reason data mining is a dirty word in science.

              Before you start working you need to have a hypothesis like "women are shorter on average than men." You then find or collect a dataset of height measurements from a sample of women and men and do a test on the means. That gives you a p-value, which is what you report. If you do it in two separate datasets, you get two p-values and you report both. You don't correct either one for multiple comparisons, and whoever i

              • by Sique ( 173459 )
                How do you tell, when you read the study, that the hypothesis came first and the result from the dataset came later?
          • You don't need to correct for experiments done on other datasets.

            Really? So you're saying that if I have a hypothesis that I'm desperate to prove, I can just keep trying one data set after another until I find one for which I get a statistically significant result and that p-value doesn't need to be corrected? Think that through. In fact, I can even test completely different hypotheses each time, only one hypothesis applied to only one data set ever, and things still fail in the same way.

            There is no correct

    • It's not a problem with being "fooled by statistics". If they applied the statistics wrong, or if there is something wrong with the underlying assumptions, six sigma is no better than two sigma. (Not saying that there is anything wrong with the Higgs experiment.)

      The only real protection against this sort of thing is to have many different research groups repeat an experiment independently and analyze it many different ways.

  • And this is why (Score:4, Insightful)

    by geekoid ( 135745 ) <{moc.oohay} {ta} {dnaltropnidad}> on Wednesday February 12, 2014 @04:55PM (#46232771) Homepage Journal

    it takes more than one study.
    There is a push to have studies include Bayesian Probability.

    IMHO all papers should be read be statisticians just to be sure the calculation are correct.

    • IMHO all papers should be read be statisticians just to be sure the calculation are correct.

      Slashdot doesn't offer a "too much confidence" rating.

      Also your 'calculation' is wrong: should be 'calculations'.

    • I'm a big fan of Bayesian techniques. However, statistical tests still have their place if they're not misused. The trouble is that they are deeply misunderstood and terribly badly used.

      All they do is tell you if a hypothesis is probably incorrect. You can use them to refute a hypothesis, but not support one.

      • Re:And this is why (Score:4, Informative)

        by ceoyoyo ( 59147 ) on Wednesday February 12, 2014 @08:39PM (#46234641)

        Other way around, and not quite true for a properly formulated hypothesis.

        Frequentist statistics involves making a statistical hypothesis, choosing a level of evidence that you find acceptable (usually alpha=0.05) and using that to accept or reject it. The statistical hypothesis is tied to your scientific hypothesis (or it should be). If the standard of evidence is met or exceeded, the results support the hypothesis. If not, they don't mean anything.

        HOWEVER, if you specify your hypothesis well, you include a minimum difference that you consider meaningful. You then calculate error bars for your result and, if they show that your measured value is less than the minimum you hypothesized, that's evidence supporting the negative (not the null) hypothesis: any difference is so small as to be meaningless.

        I am not a fan of everyone using Bayesian techniques. A Bayesian analysis of a single experiment that gives a normally distributed measurement (which is most of them) with a non-informative prior is generally equivalent to a frequentist analysis. Since scientists already have trouble doing simple frequentist tests correctly, they do not need to be doing needless Bayesian analyses.

        As for informative priors, I don't think they should ever be used in the report of a single experiment. Report the p-value and the Bayes factor, or the equivalent information needed to calculate them. Since an informative prior is inherently subjective, the reader should be left to make up his own mind what it is. Reporting the Bayes factor makes meta-analyses, where Bayesian stats SHOULD be mandatory, easier.
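
        To make the "minimum meaningful difference" idea above concrete, here is a sketch with made-up numbers (the delta of 0.5 and the sample sizes are my assumptions, not anyone's real study): compute the 95% CI for a difference in means and check whether it sits entirely inside the range of differences you decided, in advance, were too small to matter.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(7)
        a = rng.normal(0.0, 1.0, 200)           # control group (true effect is tiny)
        b = rng.normal(0.1, 1.0, 200)           # "treated" group
        delta = 0.5                             # smallest difference we decided would matter

        diff = b.mean() - a.mean()
        se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
        lo, hi = diff - 1.96 * se, diff + 1.96 * se
        p = stats.ttest_ind(a, b).pvalue

        inside = (-delta < lo) and (hi < delta)
        print(f"p = {p:.3f}; 95% CI for the difference = ({lo:.2f}, {hi:.2f}); "
              f"entirely within +/-{delta}? {inside}")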

        • If the standard of evidence is met or exceeded, the results support the hypothesis. If not, they don't mean anything.

          No: I disagree slightly. If you get a bad P value, then it means that the data is unlikely to have come from that hypothesis. If the data is sufficiently unlikely given the hypothesis, then this is generally read as meaning that the hypothesis is unlikely. That's often applied to the null hypothesis, e.g. there is no difference between X and Y, but some numbers computed show that it's unlikely

          • by ceoyoyo ( 59147 )

            "If you get a bad P value, then it means that the data is unlikey to have come from that hypothesis."

            Insignificant p-values don't mean anything beyond "my standard of evidence was not met." It's a common mistake. Suppose you get p=0.1. That's pretty universally considered non-significant. But what it actually means is that there's a 90% chance (assuming no prior information and no screwups) that the alternative hypothesis is true. That's a long way from "the data is unlikely to have come from that hypo

              • Insignificant p-values don't mean anything beyond "my standard of evidence was not met."

              Not at all. It's the probability that, given the model, a statistic of that value or greater will be obtained with that many samples. If the p value is really small, that means such a statistic would turn up in only about 1 in N runs.

              Once the P value gets small enough, one can consider it sufficiently unlikely that the null hypothesis generated the data that you no longer believe the null hypothesis to be credible.

              If the P-value i
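
              That "1 in N runs" reading can be made literal with a toy permutation test (the two groups below are invented): shuffle the group labels many times and count how often the shuffled, null-by-construction data produces a difference at least as large as the observed one.

              import numpy as np

              rng = np.random.default_rng(3)
              a = rng.normal(0.4, 1.0, 25)      # made-up "treatment" group
              b = rng.normal(0.0, 1.0, 25)      # made-up control group
              observed = a.mean() - b.mean()

              pooled = np.concatenate([a, b])
              hits, n_perm = 0, 10_000
              for _ in range(n_perm):
                  rng.shuffle(pooled)           # break any real group structure
                  if pooled[:25].mean() - pooled[25:].mean() >= observed:
                      hits += 1

              print(f"one-sided permutation p-value ~ {hits / n_perm:.3f}")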

    • No, all researchers should be able to pass graduate-level statistics courses.

      Yes, I realize that most of us would be back at flipping hamburgers or worse, end up going to law school. But to understand what you're doing, you really need to understand statistics.

      Einstein was basically a very good statistician.

      • No, all researchers should be able to pass graduate-level statistics courses.

        Yes, I realize that most of us would be back at flipping hamburgers or worse, end up going to law school. But to understand what you're doing, you really need to understand statistics.

        Einstein was basically a very good statistician.

        The null hypothesis is that the world would not be a better place if researchers had such advanced training in statistics.

        For example, there was no physical evidence favoring relativity for years after the theory was created. Qualification for research work should be intelligence, not the ability to do statistics.

    • by Rich0 ( 548339 )

      it takes more then 1 study.

      That only works if ALL the studies actually get published. If the labs only write up the studies with "interesting" results then there really is no difference between cherry-picking 1 trial out of 100 and cherry-picking 10 trials out of 1000.

  • by cold fjord ( 826450 ) on Wednesday February 12, 2014 @04:57PM (#46232789)

    There is no shortage of misleading statistics out there. It can be a discipline fraught with peril for the uninformed, and there are lots of statistics packages out there that reduce advanced tests to a "point and shoot" level of difficulty that produces results that may not mean what the user thinks they mean. I've read some articles showing no lack of problems in the social sciences, but the problem is bigger than that.

    I can't help wondering how much that plays into the oscillating recommendations that you see for various foods. Both coffee and eggs have gone through repeated cycles of "it's bad," "no, it's good," "no, it's bad," "no, it's good." I understand that at least some of it comes down to the aspect they choose to measure, but I can't help but wonder how much bad statistics is playing into it.

    • by geekoid ( 135745 ) <{moc.oohay} {ta} {dnaltropnidad}> on Wednesday February 12, 2014 @05:25PM (#46233041) Homepage Journal

      Not a lot.

      Eggs are a good example.
      They were 'bad' because they had high cholesterol.
      Science moved on, and it turns out there are different kinds of cholesterol, some 'good', some 'bad', so now eggs aren't as unhealthy as was thought.

      Same with many things.

      The media is the issue. It can't report science worth a damn.

      • by tlhIngan ( 30335 ) <slashdot&worf,net> on Wednesday February 12, 2014 @05:43PM (#46233231)

        Eggs are a good example.
        They were 'bad' because they had high cholesterol.
        Science moved on, and it turns out there are different kinds of cholesterol, some 'good', some 'bad', so now eggs aren't as unhealthy as was thought.

        Fats, too. It was deemed that fats were bad for you, so instead of butter, use margarine. Better yet, skip the fats, period. Bad for you.

        Of course, it was also discovered that hydrogenation had a nasty habit of flipping unsaturated fats between their geometric isomers - "cis" and "trans". And guess what? The "trans" form of the fat is really, really, really bad for you (yes, that's the same "trans" in trans fats). Suddenly butter wasn't such an unreasonable option compared to margarine, since margarine had to undergo hydrogenation.

        Not to mention the effort to go "low fat" has had nasty side effects of its own - the overuse of sugar and salt to replace the taste that fats had, resulting in even worse health problems (obesity, heart disease) than just having the fat to begin with.

        (And no, banning trans fats doesn't mean they ban "yummy stuff" - there's plenty of fats you can cook with to still get the "yummy" without all the trans fats.)

        • Not to mention the effort to go "low fat" has had nasty side effects of its own - the overuse of sugar and salt to replace the taste that fats had, resulting in even worse health problems (obesity, heart disease) than just having the fat to begin with.

          To that you can add lower bioavailability of various nutrients that are fat soluble.

        • Butter good.
          Salt good.
          Sugar good.
          Meat good.
          Flour good.
          Sedentary bad.

      • Eggs are a good example. They were 'bad' because they had high cholesterol.

        That was even worse than bad statistics. There were no statistics because there was no data. Once it was figured out that high cholesterol levels were bad, somebody just assumed that the cholesterol content of the foods you ate had a significant effect on your cholesterol levels. It doesn't. A bad guess became gospel for years. So much for scientific medicine.

        • So much for scientific medicine.

          You don't mean that we ... gulp ... know more about medicine now than we did 10 years ago? And that we ... teeth chatter ... might know more in another 10 than we do today?! That's it, no more Western medicine for me - it's sweat tents all the way.

          Seriously though, we make guesses based on current knowledge ... some turn out to be bad. Folks do some empirical work, stats show the guess was wrong, we move on. Or we use invalid stats, statisticians complain, we cl

          • by sjames ( 1099 )

            Evidence based medicine should never make a recommendation based on a guess. A guess should lead to a study which (once repeated) should lead to a recommendation.

            Remember the low salt craze? Turns out that it is very helpful for a small subset of high blood pressure sufferers and useless to the rest of us.

            That's why a bunch of people basically ignore medical/health recommendations entirely now. Too many flip-flops and most of them suggest we only eat food that tastes almost as good as the container it comes

            • Evidence based medicine should NEVER make a recommendation based on a guess. A guess should lead to a study which (once repeated) should lead to a recommendation.

              Don't be such an insufferably purist twonk. It's all guesses, your recommendation based on repeated studies (employing P-values no doubt) included. If it's evidence-based, it's guesses with some sort of evidentiary basis.

              Remember the low salt craze?

              You mean the one based on numerous repeated studies? For which see:

              • Tuomilehto J, Jousilahti
              • by sjames ( 1099 )

                The smoking thing was based on actual research and observation, not a guess. The same is true of the cures for those once life-threatening diseases. The big push to get everyone to give up salt was a wild-guess extrapolation from those studies you pointed out (and, it turns out, an unjustifiable extrapolation). Another of those unjustifiable extrapolations got millions to give up more or less harmless (in moderation) butter and consume very harmful trans fats instead. Actual research and observation eventuall

                • The smoking thing was based on actual research and observation, not a guess.

                  That research was very controversial you know. Before we interfere with people's freedoms, shouldn't we be ironclad certain? I mean evidence based medicine should never make a recommendation based on a guess.

                  If actually following the scientific method is excessively purist, what do you suggest?

                  There's no such thing as "the scientific method." That went out with Francis Bacon. Actual science is simply not that purist, that's

                  • by sjames ( 1099 )

                    That research was very controversial you know.

                    No, it actually wasn't. You fell for the industry shill research that didn't hold up to peer review.

                    I'm going to stop now since the rest of your post is so far reversed from fact that it feels like a subtle troll. Really, suggesting question the data rather than the theory was too far over the top. You gave yourself away before even getting to your suggestion that trepanation might have had valid science behind it.

                    • No, it actually wasn't. You fell for the industry shill research that didn't hold up to peer review.

                      Irony, my good man. Irony. (Would have thought my re-quoting you made that obvious). And ironic too, in the other sense, that you should have taken everything else that was serious (and factual) in that post as "reversed from fact."

                      Really, suggesting question the data rather than the theory was too far over the top.

                      Now that was deadly serious. To quote British astrophysicist Arthur S. Eddington, to the

                    • Oh and btw, I didn't "suggest" to "questions the data rather than the theory" either. I suggested to question the data rather than theory (note the lack of a definite article).
    • Mostly the problem with nutritional studies is the impossibility of doing large scale, long term, controlled trials. See http://www.nytimes.com/2014/02... [nytimes.com]
    • Unfortunately, scientists studying nutrition face an ethical conundrum. They feel they must publish (and publicize) preliminary results because it might save lives. Suppose there's fairly good (but not extremely strong) reason to think that eggs are bad for you. Shouldn't you publicize that result? If you don't, millions of people could die needlessly. If you wait until the results are really certain (or at least more certain), then you have denied people the benefit of preliminary information.

      Bear in min

    • by ceoyoyo ( 59147 )

      The biggest problem with the way most people do statistics is that they don't have adequate statistical reasoning skills. The problem is in the design of experiments and analyses, before you ever get to punching the buttons in your stats package of choice. The differences you get from punching the wrong button are really very minor compared to things that happen all the time, like drawing conclusions based on tests you didn't do (the difference of differences error is an excellent example: half of all hig
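
      A toy sketch of that difference-of-differences trap (effect sizes and group sizes are made up): with these numbers the test against zero tends to come out "significant" for group A but not for group B, yet the direct A-vs-B comparison - the only test that actually answers "do the groups differ?" - usually does not.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(5)
      effect_a = rng.normal(0.30, 1.0, 50)      # measured change in group A
      effect_b = rng.normal(0.18, 1.0, 50)      # measured change in group B

      print("A vs zero:", stats.ttest_1samp(effect_a, 0.0).pvalue)
      print("B vs zero:", stats.ttest_1samp(effect_b, 0.0).pvalue)
      print("A vs B   :", stats.ttest_ind(effect_a, effect_b).pvalue)   # the test people forget to run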

  • by sideslash ( 1865434 ) on Wednesday February 12, 2014 @04:59PM (#46232797)
    The world is full of coincidental correlations waiting to be rationalized into causality relationships.
    • by ceoyoyo ( 59147 )

      There's no such thing. A correlation test always comes with a p-value that gives you an idea of how likely your observations are to be a coincidence rather than a correlation.

      • But the standard tests for correlation only supply the correct P-value if the data satisfy certain conditions, such as normally distributed errors, no errors in the independent variable, constant standard deviation of errors, and so on.

        • by ceoyoyo ( 59147 )

          Any statistical test requires that you apply it appropriately. If you don't do so the result is called a "mistake," not a "coincidence" OR a "correlation".

            • The result might be a mistake, but that doesn't mean it will be called a mistake. Also, how well do introductory statistics texts explain this?

    • The world is also full of people who can't do math to save their life. Win-win.
  • Gold Standard? (Score:5, Interesting)

    by radtea ( 464814 ) on Wednesday February 12, 2014 @05:02PM (#46232819)

    That means "outmoded and archaic", right?

    I realize I have a p-value in my .sig line and have for a decade, but p-values were a mediocre way to communicate the plausibility of a claim even in 2003. They are still used simply because the scientific community--and even more so the research communities in some areas of the social sciences--are incredibly conservative and unwilling to update their standards of practice long after the rest of the world has passed them by.

    Everyone who cares about epistemology has known for decades that p-values are a lousy way to communicate (im)plausibility. This is part and parcel of the Bayesian revolution. It's good that Nature is finally noticing, but it's not as if papers haven't been published in ApJ and similar journals since the '90s with curves showing the plausibility of hypotheses as positive statements.

    A p-value is the probability of the data occurring given that the null hypothesis is true, which in the strictest sense says nothing about the hypothesis under test, only the null. This is why the value cited in my .sig line is relevant: people who are innocent are not guilty. This rare case, where there is an interesting binary opposition between competing hypotheses, is the only one where p-values are modestly useful.

    In the general case there are multiple competing hypotheses, and Bayesian analysis is well-suited to updating their plausibilities given some new evidence (I'm personally in favour of biased priors as well). The result of such an analysis is the plausibility of each hypothesis given everything we know, which is the most anyone can ever reasonably hope for in our quest to know the world (a toy version of this updating is sketched below).

    [Note on language: I distinguish between "plausibility"--which is the degree of belief we have in something--and "probability"--which I'm comfortable taking on a more-or-less frequentist basis. Many Bayesians use "probability" for both of these related but distinct concepts, which I believe is a source of a great deal of confusion, particularly around the question of subjectivity. Plausibilities are subjective, probabilities are objective.]
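
    As a back-of-the-envelope sketch of that updating (my numbers, and a deliberately crude two-hypothesis model): take the plausibility you assigned a hypothesis before the experiment, assume 80% power and alpha = 0.05, and ask how plausible the hypothesis is after a single "significant" result. Long-shot hypotheses remain long shots.

    def plausibility_after_significance(prior, power=0.8, alpha=0.05):
        """P(hypothesis true | p < alpha), by Bayes' rule."""
        return prior * power / (prior * power + (1 - prior) * alpha)

    for prior in (0.5, 0.1, 0.01):
        print(f"prior plausibility {prior:>4}: posterior {plausibility_after_significance(prior):.2f}")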

    • by geekoid ( 135745 )

      But publishing studies with Bayesian probability hasn't been the norm.
      It will be, and there is a move to do so.
      It's not fast, and it shouldn't be.

      "Many Bayesians use "probability" for both of these related by distinct concepts, which I believe is a source of a great deal of confusion"
      Sing it, brother!

      • by narcc ( 412956 )

        +5 Funny.

      • by ceoyoyo ( 59147 )

        It shouldn't be, and I hope it never will be. If you use a non-informative prior for your Bayesian analysis in most cases you're just doing extra work to get the same result. If you use an informative prior you're colouring your results with your preconceptions. The reader, or the author of a meta-analysis, is the one who should be doing the Bayesian analysis, using their own priors. By all means, report a Bayes factor to make the meta-analysis easier, but also report a p-value, which is a good metric f

    • To be fair, the reason (credible) researchers go to great lengths to control all possible influences except the one under study is the binary nature of P.

      Other than that nit-pick - Nice post (and your sig makes more sense now :), I offer an interesting article about stats, chaos, stock-markets, and fish [sciencenews.org] in appreciation.
  • Any researcher worth their salt states a p-value with enough additional information to understand whether the p-value is actually meaningful. Anyone who looks at a paper and draws a conclusion based solely (or largely) on a p-value, without thinking about how meaningful the results are from a clinical or real-world perspective, is being lazy or reckless.

    I guess there are quite a few insightful XKCD strips but this one seems most apt, here: http://xkcd.com/552/ [xkcd.com]

    • by tgv ( 254536 )

      Then there are very, very few researchers worth their salt. Even then, it has been shown that a .05 significance under ideal conditions has a chance of being a coincidence of about 1/3. If we add to that the number of errors in the assumptions, the experiments, the unpublished studies, etc., .05 means nothing. I found the work by Jim Berger et al. interesting: http://www.stat.duke.edu/~berg... [duke.edu]
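
      That "about 1/3" figure is easy to reproduce with the calibration Berger and colleagues derive (the 50:50 prior odds below are my assumption): the Bayes factor in favour of the null can be no smaller than -e*p*ln(p), so even at p = 0.05 the null hypothesis keeps a sizeable posterior probability.

      import math

      def min_posterior_null(p, prior_null=0.5):
          bf = -math.e * p * math.log(p)        # lower bound on the Bayes factor for the null (valid for p < 1/e)
          prior_odds = prior_null / (1 - prior_null)
          post_odds = prior_odds * bf
          return post_odds / (1 + post_odds)

      print(f"p = 0.05 -> P(null | data) >= {min_posterior_null(0.05):.2f}")   # about 0.29, i.e. roughly 1/3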

  • I learnt the uselessness of statistics as a guide to correctness when trying to reduce the effort required at Sudoku. I've since discovered the best way to win is not to play. Doesn't stop me trying, though!

  • p-value is just the probability the data/observations were the result of a random process. So a great p value like 0.01 says the results were not random. They do not confirm what made them non-random (i.e. theory).

    Epistemology is elementary, and often skipped by those who wish to persuade. "Figures do not lie, but liars figure." [Clemens]

    • You're using quite confusing/inaccurate terminology here. The p-value is the probability of observing a statistic that's at least as large (or extreme) as what has been computed from the sample under the assumption of the "null" or default hypothesis. p=0.01 means that if the null hypothesis is correct, then the probability of observing what you just observed "or worse" is just 1%. A low p-value does not mean that under the _alternative_ hypothesis your results are necessarily "non-random". Normally, the

  • A few folk here have commented using incomplete or inaccurate definitions of p-values. A p-value is the probability of finding new data as or more extreme than the data you observed, assuming a null hypothesis is true. A couple of salient criticisms not mentioned in the article are a) why should more extreme data be lumped in with what was observed and b) what if "new" data can't sensibly be obtained.

    In a less technical sense, what the article didn't get into so much is that there is a strong publication bias towa

    • by ceoyoyo ( 59147 )

      Um, no. Your criticisms don't make sense. You're falling into a misunderstanding that is perpetuated because theoretical statisticians are so careful about how they define things, particularly when Bayesians might be looking over their shoulders.

      A p-value is the probability that accepting your statistical hypothesis (rejecting the null hypothesis) would be an error. This is equivalent to saying that the p-value is the probability that, picking a random run out of many runs of your experiment, you'd expect

      • No, you are wrong, the op is correct. Especially: "A p-value is the probability that accepting your statistical hypothesis (rejecting the null hypothesis) would be an error." is wrong and "A p-value is the probability of finding new data as or more extreme as data you observed assuming a null hypothesis is true." is correct.
  • by DVega ( 211997 ) on Wednesday February 12, 2014 @05:30PM (#46233093)
    There is a classic article by Jacob Cohen [uci.edu] on this subject.

    Also there is a simpler analysis of the above article [reid.name]

  • "A p-value of 0.05 means there's a 5% chance that your paper is wrong. In other words, 1 in 20 papers is bullshit."
    • That's exactly what it means. This is why for the idea to be accepted or consensus to be reached, you need a lot more than one study.

    • "A p-value of 0.05 means there's a 5% chance that your paper is wrong. In other words, 1 in 20 papers is bullshit."

      This is complete bullshit. If you study something where H1 is true, then there is a 0% chance of being wrong if you report significant findings.

    • by zachie ( 2491880 )
      Wrong, it is much, much worse than that.

      Imagine a body of scholars continuously producing wrong hypotheses. They test all of them. Your teacher correctly pointed out that one in twenty will have a p-value < 0.05. But they write papers only off those! In such a scenario, 100% of the papers are wrong.

      In other words, this 5% chance of a paper being bullshit is only a lower bound.
      • Exactly. However, that's not a difficult problem to solve. What the Nature article fails to address is the real problem: it's not easy to publish papers that do nothing but confirm the findings of another paper.

        The article talks about how a researcher had his dreams of being published dashed once he failed to achieve a similar p-value upon attempting to reproduce his own research. This is bullshit. Journals should be selective, yes. They should be selective in terms of whether experiments have been ru

  • Correlation != Causation

  • At least in bioinformatics, the correction of p-values for multiple comparisons ("q-values") has been standard practice for quite a while now.
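
    For anyone who hasn't met them, here is a rough sketch of the false-discovery-rate idea behind q-values (Benjamini-Hochberg adjusted p-values here; Storey's q-value proper is a refinement of the same idea, and the input p-values below are invented):

    import numpy as np

    def bh_adjust(pvals):
        """Benjamini-Hochberg adjusted p-values, a common stand-in for q-values."""
        p = np.asarray(pvals, dtype=float)
        m = len(p)
        order = np.argsort(p)
        scaled = p[order] * m / np.arange(1, m + 1)
        scaled = np.minimum.accumulate(scaled[::-1])[::-1]   # enforce monotonicity
        adjusted = np.empty(m)
        adjusted[order] = np.minimum(scaled, 1.0)
        return adjusted

    pvals = [0.0001, 0.0004, 0.002, 0.03, 0.04, 0.045, 0.3, 0.6, 0.8, 0.9]
    print(np.round(bh_adjust(pvals), 3))   # the very small p-values survive; the borderline ones get pushed up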

    • by ceoyoyo ( 59147 )

      Please tell me they don't really call them 'q-values'?

      A p-value IS corrected for multiple comparisons. If you did multiple comparisons and you didn't correct it, it ain't a p-value. A good term for those would be "the result of my fishing expedition."

      • by tgv ( 254536 )

        But with what correction? There isn't one correction for multiple comparisons, and they all have their problems. Just go Bayesian instead.

        • by ceoyoyo ( 59147 )

          Bayesian statistics still needs multiple comparison correction, and it's usually more complicated. Bayesian stats lets you quantitatively combine the results from multiple tests of the same thing, usually in different datasets, i.e. different experiments testing the same hypothesis. If you're doing that with frequentist stats you don't need multiple comparison correction either.

          If you're testing multiple different things, usually in the same dataset, you need to do multiple comparison correction, Bayesian

    • At least in bioinformatics, the correction of p-values for multiple comparisons ("q-values") has been standard practice for quite a while now.

      But then your beta-error goes through the roof and you won't find anything. Wouldn't it be far more efficient to repeat the significant experiments?

  • Torturing the data (Score:4, Informative)

    by floobedy ( 3470583 ) on Wednesday February 12, 2014 @08:31PM (#46234589)

    One variant of "p-hacking" is "torturing the data", or performing the same statistical test over and over again, on slightly different data sets, until you get the result that you want. You will eventually get the result you want, regardless of the underlying reality, because there is 1 spurious result for every 20 statistical tests you perform (p=0.05).

    I remember one amusing example, which involved a researcher who claimed that a positive mental outlook increases cancer survival times. He had a poorly-controlled study demonstrating that people who keep their "mood up" are more likely to survive longer if they have cancer. When other researchers designed a larger, high-quality study to examine this phenomenon, it found no effect. Mood made no difference to survival time.

    Then something interesting happened. The original researcher responded by looking for subsets of the data from the large study, to find any sub-groups where his hypothesis would be confirmed. He ended up retorting that keeping a positive mental outlook DID work, according to your own data, for 35-45 year-old east asian females (p<0.05) - even if the p value was barely under 0.05.

    This kind of thing crops up all the time.
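
    A small simulation of that kind of subgroup fishing (the dataset is invented, not the cancer study above): the "treatment" does nothing at all, but slicing a null dataset into 40 subgroups almost always turns up at least one p < 0.05 somewhere.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(11)
    n = 2000
    treated = rng.integers(0, 2, n).astype(bool)
    outcome = rng.normal(size=n)                 # the outcome is pure noise
    age_band = rng.integers(0, 5, n)             # 5 made-up age bands
    ethnicity = rng.integers(0, 4, n)            # 4 made-up ethnic groups
    sex = rng.integers(0, 2, n)                  # 2 sexes -> 5 * 4 * 2 = 40 subgroups

    best_p = 1.0
    for a in range(5):
        for e in range(4):
            for s in range(2):
                mask = (age_band == a) & (ethnicity == e) & (sex == s)
                x, y = outcome[mask & treated], outcome[mask & ~treated]
                if len(x) > 5 and len(y) > 5:
                    best_p = min(best_p, stats.ttest_ind(x, y).pvalue)

    print(f"smallest subgroup p-value found: {best_p:.4f}")   # usually well under 0.05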

    • Whoops. The post got garbled because slashdot wrongly interpreted the less-than sign as an html tag opening, and I didn't escape it. (Which seems like a bug to me. The text "p<0.05" is obviously not the beginning of an html tag, because no html tags accepted by slashdot begin with 0.05). Anyway, the offending paragraph should say:

      Then something interesting happened. The original researcher responded by looking for subsets of the data from the large study, to find any sub-groups where his hypothesis would

  • Lies ... damned lies ... and statistics. The P value only tells you if there is statistical significance in the data, not whether your hypothesis is correct or incorrect.
