Psychologists Strike a Blow For Reproducibility

ananyo writes "Science has a much publicized reproducibility problem. Many experiments seem to be failing a key test of science — that they can be independently verified by another lab. But now 36 research groups have struck a blow for reproducibility, by successfully reproducing the results of 10 out of 13 past experiments in psychology. Even so, the Many Labs Replication Project found that the outcome of one experiment was only weakly supported and they could not replicate two of the experiments at all."
  • Good news, everyone! (Score:2, Interesting)

    by Anonymous Coward

    We ought to be encouraging this sort of thing.

  • Psychology (Score:5, Funny)

    by Anonymous Coward on Tuesday November 26, 2013 @10:15PM (#45534539)

    If only Psychology was a science.

    • is to.
      • by Anonymous Coward

        But isn't really able to hold on to much of the scientific method because there's so much we don't know.

        Alchemy isn't a science, but it led to chemistry which was.

        Astrology isn't a science, but it led to astrophysics which was.

        Because the methods Alchemy and Astrology used to get BETTER predictions were the nascent scientific method, and keeping what worked and ejecting what didn't was part of that method (which a lot of social science doesn't do, worse luck). And the remains were the germs of the science tha

      • by mcgrew ( 92797 ) *

        is to

        science as a virgin is to a hooker.

        (I assume you were looking for a punch line and didn't misspell "too". How could anyone misspell a three letter word?)

    • Yes, Dr. Cooper. We agree with you. But at least they weren't liberal arts majors.

    • Re:Psychology (Score:5, Informative)

      by __aaltlg1547 ( 2541114 ) on Tuesday November 26, 2013 @11:14PM (#45534875)

      If the experiments are reproducible, it's science.

      Apparently it's biochemistry that is not a science.
      http://online.wsj.com/news/articles/SB10001424052970203764804577059841672541590 [wsj.com]

    • Re:Psychology (Score:5, Interesting)

      by Lamps ( 2770487 ) on Tuesday November 26, 2013 @11:44PM (#45535027)

      The ironic thing about statements like these is that they usually come from people with no scientific training in any field, nor any meaningful training in statistics, but only a "sciency" inclination and questionable, popular distillation-derived knowledge of some principles from what they consider "the hard sciences".

      Sadly, this irony will be lost on the people making such statements, who will, for some unfathomable reason, continue to disparage people doing meaningful work in the sciences, while never coming close to accomplishing anything of the sort themselves.

      Actual academics have an idea of the hard work involved in contributing to the human knowledge base in all scientific disciplines, and thus, tend to respect each other's work (as long as others don't step on their own toes in their particular area of specialization, in which case, prepare for turbulence).

      • by Anonymous Coward
        "popular distillation-derived knowledge "

        That dash implies the knowledge comes from a crowd at a bar, is this correct?

      • The original model held that psychotherapy could cure depression. Talk to your analyst once a week and after years of treatment you got better.

        Then it was discovered that low norepinephrine caused depression, and tricyclics fixed that and cured depression.

        Then it was discovered that low serotonin caused depression, and SSRIs fixed that and cured depression.

        Then it was discovered that low dopamine caused depression, and MAOIs fixed that and cured depression.

        And recently, The New England Journal of Medicine reported depression meds have no effect.

        • by dabadab ( 126782 )

          And recently, The New England Journal of Medicine reported depression meds have no effect.

          That's patently untrue. The Huffington Post article you link to is wildly inaccurate, self-contradictory, and much more about sensationalism than the actual NEJM article.

          • And recently, The New England Journal of Medicine reported depression meds have no effect.

            That's patently untrue. The Huffington Post article you link to is wildly inaccurate, self-contradictory, and much more about sensationalism than the actual NEJM article.

            I'm just pointing out what doctors are saying. If they are wrong, please tell us what the study actually means. In particular, this line from the NEJM study:

            According to the FDA, 38 of the 74 registered studies had positive results (51%; 95% CI, 39 to 63)

            Doctors, including the one in the linked HuffPo article, are stating specifically that antidepressants don't work based on this study. In so many words, directly from the article.

            Since doctors are wrong, presumably because they don't have meaningful training in science or statistics, who then should be the go-to expert to interpret these findings?

            • If about half of the studies showed a positive effect, then it is hardly a proof that there is no effect. It may not be sufficient to show that it has an effect, but it's a clear hint that it might have. To make better statements one would have to have a closer look at the studies in question, because not every study has the same quality. In the extreme case, all of the studies showing a positive effect might be flawed, while those showing no effect might be sound (in which case, the claim that there's no p
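              A quick note on the arithmetic behind the figure quoted above: 38 positive results out of 74 registered studies is about 51%, and even a plain normal-approximation interval lands close to the published "39 to 63" range (the paper presumably used a more exact method). A minimal sketch of that calculation, purely for illustration and not taken from the NEJM article:

                import math

                k, n = 38, 74                            # positive studies / registered studies, as quoted above
                p = k / n                                # observed proportion, about 0.51
                se = math.sqrt(p * (1 - p) / n)          # standard error of a binomial proportion
                lo, hi = p - 1.96 * se, p + 1.96 * se    # approximate 95% confidence interval
                print(f"{p:.0%} positive (95% CI roughly {lo:.0%} to {hi:.0%})")
                # prints: 51% positive (95% CI roughly 40% to 63%)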

        • Re:A quick question (Score:4, Informative)

          by steelfood ( 895457 ) on Wednesday November 27, 2013 @03:51AM (#45535993)

          Medication belongs to the field of psychiatry. And for the most part, medications do have an effect. But it's only temporary, and the human body gets used to them after a while. So in the long term, medication is largely useless, and in fact counterproductive, as it tends to cause other, worse effects ("side" effects). But in the short term, it helps.

          All systems have a state of equilibrium, a state of stability. The same holds for the body and the mind, two different but related and dependent systems. They'll always tend towards the state of equilibrium because that's the path of least resistance.

          Psychological ills are not the equilibrium being tipped, but the point of equilibrium itself changing. To truly "cure" someone of depression or OCD or bipolar, you have to change the point of equilibrium itself. That's much, much harder than you can imagine, and a far greater challenge than any pill will ever resolve. Those whose equilibrium were changed by an event in their life are easier to change back than those who are born with a certain equilibrium. Some people call the former nature vs. nurture. I call it, again, the past of least resistance.

          Psychology is not attempting to medicate everyone. It's attempting to explain humanity in terms familiar to the scientific-minded.

          • Some people call the former nature vs. nurture. I call it, again, the past of least resistance.

            Freudian slip? ;)

        • by rvw ( 755107 )

          One last question... just one*.

          Is psychology evidence-driven, or belief-driven?

          (*) This isn't just me asking. Here's a quote from The New England Journal of Medicine [nejm.org] article:

          Evidence-based medicine is valuable to the extent that the evidence base is complete and unbiased. Selective publication of clinical trials — and the outcomes within those trials — can lead to unrealistic estimates of drug effectiveness and alter the apparent risk–benefit ratio.

          (**) Also, I have no meaningful training in science or statistics. If you want, you can win the argument by pointing this out in your response.

          Read this book: Why Zebras Don't Get Ulcers [amazon.com]. It explains how stress influences our lives, and how complex the system is that tries to regulate it. It shows that it all works beautifully for people living in the wild, but the system is not so good for us.

          This book won't give you the solution to depression, but it will show that the body uses many methods to accomplish several things, like redirecting sources when in danger or in rest. There is not one solution - any solution will have side effects, and

        • by Lamps ( 2770487 )

          (**) Also, I have no meaningful training in science or statistics. If you want, you can win the argument by pointing this out in your response.

          It's not my intention to get into any arguments or win anything. When I got snarky above, it was to get some people to consider whether they're qualified to disparage the work of others. Anyway, you raise a good question.

          Like all sciences, psychology entails a set of beliefs/theories/ideas/models. These constructs should be informed by evidence gathered through a certain methodology. As new evidence is gathered, old models are continually revised or superseded by new ones in an iterative process that's inte

      • Without a doubt, there is a LOT of unscientificness going on in the field of psychology. Look at the satanic ritual abuse situation of the 80s [wikipedia.org] for an obvious example, especially when you realize that even today there are still psychologists treating patients for this 'malady.'

        Another example is the DSM, which is a joke from a scientific perspective [wikipedia.org] (though this is perhaps an indication of the difficulty of the field of psychology as much as anything).
        • Without a doubt, there is a LOT of unscientificness going on in the field of psychology. Look at the satanic ritual abuse situation of the 80s [wikipedia.org] for an obvious example, especially when you realize that even today there are still psychologists treating patients for this 'malady.'

          Another example is the DSM, which is a joke from a scientific perspective [wikipedia.org] (though this is perhaps an indication of the difficulty of the field of psychology as much as anything).

          Psychology is like quantum physics, except that not only does observing something change the behavior, but the degree and kind of change is likewise unpredictable. Isaac Asimov understood that decades ago. "Psychohistory" depended on A) large populations, to smooth out personal variances and B) keeping most of the mechanism out of sight so that people wouldn't factor the predicted results into their behavior.

          A lot of what's wrong with mental health in general is that we're still chipping flints. As new studi

    • Unfortunately there are 2 major hurdles that limit all but the most groundbreaking experiments:

      1. Money
      2. Glory

      Money is probably the biggest factor; there just isn't enough money allotted to trying to reproduce experiments. Most budgets only exist for new/continuing research, not verifying experiments done by others. And as the cost of doing experiments rises (more sophisticated equipment necessary, lots of paid "volunteers", etc.), this is only going to get worse.

      Second, although not as important, is
      • There's no shortage of money. From http://www.bis.org/publ/otc_hy1311.pdf [bis.org] :

        The latest BIS statistics on OTC derivatives markets show that notional amounts outstanding totalled $693 trillion at end-June 2013.

        In the face of so much global liquidity, it's ridiculous to say there's not enough money for food stamps, or science, or free health care.

        • by khallow ( 566160 )
          Notional amounts of derivative securities have nothing to do with money.

          For example, I recently traded some stock options. The actual value was around $600 at the time of the sale. The notional amount, which happens to be the stock's price trigger at which the options activate times the number of options, was $13,000. You do the math.
          • Even if you reduce it by an order of magnitude, that's still $60 trillion. And estimates of the shadow banking system [slashdot.org] are $100 trillion.

            There's no shortage of money. The only scarcity is the lack of political will to fund basic social services and science. There is no economic or physical necessity preventing us from funding food stamps, science research, health care, a basic income.

            • by khallow ( 566160 )

              Even if you reduce it by an order of magnitude, that's still $60 trillion.

              And if you reduce it by two orders of magnitude, that's still $6 trillion.

              There's no shortage of money. The only scarcity is the lack of political will to fund basic social services and science. There is no economic or physical necessity preventing us from funding food stamps, science research, health care, a basic income.

              I agree that there's no "shortage of money". But these things which you mention do not have that much value to them. For example, most people earn enough to pay for their own food, health care, and still have a basic income.

              Also, efforts to ensure that these services are available in turn harm the providing of these services. A lot of effort has been made to turn corn inefficiently into a gasoline additive. The motives are mostl

      • by bakes ( 87194 )

        You forgot 'Politics'. Not sure where that fits in the ordering though.

    • If only Psychology was a science.

      The sad thing is that it's on par with the level of TV Tropes. They first use their pattern matching brains to notice some pattern, then go seek out quantifications for it. That's ridiculous. That's why literally everything is a trope -- even the tropeless story is a trope. The same goes for psychological classifications and categorizations of behaviours. Some psychologists claim to study cognitive bias -- Yet their own confirmation bias has blinded them to the fact that their distinctions themselves w

    • by rtaylor ( 70602 )

      Science is a process, not a field of study or a result.

      That process can be applied to anything where you want to find a fact.

    • That is a very disingenuous statement. While I would agree that there are many aspects of Psychology/Psychiatry that are not very scientific, there are some areas that are rigorous. Neuropsychology can be a good example. Testing and measurement is another good example. There is a lot more to Psychology than the hokey therapy that we think of.

    • How is this rated Funny? Psychology is not a Science.

    • And there you have it. Getting chemistry to work in the lab is hard enough. Experimental psychology results would never stand up to the same scrutiny.

  • Not bad at all (Score:5, Interesting)

    by TwineLogic ( 1679802 ) on Tuesday November 26, 2013 @10:16PM (#45534549)
    So 2 or 3 out of 13 were not reproduced in these attempts. I imagine the standard was "P < 0.05." If you then consider ANOVA, the collection of 13 studies did not perform poorly at all.
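    To put a rough number on that: if all 13 original effects were real and each replication attempt had somewhere around 80-95% power (an assumption for illustration, not a figure from the article), then two or more failures out of 13 would not be a surprising outcome. A minimal sketch of the binomial arithmetic:

        import math

        def p_at_least(k, n, p):
            """P(X >= k) for X ~ Binomial(n, p), by direct summation."""
            return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

        n_studies = 13
        for power in (0.80, 0.90, 0.95):
            p_fail = 1 - power    # chance that a single true effect fails to replicate
            print(f"power {power:.0%}: P(2 or more failures) = {p_at_least(2, n_studies, p_fail):.2f}")
        # prints roughly 0.77, 0.38 and 0.14 for 80%, 90% and 95% power

    In other words, a couple of non-replications are consistent with all 13 effects being real, unless per-study replication power was very high.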
    • Re:Not bad at all (Score:4, Insightful)

      by Anonymous Coward on Tuesday November 26, 2013 @10:57PM (#45534805)

      The fact that people are trying to reproduce the experiments is good news in and of itself.

      • I agree. Science goes through the progression from hypothesis, to tested results, to verified results, to working theories, and eventually laws (although the line between the last two is arbitrary in modern science, really). The more results that are tested again and again, the better science is as a whole.
      • Re:Not bad at all (Score:5, Insightful)

        by bondsbw ( 888959 ) on Tuesday November 26, 2013 @11:17PM (#45534885)

        If the scientific community valued reproducibility as much as original work, we would solve 2 problems:

        1) Science without confirmation can lead us astray for years.
        2) There are plenty of scientists who are great at experimentation but lousy at coming up with new ideas, and these scientists (or potential scientists) may not be reaching their full potential.

        And while we're at it, let's value failed experiments as much as successful experiments.

        • 1) Science without confirmation can lead us astray for years.

          It's not science if it's not tested. It's just making assumptions.

          It's also not science if you just take their word for it when they said they tested it. It might have been originally, but not any more.

        • by Trogre ( 513942 )

          Okay, but don't confuse a failed experiment with a successful experiment that returned negative results.

    • Re:Not bad at all (Score:5, Interesting)

      by phantomfive ( 622387 ) on Wednesday November 27, 2013 @02:09AM (#45535639) Journal
      No, unfortunately, because they didn't choose the studies to reproduce randomly. FTA:

      [The studies chosen for reproduction] included classic results from economics Nobel laureate and psychologist Daniel Kahneman at Princeton University in New Jersey

      At least some of these were fairly important research, which ideally would have been verified more than once. That there was any doubt that they would be reproducible is worrisome in itself.

      • by fatphil ( 181876 )
        That 20% failed to be reproducible is just as worrying.
      • by toQDuj ( 806112 )

        This is very important, as they did not randomly pick studies but rather chose the ones they "Deemed Worthy". As they did not want to be proven bad scientists (I assume), their conscious or unconscious bias will have been towards sound or easy studies.

      • Re:Not bad at all (Score:5, Informative)

        by noobermin ( 1950642 ) on Wednesday November 27, 2013 @10:25AM (#45538453) Journal

        Did you read TFA? Or did you choose sentences to read randomly? Those were quoted as the results that worked. In fact, here is the original paragraph:

        Ten of the effects were consistently replicated across different samples. These included classic results from economics Nobel laureate and psychologist Daniel Kahneman at Princeton University in New Jersey, such as gain-versus-loss framing, in which people are more prepared to take risks to avoid losses, rather than make gains1; and anchoring, an effect in which the first piece of information a person receives can introduce bias to later decisions2. The team even showed that anchoring is substantially more powerful than Kahneman’s original study suggested.

        Two that didn't were about social priming: one was currency priming, in which participants supported what I assume is the current state of capitalism after seeing money, and the other primed feelings of patriotism with a flag. Moreover, both original authors were positive about it:

        Social psychologist Travis Carter of Colby College in Waterville, Maine, who led the original flag-priming study, says that he is disappointed but trusts Nosek’s team wholeheartedly, although he wants to review their data before commenting further. Behavioural scientist Eugene Caruso at the University of Chicago in Illinois, who led the original currency-priming study, says, “We should use this lack of replication to update our beliefs about the reliability and generalizability of this effect”, given the “vastly larger and more diverse sample” of the Many Labs project. Both researchers praised the initiative.

        There you go, quoting the article directly since you can't be bothered to read it. It is true that they apparently chose what some consider to be important effects and the evidence against social priming is upsetting to some. Still, the fact that verification actually happened and people are happy about it shows science is alive and kicking.

        Anyway, another cool thing about this study is that it uses the Open Science Framework [openscienceframework.org], which I hadn't heard about until today, but it seems pretty cool.

        • Work on your reading comprehension, man. You failed to show any understanding of the post you responded to.
  • Clearly (Score:2, Funny)

    by msobkow ( 48369 )

    Scientists must have *more sex* if they are to reproduce... :P

  • by TheloniousToady ( 3343045 ) on Tuesday November 26, 2013 @10:53PM (#45534783)

    I'll believe it when I see it.

    • by Anonymous Coward

      And then see it again.

  • by Dahamma ( 304068 ) on Tuesday November 26, 2013 @10:55PM (#45534797)

    The "problem" with experiments that aren't reproducible may not be with the experiments as much as with the popular media that decides to make sweeping generalizations based on one result. Though I guess some blame definitely needs to be applied to the researcher who allows unverified results to be misrepresented to get that 15 minutes of fame in a quote in The Guardian or USA Today...

    • The whole point of an experiment is to run enough trials to gain statistical confidence. It's supposed to be its own validation in that sense.

      So it's either a systematic error in their experiment, or fraud.

      • by fatphil ( 181876 )
        Something passed through our hands here at the office recently. A "scientific" study, in the very soft field of human behaviour, where their sample size was 27. And that set was split into 4 groups. Absolutely any result from that experiment was possible and could be explained as pure random chance, not deviating from the null hypothesis.

        Reading works like that, and others from the same authors and institutes, we got the feeling that it was mostly a delusion that they were doing science. Like how when a 4-y
        • there's a delusion that it's actually doing the washing up, and you really don't want to spoil its fun by breaking that illusion - it's not doing any harm, is it?

          The harm comes when they "dry" the dishes with the dish towel, getting it all nasty and dirty, and then put the dishes away in the cupboard still dirty, contaminating the other dishes they come into contact with. There's a direct analogy to be made here.
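          To make the n=27, four-group example above concrete: with roughly seven subjects per group, even a genuinely medium-sized effect (Cohen's d = 0.5, chosen here purely for illustration) gets detected only a small fraction of the time. A rough Monte Carlo sketch, assuming NumPy and SciPy are available:

            import numpy as np
            from scipy.stats import ttest_ind

            rng = np.random.default_rng(0)
            n_per_group, d, trials = 7, 0.5, 20_000
            hits = 0
            for _ in range(trials):
                a = rng.normal(0.0, 1.0, n_per_group)    # control group, standard normal
                b = rng.normal(d, 1.0, n_per_group)      # group with a real medium-sized effect
                if ttest_ind(a, b).pvalue < 0.05:        # two-sample t-test at the usual threshold
                    hits += 1
            print(f"estimated power: {hits / trials:.0%}")   # comes out around 12-14%

          With power that low, any particular result from such a study, positive or negative, tells you almost nothing, which is the point being made above.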

      • But don't be quick to assume it's fraud, because it's very, very easy to make a systematic error in designing an experiment.

    • by Lamps ( 2770487 )

      That's not really a problem from the perspective of scientists - in the fields of psychology, cog sci, and neuroscience, I've never encountered an instance of a researcher using any popular media distillation of some study as a meaningful source of info on that study (aside from making them aware of the study's existence).

      Also, you seem to be assigning some a priori status of reproducibility or lack thereof to some studies, which really confuses the issue. For example, what does it mean for an experiment to

      • For example, what does it mean for an experiment to be "not reproducible"?

        It means that, as a scientist, you have failed to produce a worthwhile experiment. Hypothesis is first. A road map for reproducibility is next. If said map does not function, you have failed.

        • by Lamps ( 2770487 )

          I guess I should clarify the larger epistemic point at which I was hinting. That others may not, in some reasonable number of attempts, reproduce an experiment does not mean that the experiment is categorically not reproducible. Any number of things, such as lab conditions (which are not, in practical, absolute terms, reproducible within a lab, much less between different labs), can influence the results of experiments, and while adhering to certain sound methodological principles abstracts away a lot of th

          • Isn't the ability to reproduce results based on the "idea" in the methods section central to the concept of scientific "reproducibility"? If I claim I applied one set of methods and got a certain result --- but those methods are different enough from the physical reality of my setup that reasonable adherence to the stated methods will produce a vastly different result --- then I have failed at publishing a "reproducible" experiment.

            Example:
            "By the method of releasing lead spheres at rest into the air, I have obser

            • Sorry, I have no idea how Slashdot managed to completely mangle my post, cutting out a big chunk in the middle, after previewing and submitting. I don't have the patience to re-write it, so just ignore the garbled mess left after Slashdot's unexpected redaction of a whole middle paragraph.

            • by Lamps ( 2770487 )

              Don't worry about the post getting cut off - the point you were trying to make is clear. To address the idea that "the methods are different from the physical reality of my setup" - it's simply not possible to be comprehensive in the instructions you provide when you write up your methods, and furthermore, in reality, variables are often introduced in a lab which impact experimental results, but which are not accounted for in the methods writeup because the authors are not aware of these variables (this is

              • No, I still think that, if the methods are insufficiently described (do not sufficiently capture the nuances of the physical experiment) to permit reliable replication of results, then this makes an experiment un-reproducible by definition.

                Indeed, allowances must be made for the study of ephemeral phenomena --- people's perceptions in some particular time and place; a comet passing by once --- that preclude actual recreation of the experiment. So, I think there is a "methodological description" requirement

      • For example, what does it mean for an experiment to be "not reproducible"? You can fail to reproduce the results of an experiment, but proving that a result is not reproducible, or somehow knowing that it isn't, is a different issue altogether.

        You make absolutely no sense here. If I follow the original researcher's notes and experiment design correctly and cannot obtain his result, the experiment was not reproducible. That's what "not reproducible" *means*--I couldn't reproduce his results.

      • by Dahamma ( 304068 )

        That's not really a problem from the perspective of scientists

        Most people here are not looking at it from the perspective of scientists, though, but from the (as the article states) "much publicized" perspective.

        For example, what does it mean for an experiment to be "not reproducible"?

        Exactly. You should be asking that question about the article, not to me :) In fact I do have a neuroscience background, even if that's not what I am doing these days... but that is pretty much irrelevant to this thread, which was to start a conversation about how the mainstream media tends to latch onto any unverified/unreproduced study and report it as the cano

    • If you're going to pick a paper, then The Guardian was not a good choice, given that Dr Ben Goldacre writes a regular column for them called "Bad Science" where he critiques terrible science reporting in the media (amongst other things).

      • by Dahamma ( 304068 )

        But unfortunately not all of his colleagues share his standards for scientific reporting - there have been plenty of horribly reported scientific studies in the Guardian, as well as almost all "popular media" sources that just can't help it...

  • by trudyscousin ( 258684 ) on Tuesday November 26, 2013 @11:33PM (#45534965)

    ...and thought, "Now there's a ray of sunshine for Slashdotters."

    As a group, that is.

    • Funny, because there's another study recently published that said technology is killing everyone's sex lives. It's probably old news here, as it's just a downward extrapolation of the extreme case found here.

  • I thought at first it was saying 36 groups each tried to reproduce the results of 13 experiments, and all 36 were successful with 10 of 13 (though not necessarily the same ten), successfully reproducing the results of a reproducibility meta-experiment.

  • So...they couldn't reproduce the reproducibility problem...?

  • Were they not able to reproduce the outcome of an experiment, or were they not able to reproduce the whole experiment (as in "We assume that during a total eclipse in the month of May" or... "to reproduce this, take any old Large Hadron Collider lying around...")

    • by fatphil ( 181876 )
      Apparently, there were no "experiments", in the lab/contrivance sense - it was just a questionnaire.

      Regarding the failures:
      "Of the 13 effects under scrutiny in the latest investigation, one was only weakly supported, and two were not replicated at all. Both irreproducible effects involved social priming. In one of these, people had increased their endorsement of a current social system after being exposed to money[3]. In the other, Americans had espoused more-conservative values after seeing a US flag[4]."

      [3
  • Is this news? The Decline Effect [newyorker.com] has been known about (and generally ignored) for years.
  • I have a particular experiment which, in my mind, highlights an aspect of human nature we would all prefer to deny. We all have within us the capacity to ruthlessly abuse others, even and especially friends and family, when given the opportunity. This famous experiment [wikipedia.org] took a group of peers and assigned some the role of prison guard and others that of prisoner. It really didn't take long before things went really bad.

    I have often heard that corruption is a problem of opportunity more than of character.

  • "Science has a much publicized reproducibility problem" links to an article that says "as many as 17â"25% of such findings are probably false", but a group reproducing 10 out of 13 experiments (23% not replicable) is striking a blow for reproducibility?

  • Reproducing 10 out of 13 experiments is really quite good.

    I'm a materials physicist, and if I could reproduce that ratio of experiments in my field, I would be very happy and a little bit surprised.

    This problem of non-reproducible results in science is due to poor training, poor writing, poor experiment design and a direct link between citations and funding.

  • Because that's how "science" works in Psychology.
    You come up with some ridiculously simple experiment, like giving people the same amount of money for a simple boring task, or a complex creative task.
    Turns out most people go for the simple boring one.
    Study Conclusion: People prefer simple boring tasks!
    My Conclusions: why take more risk in screwing up when you can make the same money with something easy?

    Who says these researchers didn't select experiments they a-priori thought seemed reasonable?
