Psychologists Strike a Blow For Reproducibility
ananyo writes "Science has a much-publicized reproducibility problem. Many experiments seem to be failing a key test of science — that they can be independently verified by another lab. But now 36 research groups have struck a blow for reproducibility by successfully reproducing the results of 10 of 13 past experiments in psychology. Even so, the Many Labs Replication Project found that one result was only weakly supported, and two of the experiments could not be replicated at all."
Good news, everyone! (Score:2, Interesting)
We ought to be encouraging this sort of thing.
Not bad at all (Score:5, Interesting)
The problem isn't necessarily reproducibility (Score:5, Interesting)
The "problem" with experiments that aren't reproducible may lie less with the experiments than with the popular media, which makes sweeping generalizations based on a single result. Though some blame surely belongs to the researcher who lets unverified results be misrepresented for that 15 minutes of fame in a quote in The Guardian or USA Today...
Re:Psychology (Score:5, Interesting)
The ironic thing about statements like these is that they usually come from people with no scientific training in any field, nor any meaningful training in statistics — only a "sciency" inclination and a questionable knowledge, derived from popular distillations, of some principles from what they consider "the hard sciences".
Sadly, this irony will be lost on the people making such statements, who will, for some unfathomable reason, continue to disparage people doing meaningful work in the sciences, while never coming close to accomplishing anything of the sort themselves.
Actual academics have an idea of the hard work involved in contributing to the human knowledge base in all scientific disciplines, and thus, tend to respect each other's work (as long as others don't step on their own toes in their particular area of specialization, in which case, prepare for turbulence).
Re:Psychology (Score:5, Interesting)
Psychology is a huge field. Perception, experimental analysis of animal behaviour, clinical psychology, cognitive biases etc. etc. (Note that only one of those involves psychiatrists.) Some bits allow for harder science than other bits.
I personally don't know enough about psychiatry to form a judgement on how scientific it is, but unlike you, at least I know what a psychologist is (or at least the range of things a psychologist can be). Your trite dismissal says much about your ignorance and nothing about psychology.
Re:Not bad at all (Score:5, Interesting)
[The studies chosen for reproduction] included classic results from economics Nobel laureate and psychologist Daniel Kahneman at Princeton University in New Jersey
At least some of these were fairly important results, which ideally would have been verified more than once already. That there was any doubt they would be reproducible is worrisome in itself.
Re:Psychology (Score:5, Interesting)
Psychology is a soft science because of the numerous variables that studies often simplify into constants, mostly for simplicity's sake and nothing else. Economics and politics are the same, largely because they're built on psychology.
It's an inexact science because the human condition is imperfect. As opposed to the hard sciences, which are exact, because the universe around us is "perfect". And then, there's computer science, which is a mathematical, computational science that's absolute. It's not even "perfect" anymore; it's exactly what the maths say it is, and any failure sits between keyboard and chair.
Anyway, psychology is important, because the only way to truly understand the imperfect condition of humans is via an inexact science. And it's something only humans can fully, and innately, understand (computers can simulate the hard sciences to a calculable degree of accuracy, but they'll never be able to simulate the soft sciences in the same way).
The way to think about psychology is as a fractal. Some statistically significant fraction p of the population behaves in manner a; of the remainder, the same fraction behaves in manner b, for p(1 − p) overall; then p(1 − p)² behaves in manner c, and so on. What a, b, c, etc. actually are is up to you to figure out. And when you change the test, an individual who fell into one category is not guaranteed to fall into the same category again.
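One way to read that split, taking the recurring fraction as p (i.e., X/100) and letting each remaining share split again in the same proportion — a hypothetical illustration of the comment's model, not anything from the study itself:

```python
# Hypothetical sketch of the "fractal" population split described above:
# a fraction p behaves in manner a, a fraction p of the remainder
# behaves in manner b, and so on (a geometric series).

def behaviour_shares(p, n):
    """Return the shares p, p(1-p), p(1-p)^2, ... for n manners."""
    return [p * (1 - p) ** k for k in range(n)]

shares = behaviour_shares(0.6, 4)
print([round(s, 4) for s in shares])  # shares of manners a, b, c, d
print(round(1 - sum(shares), 4))      # fraction not yet classified
```

Note that the shares sum toward 1 as more manners are added, which is what makes the "fractal" picture self-consistent; each level just re-applies the same split to whatever is left over.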
Note that the human mind can comprehend infinity (poorly for most, but very possible for a few), in both its countable and uncountable variants, but a computer will never be able to calculate it. So the fractal analogy works really, really well.