Psychologists Strike a Blow For Reproducibility
ananyo writes "Science has a much publicized reproducibility problem. Many experiments seem to be failing a key test of science — that they can be independently verified by another lab. But now 36 research groups have struck a blow for reproducibility, by successfully reproducing the results of 10 out of 13 past experiments in psychology. Even so, the Many Labs Replication Project found that the outcome of one experiment was only weakly supported and they could not replicate two of the experiments at all."
Good news, everyone! (Score:2, Interesting)
We ought to be encouraging this sort of thing.
Re: (Score:2)
Yes, that's a good point. What measures do you think "we," which presumably refers to mild-to-moderately informed laypeople, can put into place to help?
Not trying to be sarcastic, I genuinely wonder what practical options exist.
Re:Good news, everyone! (Score:4, Funny)
Have you ever BEEN to an APA conference?
My dear god, man! The last thing you want is these people REPRODUCING!
Re: (Score:2)
The only thing sadder than that joke is the fact that you thought it was worth making.
McLaughlin Group answer: Mods say "WRONG!"
Psychology (Score:5, Funny)
If only Psychology was a science.
Re: (Score:1)
It uses the scientific method (Score:1)
But isn't really able to hold on to much of the scientific method because there's so much we don't know.
Alchemy isn't a science, but it led to chemistry, which is.
Astrology isn't a science, but it led to astrophysics, which is.
That's because the methods alchemy and astrology used to get BETTER predictions were the nascent scientific method, and keeping what worked while ejecting what didn't was part of that method (which a lot of social science doesn't do, worse luck). And the remains were the germs of the science tha
Re: (Score:2)
is to science as a virgin is to a hooker.
(I assume you were looking for a punch line and didn't misspell "too". How could anyone misspell a three letter word?)
Re: (Score:1)
Yes, Dr. Cooper. We agree with you. But at least they weren't liberal arts majors.
Re:Psychology (Score:5, Informative)
If the experiments are reproducible, it's science.
Apparently it's biochemistry that is not a science.
http://online.wsj.com/news/articles/SB10001424052970203764804577059841672541590 [wsj.com]
Re:Psychology (Score:5, Informative)
Psychology ran into a major hiccup: many of its experiments are no longer reproducible, not because of bad 'science' but because they are now considered unethical and not something that should be done to people just to test out psychological theories. See http://www.bps.org.uk/what-we-do/ethics-standards/ethics-standards [bps.org.uk] (I used the British standards rather than the US ones, as the US ones have been so badly mauled by the US government and its fully medically and psychologically researched mass torture facility at GITMO that the US ones are rules that 'should be' broken as defined by the US government) and http://mentalfloss.com/article/52787/10-famous-psychological-experiments-could-never-happen-today [mentalfloss.com].
Re:Psychology (Score:5, Interesting)
The ironic thing about statements like these is that they usually come from people with no scientific training in any field, nor any meaningful training in statistics, but only a "sciency" inclination and a questionable knowledge of some principles, derived from popular distillations of what they consider "the hard sciences".
Sadly, this irony will be lost on the people making such statements, who will, for some unfathomable reason, continue to disparage people doing meaningful work in the sciences, while never coming close to accomplishing anything of the sort themselves.
Actual academics have an idea of the hard work involved in contributing to the human knowledge base in all scientific disciplines, and thus, tend to respect each other's work (as long as others don't step on their own toes in their particular area of specialization, in which case, prepare for turbulence).
Re: (Score:1)
That dash implies the knowledge comes from a crowd at a bar, is this correct?
A quick question (Score:2, Insightful)
The original model held that psychotherapy could cure depression. Talk to your analyst once a week and after years of treatment you got better.
Then it was discovered that low norepinephrine caused depression, and tricyclics fixed that and cured depression.
Then it was discovered that low serotonin caused depression, and SSRIs fixed that and cured depression.
Then it was discovered that low dopamine caused depression, and MAOIs fixed that and cured depression.
And recently, The New England Journal of Medici
Re: (Score:3)
That's patently untrue. The Huffington Post article you link to is wildly inaccurate, self-contradictory, and much more about sensationalism than about the actual NEJM article.
Re: (Score:1)
That's patently untrue. The Huffington Post article you link to is wildly inaccurate, self-contradictory, and much more about sensationalism than about the actual NEJM article.
I'm just pointing out what doctors are saying. If they are wrong, please tell us what the study actually means. In particular, this line from the NEJM study:
According to the FDA, 38 of the 74 registered studies had positive results (51%; 95% CI, 39 to 63)
Doctors, including the one in the linked HuffPo article, are stating specifically that antidepressants don't work based on this study. In so many words, directly from the article.
Since doctors are wrong, presumably because they don't have meaningful training in science or statistics, who then should be the go-to expert to interpret these findings?
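As a sanity check on the quoted figure, here is a minimal sketch of the arithmetic behind a confidence interval like that one, using the ordinary normal (Wald) approximation for a binomial proportion; the NEJM authors may well have used a different interval method, so treat it as illustrative only:

# Rough check of the quoted NEJM/FDA figure: 38 positive results out of 74 studies.
# Wald (normal-approximation) interval for a binomial proportion; this is a sketch,
# not necessarily the exact method used in the paper.
import math

positive, total = 38, 74
p = positive / total                       # observed proportion, about 0.51
z = 1.96                                   # ~97.5th percentile of the standard normal
half_width = z * math.sqrt(p * (1 - p) / total)
low, high = p - half_width, p + half_width
print(f"{p:.0%} (95% CI, {low:.0%} to {high:.0%})")   # about 51% (95% CI, 40% to 63%)

That lands within a point of the quoted "51%; 95% CI, 39 to 63"; the small difference at the low end comes down to which interval method is used.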
Re: (Score:1)
If about half of the studies showed a positive effect, then that is hardly proof that there is no effect. It may not be sufficient to show that there is an effect, but it's a clear hint that there might be one. To make better statements one would have to take a closer look at the studies in question, because not every study has the same quality. In the extreme case, all of the studies showing a positive effect might be flawed, while those showing no effect might be sound (in which case, the claim that there's no p
Re:A quick question (Score:4, Informative)
Medication belongs to the field of psychiatry. And for the most part, they do have an effect. But it's only temporary, and the human body gets used to it after a while. So in the long term, medicine is largely useless, and in fact is counterproductive, as they tend to cause other, worse effects ("side" effects). But in the short term, it helps.
All systems have a state of equilibrium, a state of stability. The same holds for the body and the mind, two different but related and dependent systems. They'll always tend towards the state of equilibrium because that's the path of least resistance.
Psychological ills are not the equilibrium being tipped, but the point of equilibrium itself changing. To truly "cure" someone of depression or OCD or bipolar, you have to change the point of equilibrium itself. That's much, much harder than you can imagine, and a far greater challenge than any pill will ever resolve. Those whose equilibrium were changed by an event in their life are easier to change back than those who are born with a certain equilibrium. Some people call the former nature vs. nurture. I call it, again, the past of least resistance.
Psychology is not attempting to medicate everyone. It's attempting to explain humanity in terms familiar to the scientific-minded.
Re: (Score:2)
Some people call the former nature vs. nurture. I call it, again, the past of least resistance.
Freudian slip? ;)
Re: (Score:2)
One last question... just one*.
Is psychology evidence-driven, or belief-driven?
(*) This isn't just me asking. Here's a quote from The New England Journal of Medicine [nejm.org] article:
Evidence-based medicine is valuable to the extent that the evidence base is complete and unbiased. Selective publication of clinical trials — and the outcomes within those trials — can lead to unrealistic estimates of drug effectiveness and alter the apparent risk–benefit ratio.
(**) Also, I have no meaningful training in science or statistics. If you want, you can win the argument by pointing this out in your response.
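To make the quoted point about selective publication concrete, here is a toy simulation; every parameter below (effect size, sample size, trial count) is invented purely for illustration and not taken from the NEJM article. If only statistically significant trials get "published", the published trials overstate the true effect on average.

# Toy illustration of the selective-publication point quoted above.
# All parameters are invented purely for illustration; this is not a model
# of any real drug or dataset.
import random
import statistics

random.seed(0)
TRUE_EFFECT = 0.2        # assumed true standardized effect of the drug
N_PER_ARM = 50           # assumed sample size per trial arm
N_TRIALS = 2000          # number of simulated trials

def run_trial():
    """Simulate one two-arm trial and return (observed_effect, significant)."""
    drug = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N_PER_ARM)]
    placebo = [random.gauss(0.0, 1.0) for _ in range(N_PER_ARM)]
    diff = statistics.mean(drug) - statistics.mean(placebo)
    se = (statistics.variance(drug) / N_PER_ARM
          + statistics.variance(placebo) / N_PER_ARM) ** 0.5
    return diff, abs(diff / se) > 1.96   # crude z-test at alpha = 0.05

results = [run_trial() for _ in range(N_TRIALS)]
all_effects = [d for d, _ in results]
published = [d for d, sig in results if sig]   # "publish" only significant trials

print(f"true effect:                 {TRUE_EFFECT:.2f}")
print(f"mean effect, all trials:     {statistics.mean(all_effects):.2f}")
print(f"mean effect, published only: {statistics.mean(published):.2f}")

The inflated "published" mean is exactly the "unrealistic estimates of drug effectiveness" the quote warns about.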
Read this book: Why zebras don't get ulcers [amazon.com]. It explains how stress influences our lives, and how complex the system is that tries to regulate it. It shows that it all works beautifully for people living in the wild, but the system is not so good for us.
This book won't give you the solution to depression, but it will show that the body uses many methods to accomplish several things, like redirecting sources when in danger or in rest. There is not one solution - any solution will have side effects, and
Re: (Score:1)
(**) Also, I have no meaningful training in science or statistics. If you want, you can win the argument by pointing this out in your response.
It's not my intention to get into any arguments or win anything. When I got snarky above, it was to get some people to consider whether they're qualified to disparage the work of others. Anyway, you raise a good question.
Like all sciences, psychology entails a set of beliefs/theories/ideas/models. These constructs should be informed by evidence gathered through a certain methodology. As new evidence is gathered, old models are continually revised or superseded by new ones in an iterative process that's inte
Re: (Score:1)
Another example is the DSM, which is a joke from a scientific perspective [wikipedia.org] (though this is perhaps an indication of the difficulty of the field of psychology as much as anything).
Re: (Score:2)
Without a doubt, there is a LOT of unscientificness going on in the field of psychology. Look at the satanic ritual abuse situation of the 80s [wikipedia.org] for an obvious example, especially when you realize that even today there are still psychologists treating patients for this 'malady.'
Another example is the DSM, which is a joke from a scientific perspective [wikipedia.org] (though this is perhaps an indication of the difficulty of the field of psychology as much as anything).
Psychology is like quantum physics, except that not only does observing something change the behavior, but the degree and kind of change is likewise unpredictable. Isaac Asimov understood that decades ago. "Psychohistory" depended on A) large populations, to smooth out personal variances and B) keeping most of the mechanism out of sight so that people wouldn't factor the predicted results into their behavior.
A lot of what's wrong with mental health in general is that we're still chipping flints. As new studi
Re: (Score:2)
Re: (Score:2)
Could you please show me a reasonable experiment with proper statistics supporting this claim?
Well, to be honest, we have to confess something to you: Slashdot was once set up as a psychological experiment.... Just take a look at the users and comments; no more proof needed!
Re: (Score:3)
1. Money
2. Glory
Money is probably the biggest factor; there just isn't enough money allotted to trying to reproduce experiments. Most budgets only exist for new/continuing research, not verifying experiments done by others. And as the cost of doing experiments rises (more sophisticated equipment necessary, lots of paid "volunteers", etc.), this is only going to get worse.
Second, although not as important, is
Re: (Score:1)
There's no shortage of money. From http://www.bis.org/publ/otc_hy1311.pdf [bis.org] :
In the face of so much global liquidity, it's ridiculous to say there's not enough money for food stamps, or science, or free health care.
Re: (Score:1)
For example, I recently traded some stock options. The actual value was around $600 at the time of the sale. The notional amount, which happens to be the stock's price trigger at which the options activate times the number of options, was $13,000. You do the math.
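Doing the math the parent invites, with the figures as stated (contract mechanics simplified away):

# The parent's own numbers: market value of the options vs. their notional amount.
actual_value = 600        # dollars, actual value at the time of sale (as stated)
notional = 13_000         # dollars, strike-times-contracts notional (as stated)
print(f"notional is about {notional / actual_value:.0f}x the actual value")   # ~22x

Which is the point: headline notional figures can overstate the money actually at stake by an order of magnitude or more.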
Re: (Score:1)
Even if you reduce it by an order of magnitude, that's still $60 trillion. And estimates of the shadow banking system [slashdot.org] are $100 trillion.
There's no shortage of money. The only scarcity is the lack of political will to fund basic social services and science. There is no economic or physical necessity preventing us from funding food stamps, science research, health care, a basic income.
Re: (Score:2)
Even if you reduce it by an order of magnitude, that's still $60 trillion.
And if you reduce it by two orders of magnitude, that's still $6 trillion.
There's no shortage of money. The only scarcity is the lack of political will to fund basic social services and science. There is no economic or physical necessity preventing us from funding food stamps, science research, health care, a basic income.
I agree that there's no "shortage of money". But the things you mention do not have that much value to them. For example, most people earn enough to pay for their own food and health care, and still have a basic income.
Also, efforts to ensure that these services are available in turn harm the providing of these services. A lot of effort has been made to turn corn inefficiently into a gasoline additive. The motives are mostl
Re: (Score:2)
No, create more. Let the Fed fund the govt by buying T-bills. Since the Fed returns the interest to the Treasury, the borrowing cost is zero.
Give Republicans low (or no) taxes, but in return have them eliminate the debt ceiling laws.
Re: (Score:2)
You forgot 'Politics'. Not sure where that fits in the ordering though.
Re: (Score:2)
If only Psychology was a science.
The sad thing is that it's on par with the level of TV Tropes. They first use their pattern matching brains to notice some pattern, then go seek out quantifications for it. That's ridiculous. That's why literally everything is a trope -- even the tropeless story is a trope. The same goes for psychological classifications and categorizations of behaviours. Some psychologists claim to study cognitive bias -- Yet their own confirmation bias has blinded them to the fact that their distinctions themselves w
Re: (Score:3)
Science is a process, not a field of study or a result.
That process can be applied to anything where you want to find a fact.
Re: (Score:2)
That is a very disingenuous statement. While I would agree that there are many aspects of Psychology/Psychiatry that are not very scientific, there are some areas that are rigorous. Neuropsychology can be a good example. Testing and measurement is another good example. There is a lot more to Psychology than the hokey therapy that we think of.
Re: (Score:2)
How is this rated Funny? Psychology is not a Science.
Re: (Score:2)
And there you have it. Getting chemistry to work in the lab is hard enough. Experimental psychology results would never stand up to the same scrutiny.
Re:Psychology (Score:5, Funny)
If only Psychology was a science.
Lol -- psychiatrists and psychologists doing experiments. It is a weak science at its best; a modern-day priesthood at its worst.
For your heresy and disbelief you have been diagnosed as a paranoid schizophrenic!
We will monitor you to see if medication will be adequate to silence you, I mean, control your symptoms...
Re: (Score:1)
It was easy to reproduce; they just made it up the same way they made up the first result. Psychology: you'd have to be out of your mind to believe it.
Re: (Score:2)
For your heresy and disbelief you have been diagnosed as a paranoid schizophrenic!
Go straight to asylum; don't pass go, don't receive telepathic messages.
Re: (Score:1)
Re: (Score:3, Funny)
Angry.
Mostly angry.
Re: (Score:1)
How do you feel?
I see what you did there.
Re:Psychology (Score:5, Interesting)
Psychology is a huge field. Perception, experimental analysis of animal behaviour, clinical psychology, cognitive biases etc. etc. (Note that only one of those involves psychiatrists.) Some bits allow for harder science than other bits.
I personally don't know enough about psychiatry to form a judgement on how scientific they are, but unlike you, at least I know what a psychologist is (or something of the range that they could be.) Your trite dismissal says much about your ignorance and nothing about psychology.
Re:Psychology (Score:5, Interesting)
Psychology is a soft science because of the numerous variables that studies often simplify into constants, for simplicity's sake and nothing else. Economics and politics are the same, mostly because they're based on psychology.
It's an inexact science because the human condition is imperfect. As opposed to the hard sciences, which are exact, because the universe around us is "perfect". And then, there's computer science, which is a mathematical, computational science that's absolute. It's not even "perfect" anymore; it's exactly what the maths say it is, and any failure sits between keyboard and chair.
Anyway, psychology is important, because the only way to truly understand the imperfect conditions of humans is via an inexact science. And it's something only fully understood by humans (computers can simulate the hard sciences to a calculable degree of accuracy, but they'll never be able to simulate the soft sciences in the same way), and innately at that.
The way to think about psychology is using fractals. A fraction x of the population (where x is large enough to be statistically significant) behaves in manner a; a fraction x*(1-x) behaves in manner b; a fraction x*(1 - x*(1-x)) behaves in manner c; etc. What a, b, c, etc. actually are is up to you to figure out. And when you change the test, the individual that falls into one category is not guaranteed to fall into the same category again.
Note that the human mind can comprehend infinity (poorly for most, but very possible for a few), both countable and uncountable variants, but a computer will never be able to calculate it. So the fractal analogy works really, really well.
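One literal reading of that cascade (treating the percentages as fractions) is the iteration t(n+1) = x*(1 - t(n)), starting from t(1) = x. A minimal sketch in Python, with x chosen purely for illustration:

# One literal reading of the parent's cascade, treating the percentages as
# fractions: a = x, b = x*(1 - a), c = x*(1 - b), and so on.  The value of x
# and the number of categories are assumed purely for illustration.
x = 0.6          # assumed fraction clearing "statistical significance"
t = x
for label in "abcde":
    print(f"manner {label}: {t:.3f} of the population")
    t = x * (1 - t)

The sequence settles toward x/(1+x) rather than neatly partitioning the population, so take the analogy loosely.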
Re: Psychology (Score:1)
The universe only appears to be perfect to macroscopic viewers, because the time dimensions are held comparatively constant due to between small particles.
In other words, an electron -- which may well "see" one spatial dimension and three time dimensions -- appears to behave statistically, not predictably. Therefore, quantum mechanics behaves like an imperfect science, as you call it.
That said, psychology, economics, and sociology attempt to make their science more perfect, often, by binding large numbers of s
Ironically... (Score:2)
Psychology is a soft science because of the numerous variables that studies often simplify into constants, for simplicity's sake and nothing else. Economics and politics are the same, mostly because they're based on psychology.
It's an inexact science because the human condition is imperfect. As opposed to the hard sciences, which are exact, because the universe around us is "perfect". And then, there's computer science, which is a mathematical, computational science that's absolute. It's not even "perfect" anymore; it's exactly what the maths say it is, and any failure sits between keyboard and chair.
Anyway, psychology is important, because the only way to truly understand the imperfect conditions of humans is via an inexact science. And it's something only fully understood by humans (computers can simulate the hard sciences to a calculable degree of accuracy, but they'll never be able to simulate the soft sciences in the same way), and innately at that.
The way to think about psychology is using fractals. A fraction x of the population (where x is large enough to be statistically significant) behaves in manner a; a fraction x*(1-x) behaves in manner b; a fraction x*(1 - x*(1-x)) behaves in manner c; etc. What a, b, c, etc. actually are is up to you to figure out. And when you change the test, the individual that falls into one category is not guaranteed to fall into the same category again.
Note that the human mind can comprehend infinity (poorly for most, but very possible for a few), both countable and uncountable variants, but a computer will never be able to calculate it. So the fractal analogy works really, really well.
Ironically, the things that make psychology a soft science are the same things theoretical physicists rely on. Both lean heavily on probability versus directly observable inputs (although for different reasons). So I would posit that what makes psychology and the others you mentioned soft sciences rather than hard sciences is not exactitude, but semantics. That, and the fact that it was the hard sciences that created the definition in the first place.
Re: (Score:3)
Psychology is a huge field. Perception, experimental analysis of animal behaviour, clinical psychology, cognitive biases etc. etc.
No, the field you're thinking of is Neuroscience and Cybernetics -- These have evidence based on observation and models which have predictive power. Psychology is just confirmation bias. [wikipedia.org] You must prove the null hypothesis more implausible than the original hypothesis, yet Psychology does not do this. For every ridiculous Sexual Epistemology, there's an equally valid Scatological Epistemology.
The truth is that neurons fire in brains, and that complexity gives rise to emergent behaviours. Leaping the gulf
Re: (Score:3)
Psychology is a huge field. Perception, experimental analysis of animal behaviour, clinical psychology, cognitive biases etc. etc.
No, the field you're thinking of is Neuroscience and Cybernetics -- These have evidence based on observation and models which have predictive power. Psychology is just confirmation bias. [wikipedia.org] You must prove the null hypothesis more implausible than the original hypothesis, yet Psychology does not do this. For every ridiculous Sexual Epistemology, there's an equally valid Scatological Epistemology.
The truth is that neurons fire in brains, and that complexity gives rise to emergent behaviours. Leaping the gulf in understanding to arrive at the explanations that Psychology and Philosophy give is akin to claiming a God in a Chariot pulls the Sun across the sky.
Your argument is only valid for recent times. For most of the history of modern psychology, neuroscience and psychology were not separate fields. Often in med schools today, the psych departments and the neurology departments have been combined, because we have found that the two are interrelated.
Since most psychological research deals with evidence based on observation and models which have predictive power, what distinguishes it from the subclassification of neuroscience? It's a little bit like
An apt analogy (Score:2)
Good post Woodhams. I'll use an analogy I formed when discussing Psychology with my girlfriend, who's been in the field a while: Psychology today is like studying Chemistry in the Bronze Age. Back then, they didn't have the means to understand the why of this chemical working with that chemical; they just knew it worked and did Chemistry via trial and error and guessing. Today, psychology is classifying things based on relations and forming best practices, but we don't understand why things are the way the
Re: (Score:2)
Good post Woodhams. I'll use an analogy I formed when discussing Psychology with my girlfriend, who's been in the field a while: Psychology today is like studying Chemistry in the Bronze Age. Back then, they didn't have the means to understand the why of this chemical working with that chemical; they just knew it worked and did Chemistry via trial and error and guessing. Today, psychology is classifying things based on relations and forming best practices, but we don't understand why things are the way they are because of our limited understanding of the brain.
Maybe things will change in 100 years, maybe not. I think the field is worth its weight in gold though, there's a lot of good that can be/is being done and a lot of progress still to be made.
That is an extremely narrow view of psychology today, and it pretty much views it in terms of therapy. Let me ask you this: when Warren Buffett invests in the market using a contrarian strategy, are you stating that there is no underlying science backing him up? I ask because he and many others seem to be quite successful at it.
Real psychology has a lot more depth than the therapist's couch. Should the determination of what is science be based on if it can fulfill the requirements of the scientific method vers
Not bad at all (Score:5, Interesting)
Re:Not bad at all (Score:4, Insightful)
The fact that people are trying to reproduce the experiments is good news in and of itself.
Re: (Score:2)
Re:Not bad at all (Score:5, Insightful)
If the scientific community valued reproducibility as much as original work, we would solve 2 problems:
1) Science without confirmation can lead us astray for years.
2) There are plenty of scientists who are great at experimentation but lousy at coming up with new ideas, and these scientists (or potential scientists) may not be reaching their full potential.
And while we're at it, let's value failed experiments as much as successful experiments.
Re: (Score:2)
1) Science without confirmation can lead us astray for years.
It's not science if it's not tested. It's just making assumptions.
It's also not science if you just take their word for it when they said they tested it. It might have been originally, but not any more.
Re: (Score:2)
Okay, but don't confuse a failed experiment with a successful experiment that returned negative results.
Re:Not bad at all (Score:5, Interesting)
[The studies chosen for reproduction] included classic results from economics Nobel laureate and psychologist Daniel Kahneman at Princeton University in New Jersey
At least some of these were fairly important research, which ideally would have been verified more than once. That there was any doubt that they would be reproducible is worrisome in itself.
Re: (Score:2)
Re: (Score:3)
This is very important, as they did not randomly pick studies but rather chose the ones they "Deemed Worthy". As they did not want to be proven bad scientists (I assume), their conscious or unconscious bias will have been towards sound or easy studies.
Re:Not bad at all (Score:5, Informative)
Did you read TFA? Or did you choose sentences to read at random? Those were quoted as the results that worked. In fact, here is the original paragraph:
Ten of the effects were consistently replicated across different samples. These included classic results from economics Nobel laureate and psychologist Daniel Kahneman at Princeton University in New Jersey, such as gain-versus-loss framing, in which people are more prepared to take risks to avoid losses, rather than make gains [1]; and anchoring, an effect in which the first piece of information a person receives can introduce bias to later decisions [2]. The team even showed that anchoring is substantially more powerful than Kahneman’s original study suggested.
Two that didn't were about social priming: one was currency priming, in which participants supported what I assume is the current state of capitalism after seeing money, and the other primed feelings of patriotism with a flag. Moreover, both original authors were positive about it:
Social psychologist Travis Carter of Colby College in Waterville, Maine, who led the original flag-priming study, says that he is disappointed but trusts Nosek’s team wholeheartedly, although he wants to review their data before commenting further. Behavioural scientist Eugene Caruso at the University of Chicago in Illinois, who led the original currency-priming study, says, “We should use this lack of replication to update our beliefs about the reliability and generalizability of this effect”, given the “vastly larger and more diverse sample” of the Many Labs project. Both researchers praised the initiative.
There you go, quoting the article directly since you can't be bothered to read it. It is true that they apparently chose what some consider to be important effects and the evidence against social priming is upsetting to some. Still, the fact that verification actually happened and people are happy about it shows science is alive and kicking.
Anyway, another cool thing about this study is that it uses this thing, the open science framework [openscienceframework.org], which I hadn't heard about until today but seems pretty cool.
Re: (Score:2)
Clearly (Score:2, Funny)
Scientists must have *more sex* if they are to reproduce... :P
It's hopeless. (Score:2)
Science has a reproducibility problem? (Score:3)
I'll believe it when I see it.
Re: (Score:1)
And then see it again.
The problem isn't necessarily reproducibility (Score:5, Interesting)
The "problem" with experiments that aren't reproducible may not be with the experiments as much as with the popular media that decides to make sweeping generalizations based on one result. Though I guess some blame definitely needs to be applied to the researcher who allows unverified results to be misrepresented to get that 15 minutes of fame in a quote in The Guardian or USA Today...
Re: (Score:2)
The whole point of an experiment is to run enough trials to gain statistical confidence. It's supposed to be its own validation in that sense.
So it's either a systematic error in their experiment, or fraud.
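For a sense of what "enough trials" means in practice, here is a minimal sketch using the standard normal-approximation sample-size formula for comparing two groups; the effect sizes, power, and alpha are example values, not anything from TFA:

# Rough sample-size calculation for detecting a standardized effect d in a
# two-group comparison, using the usual normal approximation.  Effect size,
# power, and alpha are example values only.
from math import ceil
from statistics import NormalDist

def n_per_group(d, power=0.80, alpha=0.05):
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    return ceil(2 * ((z_alpha + z_beta) / d) ** 2)

for d in (0.2, 0.5, 0.8):    # Cohen's "small", "medium", "large" effects
    print(f"d = {d}: about {n_per_group(d)} subjects per group")

Even at the conventional 80% power, one attempt in five will miss a true effect just by chance.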
Re: (Score:2)
Reading works like that, and others from the same authors and institutes, we got the feeling that it was mostly a delusion that they were doing science. Like how when a 4-y
Re: (Score:2)
there's a delusion that it's actually doing the washing up, and you really don't want to spoil its fun by breaking that illusion - it's not doing any harm, is it?
The harm comes when they "dry" the dishes with the dish towel, getting it all nasty and dirty, and then put the dishes away in the cupboard still dirty, contaminating the other dishes they come into contact with. There's a direct analogy to be made here.
Re: (Score:2)
But don't be quick to assume it's fraud, because it's very, very easy to make a systematic error in designing an experiment.
Re: (Score:2, Funny)
Re: (Score:1)
That's not really a problem from the perspective of scientists - in the fields of psychology, cog sci, and neuroscience, I've never encountered an instance of a researcher using any popular media distillation of some study as a meaningful source of info on that study (aside from making them aware of the study's existence).
Also, you seem to be assigning some a priori status of reproducibility or lack thereof to some studies, which really confuses the issue. For example, what does it mean for an experiment to
Re: (Score:2)
For example, what does it mean for an experiment to be "not reproducible"?
It means that, as a scientist, you have failed to produce a worthwhile experiment. Hypothesis is first. A road map for reproducibility is next. If said map does not function, you have failed.
Re: (Score:1)
I guess I should clarify the larger epistemic point at which I was hinting. That others may not, in some reasonable number of attempts, reproduce an experiment does not mean that the experiment is categorically not reproducible. Any number of things, such as lab conditions (which are not, in practical, absolute terms, reproducible within a lab, much less between different labs), can influence the results of experiments, and while adhering to certain sound methodological principles abstracts away a lot of th
Re: (Score:2)
Isn't the ability to reproduce results based on the "idea" in the methods section central to the concept of scientific "reproducibility"? If I claim I applied one set of methods and got a certain result --- but those methods are so different from the physical reality of my setup that reasonable adherence to the stated methods will produce a vastly different result --- then I have failed at publishing a "reproducible" experiment.
Example:
"By the method of releasing lead spheres at rest into the air, I have obser
Re: (Score:2)
Sorry, I have no idea how Slashdot managed to completely mangle my post, cutting out a big chunk in the middle, after previewing and submitting. I don't have the patience to re-write it, so just ignore the garbled mess left after Slashdot's unexpected redaction of a whole middle paragraph.
Re: (Score:1)
Don't worry about the post getting cut off - the point you were trying to make is clear. To address the idea that "the methods are different from the physical reality of my setup" - it's simply not possible to be comprehensive in the instructions you provide when you write up your methods, and furthermore, in reality, variables are often introduced in a lab which impact experimental results, but which are not accounted for in the methods writeup because the authors are not aware of these variables (this is
Re: (Score:2)
No, I still think that, if the methods are insufficiently described (do not sufficiently capture the nuances of the physical experiment) to permit reliable replication of results, then this makes an experiment un-reproducible by definition.
Indeed, allowances must be made for the study of ephemeral phenomena --- people's perceptions in some particular time and place; a comet passing by once --- that preclude actual recreation of the experiment. So, I think there is a "methodological description" requirement
Re: (Score:2)
You make absolutely no sense here. If I follow the original researcher's notes and experiment design correctly and cannot obtain his result, the experiment was not reproducible. That's what "not reproducible" *means*--I couldn't reproduce his results.
Re: (Score:2)
That's not really a problem from the perspective of scientists
Most people here are not looking at it from the perspective of scientists, though, but from the (as the article states) "much publicized" perspective.
For example, what does it mean for an experiment to be "not reproducible"?
Exactly. You should be asking that question about the article, not to me :) In fact I do have a neuroscience background, even if that's not what I am doing these days... but that is pretty much irrelevant to this thread, which was to start a conversation about how the mainstream media tends to latch onto any unverified/unreproduced study and report it as the cano
Re: (Score:3)
If you're going to pick a paper, then The Guardian was not a good choice, given that Dr Ben Goldacre writes a regular column for them called "Bad Science" where he critiques terrible science reporting in the media (amongst other things).
Re: (Score:2)
But unfortunately not all of his colleagues share his standards for scientific reporting - there have been plenty of horribly reported scientific studies in the Guardian, as well as in almost all "popular media" sources that just can't help it...
I read the headline... (Score:4, Funny)
...and thought, "Now there's a ray of sunshine for Slashdotters."
As a group, that is.
Re: (Score:2)
Funny, because there's another study recently published that said technology is killing everyone's sex lives. It's probably old news here, as it's just a downward extrapolation of the extreme case found here.
I thought at first... (Score:2)
I thought at first it was saying 36 groups each tried to reproduce the results of 13 experiments, and all 36 were successful with 10 of 13 (though not necessarily the same ten), successfully reproducing the results of a reproducibility meta-experiment.
Eh (Score:2)
So...they couldn't reproduce the reproducibility problem...?
"not been able to reproduce" (Score:2)
Were they not able to reproduce the outcome of an experiment, or were they not able to reproduce the whole experiment (as in "We assume that during a total eclipse in the month of May..." or "to reproduce this, take any old Large Hadron Collider lying around...")?
Re: (Score:2)
Regarding the failures:
"Of the 13 effects under scrutiny in the latest investigation, one was only weakly supported, and two were not replicated at all. Both irreproducible effects involved social priming. In one of these, people had increased their endorsement of a current social system after being exposed to money[3]. In the other, Americans had espoused more-conservative values after seeing a US flag[4]."
[3
The Decline Effect (Score:1)
One of my favorite experiments (Score:2)
I have a particular experiment which, in my mind, highlights an aspect of human nature we would all prefer to deny: we all have within us the capacity to ruthlessly abuse others, even and especially friends and family, when given the opportunity. This famous experiment [wikipedia.org] took a group of peers where some were assigned the role of prison guard and others that of prisoner. It really didn't take long before things went really bad.
I have often heard that corruption is a problem of opportunity more than of character.
Hold on a second (Score:2)
"Science has a much publicized reproducibility problem" links to an article that says "as many as 17â"25% of such findings are probably false", but a group reproducing 10 out of 13 experiments (23% not replicable) is striking a blow for reproducibility?
this is really great (Score:2)
Reproducing 10 out of 13 experiments is really quite good.
I'm a materials physicist, and if I could reproduce that ratio of experiments in my field, I would be very happy and a little bit surprised.
This problem of non-reproducible results in science is due to poor training, poor writing, poor experiment design and a direct link between citations and funding.
That's 10/10 if you leave out those 3 bad ones! (Score:2)
You come up with some ridiculously simple experiment, like giving people the same amount of money for a simple boring task, or a complex creative task.
Turns out most people go for the simple boring one.
Study Conclusion: People prefer simple boring tasks!
My Conclusions: why take more risk of screwing up when you can make the same money with something easy?
Who says these researchers didn't select experiments they a priori thought seemed reasonable?