
Science

Psychology's Replication Battle 172

An anonymous reader sends this excerpt from Slate: Psychologists are up in arms over, of all things, the editorial process that led to the recent publication of a special issue of the journal Social Psychology. This may seem like a classic case of ivory tower navel gazing, but its impact extends far beyond academia. ... Those who oppose funding for behavioral science make a fundamental mistake: They assume that valuable science is limited to the "hard sciences." Social science can be just as valuable, but it's difficult to demonstrate that an experiment is valuable when you can't even demonstrate that it's replicable. ... Given the stakes involved and its centrality to the scientific method, it may seem perplexing that replication is the exception rather than the rule. The reasons why are varied, but most come down to the perverse incentives driving research. Scientific journals typically view "positive" findings that announce a novel relationship or support a theoretical claim as more interesting than "negative" findings that say that things are unrelated or that a theory is not supported. The more surprising the positive finding, the better, even though surprising findings are statistically less likely to be accurate.
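The excerpt's closing claim — that surprising positive findings are statistically less likely to be accurate — is a base-rate effect. A rough sketch (the prior probabilities, alpha, and power below are illustrative assumptions, not figures from the article):

```python
# Sketch: positive predictive value (PPV) of a significant result.
# A "surprising" finding is one with a low prior probability of being
# true; even with standard alpha = 0.05 and power = 0.8, most
# significant results for low-prior hypotheses are false positives.

def ppv(prior, alpha=0.05, power=0.8):
    """Probability that a significant result reflects a true effect."""
    true_pos = prior * power          # real effects correctly detected
    false_pos = (1 - prior) * alpha   # null effects wrongly "detected"
    return true_pos / (true_pos + false_pos)

# Mundane hypothesis: half of the effects tested are real.
print(round(ppv(0.5), 2))   # 0.94
# Surprising hypothesis: only 1 in 20 effects tested is real.
print(round(ppv(0.05), 2))  # 0.46 -- a coin flip, despite p < .05
```

The same arithmetic is why replication matters: a second independent significant result raises the posterior sharply, while a failed replication exposes the false positives.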
This discussion has been archived. No new comments can be posted.


  • by Tablizer ( 95088 ) on Sunday August 03, 2014 @05:33AM (#47592585) Journal

    Psychologists are up in arms

    Perhaps they need some therapy :-)

    a fundamental mistake: They assume that valuable science is limited to the "hard sciences."

    Software engineering has a similar problem. Things that are objective to measure, such as code volume (lines of code), are often only part of the picture. The psychology of developers (perception, etc.), especially during maintenance, plays a big role, but is difficult and expensive to measure objectively.

    Thus, arguments break out about whether to focus on parsimony or on "grokkability". Some will also argue that if your developers can't read parsimony-friendly code, they should be fired and replaced with those who can. This gets into tricky staffing issues as sometimes a developer is valued for their people skills or domain (industry) knowledge even if they are not so adept at "clever" code.

    Thus, the "my code style can beat up your style" fights involve both easy-to-measure "solid" metrics and very difficult-to-measure factors about staffing, side knowledge, people skills, corporate politics, economics, etc.
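The parsimony-versus-grokkability tension the parent describes can be made concrete with a toy example (hypothetical code, not from the thread): two equivalent functions where the shorter one "wins" on lines of code but is arguably harder to maintain.

```python
# Two equivalent ways to total order amounts by status. The terse
# version has fewer lines; whether it is *better* depends on who
# maintains it -- exactly the hard-to-measure factor the parent names.

def tally_terse(orders):
    return {s: sum(o["total"] for o in orders if o["status"] == s)
            for s in {o["status"] for o in orders}}

def tally_plain(orders):
    # Accumulate totals by status, one explicit step at a time.
    totals = {}
    for order in orders:
        status = order["status"]
        totals[status] = totals.get(status, 0) + order["total"]
    return totals

orders = [{"status": "paid", "total": 10},
          {"status": "open", "total": 5},
          {"status": "paid", "total": 7}]
assert tally_terse(orders) == tally_plain(orders) == {"paid": 17, "open": 5}
```

A LOC metric sees only that the first is shorter; it says nothing about which one the next maintainer will misread.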

  • WTF? (Score:5, Insightful)

    by Oidhche ( 1244906 ) on Sunday August 03, 2014 @05:51AM (#47592619)

    it's difficult to demonstrate that an experiment is valuable when you can't even demonstrate that it's replicable

    Duh. That's because an experiment that is not replicable has *no* value.

  • by jamesl ( 106902 ) on Sunday August 03, 2014 @06:13AM (#47592655)

    The reasons why are varied, but most come down to the perverse incentives driving research. Scientific journals typically view "positive" findings that announce a novel relationship or support a theoretical claim as more interesting than "negative" findings ...

    This applies to all science, not just psychology.

  • by Anonymous Coward on Sunday August 03, 2014 @06:16AM (#47592665)

    When psychologists stop producing so many studies with obvious bias, subjective terminology, subjective conclusions, and stop arbitrarily coming to conclusions based on data flawed for those reasons, maybe it could be taken seriously. Obviously, replication is needed, too.

    But so many people are fooled by it. Want a study that says video games cause people to be aggressive? There's a psychology study for you, but there's also one for your opponents. And all of them are bad science.

  • by Anonymous Coward on Sunday August 03, 2014 @06:17AM (#47592667)

    Falling into the 'cult' category

  • by Anonymous Coward on Sunday August 03, 2014 @06:35AM (#47592685)

    No, and it shouldn't carry the "science" label in the first place. Call it "social studies" or whatever. Calling it science tries to put it on the same level as real science, even though the processes differ on numerous levels, and that's an insult to real science. For example, when a scientist builds a collider to find a particle and finds one, he publishes the results so peers can verify them, and if the collective brainpower finds an error and knocks the result down, the process is still considered a success. Meanwhile, soft "scientists" go unverified by peers, separate studies have to point out that the results aren't even replicable, and people bitch about and defend their research and its funding.

  • by awol ( 98751 ) on Sunday August 03, 2014 @06:49AM (#47592705) Journal

    "Those who oppose funding for behavioral science make a fundamental mistake: They assume that valuable science is limited to the "hard sciences." Social science can be just as valuable, but it's difficult to demonstrate that an experiment is valuable when you can't even demonstrate that it's replicable."

    No, those of us that oppose the funding of this crap recognise that if you cannot replicate your "study" then it is not an experiment. If what you are doing cannot be proved (one way or the other) by experiment then IT IS NOT SCIENCE. I don't really care what it gets called, and some of it may even be valuable, for some values of valuable, but the amount of dross produced by social researchers who try to call themselves scientists is truly extraordinary and a plague on our world.

  • by sjwt ( 161428 ) on Sunday August 03, 2014 @07:06AM (#47592761)

    Yup, like the recent one about men not being able to 'be alone with their own thoughts' [washingtonpost.com].

    That same data could also be read as 'Men are more willing to put up with pain' or 'Men are more curious and want to know what they might experience.'

  • by Intrepid imaginaut ( 1970940 ) on Sunday August 03, 2014 @08:22AM (#47592959)

    The above comment is precisely why these "social sciences" need to be delegitimised and rubber-roomed until they can figure out the meaning of the phrase "scientific method". Grant them no authority in deciding government policy, massively defund them in academia, get them out of the courtrooms, and generally pillory them for the witchdoctors they are.

    If you have to ask why, you're part of the problem.

  • by hsthompson69 ( 1674722 ) on Sunday August 03, 2014 @08:51AM (#47593045)

    http://neurotheory.columbia.ed... [columbia.edu]

    "It's a kind of scientific integrity, a principle of scientific thought that corresponds to a kind of utter honesty--a kind of leaning over backwards. For example, if you're doing an experiment, you should report everything that you think might make it invalid--not only what you think is right about it: other causes that could possibly explain your results; and things you thought of that you've eliminated by some other experiment, and how they worked--to make sure the other fellow can tell they have been eliminated."

    In the search for positive results, and the p-hacking used to get there, they're failing to demonstrate scientific integrity.
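The p-hacking the parent mentions is easy to quantify. A minimal sketch, assuming the conventional alpha of 0.05: if a researcher tests many independent outcomes and reports whichever crosses p < .05, the chance of at least one spurious "finding" under the null grows quickly.

```python
# Family-wise false-positive rate: probability that at least one of
# k independent tests of true null hypotheses comes out "significant"
# at alpha = 0.05. This is the mechanism behind p-hacking.

alpha = 0.05
for k in (1, 5, 20):
    family_wise = 1 - (1 - alpha) ** k
    print(f"{k:>2} tests -> {family_wise:.2f}")
# prints:
#  1 tests -> 0.05
#  5 tests -> 0.23
# 20 tests -> 0.64
```

With 20 measured outcomes and no correction, a null result "significant" somewhere is more likely than not, which is why undisclosed flexibility in analysis produces so many unreplicable positives.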

  • by Anonymous Coward on Sunday August 03, 2014 @12:50PM (#47594135)

    Here's my challenge to individuals such as yourself who denigrate psychological science:

    How would *you* study behavior?

    It's very easy to dismiss behavioral sciences when you're not trying to study behavior. It's a very complex, difficult topic. E.g., how do you define depression? How do you define psychosis? How do you determine whether or not early childhood interventions actually have an effect on adult outcomes?

    Maybe you would argue that behavior shouldn't be approached scientifically, but that's a cop-out that leaves human experience to the philosophers.

    I'm sick of ignorant arm-chair narcissists denigrating psychology when they don't have the balls to admit they have no clue how to approach the subject because it's too hard for them to understand.

    I'm sorry for sounding harsh, but then so are the critical comments here.

    And no, neuroscience is not psychology. There's an extremely fuzzy boundary, and they overlap tremendously, but they're not the same. To find the neural substrates of depression, you have to be able to measure depression. So you either study behavior or you don't.

    Yes, there's a replication crisis in psychology, but it's the same in all of science. It's everywhere in the biomedical sciences (everyone here knows of cases like the big stem cell scandal, where the research turned out to be faked). And you don't hear physics being called a sham because of all the kooks publishing their poorly thought-out theories on arXiv.org.

    Get over yourself and start trying to solve the problems you belittle.

  • Re:WTF? (Score:4, Insightful)

    by ultranova ( 717540 ) on Sunday August 03, 2014 @05:29PM (#47595453)

    You have to specifically DO something to test your claim and NOT do other things for control for it to be an experiment.

    But in that case the word "experiment" has been defined so narrowly that it's no longer the sole validator of scientific theory. For example, General Relativity predicted that light would be affected by the Sun's gravitational field, which was later observed during a solar eclipse, a naturally occurring event.

