Psychology's Replication Battle

An anonymous reader sends this excerpt from Slate: Psychologists are up in arms over, of all things, the editorial process that led to the recent publication of a special issue of the journal Social Psychology. This may seem like a classic case of ivory tower navel gazing, but its impact extends far beyond academia. ... Those who oppose funding for behavioral science make a fundamental mistake: They assume that valuable science is limited to the "hard sciences." Social science can be just as valuable, but it's difficult to demonstrate that an experiment is valuable when you can't even demonstrate that it's replicable. ... Given the stakes involved and its centrality to the scientific method, it may seem perplexing that replication is the exception rather than the rule. The reasons why are varied, but most come down to the perverse incentives driving research. Scientific journals typically view "positive" findings that announce a novel relationship or support a theoretical claim as more interesting than "negative" findings that say that things are unrelated or that a theory is not supported. The more surprising the positive finding, the better, even though surprising findings are statistically less likely to be accurate.
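The excerpt's last claim — that surprising positive findings are less likely to be accurate — is a straightforward consequence of Bayes' rule: the rarer true hypotheses are in the pool being tested, the more of the "significant" results are false positives. A minimal sketch in Python, using illustrative values for prior plausibility, statistical power, and the significance threshold (none of these numbers come from the article):

```python
def ppv(prior, power=0.8, alpha=0.05):
    """Positive predictive value: the probability a statistically
    significant finding reflects a true effect, given the prior
    probability that the tested hypothesis is true."""
    true_positives = prior * power          # real effects detected
    false_positives = (1 - prior) * alpha   # null effects that pass p < alpha
    return true_positives / (true_positives + false_positives)

# A mundane hypothesis (50% plausible a priori): most significant
# results are real.
print(round(ppv(0.5), 3))   # → 0.941

# A surprising hypothesis (5% plausible a priori): under half of the
# significant results are real, despite the same p < 0.05 threshold.
print(round(ppv(0.05), 3))  # → 0.457
```

With the same test procedure and the same p-value cutoff, the believability of a positive result depends heavily on how plausible the hypothesis was going in — which is exactly why a journal preference for surprise tends to fill the literature with findings that fail to replicate.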
  • Re:WTF? (Score:3, Interesting)

    by thesandtiger ( 819476 ) on Sunday August 03, 2014 @09:47AM (#47593257)

    There are different levels of replication.

    In physics, you can generally replicate an experiment very precisely if you've got a handle on the factors that went into that experiment - control the environment, etc. You can have an almost perfect replication. Yay, science!

    In social psychology research you can't ever even approach that same level of control over the environment the experiment takes place in. The subject will be different - even if it's the same subject used in the first experiment, because people change over time/exposure. The interviewer will be different because people change over time. The dynamic between interviewer and subject will be different. The history of the subject will be different as will the history of the interviewer as will the place the interview is taking place, etc. etc. etc.

    The best such research can do is find that there is a tendency for x to happen in y circumstances, while acknowledging that it might not always be the case.

    And, actually, there is a fair amount of basic replication that goes on in many psychological studies; when I was in the field working on studies we would routinely include certain basic measures that had been used in tens of thousands of studies before and compare anticipated vs. actual outcomes.

    But even a failed replication has some value: if the main reported effect doesn't show up again under mostly similar circumstances, that indicates the factor the original experiment identified as contributing to it probably isn't as strong as hypothesized, and it cuts off a (probably) blind alley.

  • Re:WTF? (Score:4, Interesting)

    by sexconker ( 1179573 ) on Sunday August 03, 2014 @03:01PM (#47594867)

    None of those are experiments. Experiments test hypotheses. You have to specifically DO something to test your claim, and NOT do it in a control condition, for it to be an experiment.

  • by PPalmgren ( 1009823 ) on Monday August 04, 2014 @08:54AM (#47598805)

    There are plenty of good psychology experiments/case studies that produce a lot of really useful information and are repeatable (albeit over a very long period of time). The problem is there are also a lot of complete and utter ass psychology experiments. It is really, really hard to produce a good study that provides useful results in the soft sciences, and in the case of psychology, they take a very long time and sometimes a lot of money to complete. Yes, they have to account for a lot of variables and exclude them via statistical analysis, but the ones that do it right do it exceptionally well.

    I used to think negatively on those types of studies until I actually took the time to read one while helping my girlfriend with a paper. I was amazed at the level of detail and the amount of effort they took to isolate the results into meaningful data.
