Most Science Studies Tainted by Sloppy Analysis

mlimber writes "The Wall Street Journal has a sobering piece describing the research of medical scholar John Ioannidis, who showed that in many peer-reviewed research papers 'most published research findings are wrong.' The article continues: 'These flawed findings, for the most part, stem not from fraud or formal misconduct, but from more mundane misbehavior: miscalculation, poor study design or self-serving data analysis. [...] To root out mistakes, scientists rely on each other to be vigilant. Even so, findings too rarely are checked by others or independently replicated. Retractions, while more common, are still relatively infrequent. Findings that have been refuted can linger in the scientific literature for years to be cited unwittingly by other researchers, compounding the errors.'"


  • Yup. (Score:3, Insightful)

    by Estanislao Martínez ( 203477 ) on Tuesday September 18, 2007 @12:24PM (#20654251) Homepage
    And they are routinely reported sensationalistically in the media, and most of you people reading this right now swallow it all hook, line, and sinker.
  • as a phd student (Score:3, Insightful)

    by KeepQuiet ( 992584 ) on Tuesday September 18, 2007 @12:27PM (#20654295)
    all I can say is "duh!" Everyone, being under "you have to publish" pressure, publishes whatever they can. Sad but true.
  • Sensationalist... (Score:5, Insightful)

    by posterlogo ( 943853 ) on Tuesday September 18, 2007 @12:28PM (#20654317)
    It is way off base to say that "most published research findings are wrong". It is often the case that the data analysis and interpretation for particular aspects of a research project (say, 1-2 figures in a 7-figure paper) are up for vigorous debate. The scientific community can, in the long run, converge on very robust ideas and drop those that are flimsy. To misleadingly imply that most research is wrong, which is exactly what the post suggests, is itself, ironically, a poor interpretation of flimsy data.
  • My experience (Score:3, Insightful)

    by Zelos ( 1050172 ) on Tuesday September 18, 2007 @12:31PM (#20654389)
    This was certainly true in my experience. When I did a review of mathematical methods in my area a while back, most papers had basic calculation errors, missing information that made reproducing the work difficult or impossible, and they all used carefully selected examples to show their work in the best light.
  • by Gallowglass ( 22346 ) on Tuesday September 18, 2007 @12:32PM (#20654417)

    And one of the first rules is, "Never take a single study as proof of anything! Wait till the results are replicated before you even think of moving to a conclusion."


    The major problem is really poor reporting on science research. The news media routinely blazon some **NEW * Scientific * Discovery!!!**. Then you read the story and somewhere around the 10th paragraph you might see that this is based on only one study - and oftentimes even before peer review.


    Every scientist knows this. It's a shame the public doesn't. They wouldn't worry so much.

  • "Most science..." (Score:5, Insightful)

    by Otter ( 3800 ) on Tuesday September 18, 2007 @12:34PM (#20654477) Journal
    In research published last month in the Journal of the American Medical Association, Dr. Ioannidis and his colleagues analyzed 432 published research claims concerning gender and genes.

    His work seems to focus on population genetics and epidemiology, which is notorious for having unreproducible claims due to a combination of uncorrected multiple testing, publication bias and statistical incompetence. This "gender and genes" is a perfect example: someone does a study, finds nothing, slices and dices the data until he gets p = 0.04 for females or Asians or smokers and publishes his breakthrough finding. I'd have been surprised if he hadn't found almost all of those to be wrong.

    If you look at more in-vitro molecular biology and biochemistry work, I doubt if nearly as high a percentage of it is clearly "wrong", although quite a bit of it is worthless.
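The "slices and dices the data until he gets p = 0.04" failure mode Otter describes is easy to reproduce numerically. Below is a minimal sketch (stdlib only; the subgroup count, sample sizes, and trial counts are illustrative assumptions): generate pure noise, test many subgroups per "study", and count how often at least one slice crosses p < 0.05.

```python
import math
import random

random.seed(42)

def two_sample_p(a, b):
    """Two-sided p-value for a difference in means (large-n normal approximation)."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    z = (ma - mb) / math.sqrt(va / len(a) + vb / len(b))
    return math.erfc(abs(z) / math.sqrt(2))

TRIALS = 1000    # simulated "studies"
SLICES = 20      # subgroups tried per study (females, Asians, smokers, ...)
N = 50           # subjects per arm in each slice

hits = 0
for _ in range(TRIALS):
    # Null world: there is no real effect in any subgroup.
    for _ in range(SLICES):
        treated = [random.gauss(0, 1) for _ in range(N)]
        control = [random.gauss(0, 1) for _ in range(N)]
        if two_sample_p(treated, control) < 0.05:
            hits += 1   # this "study" publishes its breakthrough subgroup
            break

print(f"studies finding a 'significant' subgroup in pure noise: {hits / TRIALS:.0%}")
# With 20 independent looks, roughly 1 - 0.95**20 ≈ 64% of null
# studies can report p < 0.05 for some slice.
```

Without a multiple-testing correction (e.g. Bonferroni, dividing the threshold by the number of looks), a majority of null studies can honestly report a "significant" subgroup.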

  • by posterlogo ( 943853 ) on Tuesday September 18, 2007 @12:36PM (#20654509)
    Furthermore, this epidemiologist primarily studied medically-related publications, and in fact focused mostly on high-profile research that makes broad claims or relies heavily on statistics to support a conclusion. Many research publications at the cell/molecular level do not rely on subtle statistical comparisons to prove a point. This guy is singling out research that is based heavily on correlations (like people with x, y, z are more likely to get a, b, or c diseases). He is only an expert in his own field, and I don't think he is qualified to judge every level of scientific publication, but he certainly doesn't mind the media attention.
  • by Crispy Critters ( 226798 ) on Tuesday September 18, 2007 @12:42PM (#20654647)
    The sample sizes are often small in medical/health studies and human beings have a lot of extra variables that are hard to control. The best thing to do is to wait for the meta-study, where someone analyzes all the studies that relate to an effect. After many studies have been done, do they agree? Do they appear to have been well done? Adding the studies together creates a larger sample size and hopefully averages out some of the variation due to flaws in the method.
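The "adding the studies together" step of a meta-study can be sketched with standard fixed-effect inverse-variance pooling; the five effect estimates and standard errors below are hypothetical:

```python
import math

# Hypothetical effect estimates (mean differences) and standard errors
# from five small studies of the same question.
studies = [(0.30, 0.20), (-0.10, 0.25), (0.45, 0.30), (0.05, 0.15), (0.20, 0.22)]

# Fixed-effect inverse-variance pooling: weight each study by 1/SE^2,
# so precise studies count for more and random scatter averages out.
weights = [1.0 / se ** 2 for _, se in studies]
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

print(f"pooled estimate: {pooled:.3f} +/- {1.96 * pooled_se:.3f} (95% CI)")
# The pooled standard error is smaller than any single study's SE.
```

This is the simplest pooling model; real meta-analyses also test for heterogeneity between studies and, crucially, for the publication bias discussed elsewhere in this thread.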
  • by posterlogo ( 943853 ) on Tuesday September 18, 2007 @12:43PM (#20654691)
    "We all know..." What are you basing this on??? As a postdoc, I've committed myself to a massive amount of work and I'm certainly not doing it for pay (which is meager), but a LITTLE respect would be nice. I've published a few studies, and it was incredibly hard work to do the kind of careful science that gets published. A small number of scandals, plus people like you who swallow any sensationalist piece of news out there, really cast things in an unfair light. I encourage you to read more scientific literature and actually try to understand how the scientific process works. Do you really think we could live in the technological age we do if "a good portion of all studies" were "bogus" or "based on nothing"? I find this incredibly insulting.
  • by cduck ( 9999 ) on Tuesday September 18, 2007 @12:46PM (#20654753)
    I am not a scientist.

    That being said, it's my understanding that most scientists work off of grants, and those grants fund novel research. Replicating results is obviously important in validating those results, but doing so seems at odds with the funding mechanisms that are, I'd guess, the reality for most researchers.

    Are researchers supposed to replicate the experiments of others in their spare time and on their own dime?

    (As rhetorical as that might have sounded, I actually welcome those with first-hand experience to respond to it)
  • by Otter ( 3800 ) on Tuesday September 18, 2007 @12:47PM (#20654777) Journal
    That's why around here (the University of Wisconsin), it's standard practice that if your work depends on someone else's result, you first replicate her experiment and make sure you get the same result. (If you can't, you write a letter to the appropriate publication making note of your inability to replicate the result.)

    Out of curiosity:

    1) What is the usual failure rate for replication?

    2) Do the letters routinely get published?

    3) You just do that for work you're following up with experiments, not for everything you cite, right?

  • It must be so... (Score:3, Insightful)

    by Actually, I do RTFA ( 1058596 ) on Tuesday September 18, 2007 @12:53PM (#20654919)

    "There is an increasing concern that in modern research, false findings may be the majority or even the vast majority of published research claims," Dr. Ioannidis said. "A new claim about a research finding is more likely to be false than true."

    TFA

    Since the criterion is that the claim is published, someone had to find the study new and interesting. Most new ideas are going to be wrong, and that's especially true the more significant they are. After all, how many crackpot theories were postulated between Newtonian and relativistic physics? On the other hand, most things that are easily verifiable are too obvious to be considered new and interesting. Note: while I find this interesting, I did not come up with the idea. Some economists published a similar study over a year ago postulating this as a reason. Of course, it's probably wrong.
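The base-rate reasoning in this comment is the heart of Ioannidis' own calculation: the chance that a "significant" claim is true depends on how many of the hypotheses being tested were true to begin with. A rough sketch (the prior, power, and alpha values below are illustrative assumptions, not figures from his paper):

```python
def ppv(prior, power=0.8, alpha=0.05):
    """Chance that a statistically significant finding is actually true,
    given the prior probability that a tested hypothesis is true
    (ignoring bias and multiple competing teams)."""
    true_pos = power * prior            # true effects correctly detected
    false_pos = alpha * (1 - prior)     # null effects crossing p < alpha
    return true_pos / (true_pos + false_pos)

# If only 1 in 11 tested hypotheses is true (10:1 odds against), a
# well-powered significant result is right only ~62% of the time:
print(round(ppv(1 / 11), 2))            # -> 0.62
# With weak power (0.2), most significant claims are false:
print(round(ppv(1 / 11, power=0.2), 2)) # -> 0.29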

  • Sloppy analysis (Score:3, Insightful)

    by benhocking ( 724439 ) <benjaminhocking@NOsPam.yahoo.com> on Tuesday September 18, 2007 @12:54PM (#20654927) Homepage Journal
    So what you're basically saying is that this study was tainted by sloppy analysis?
  • by Anonymous Coward on Tuesday September 18, 2007 @01:00PM (#20655059)
    The study does not refer to science in general, only to medicine. Medicine is hardly a typical example of standard science.

    1) Physicians are mostly trained to be healers (practitioners), not scientists.
    2) Medical data is very inaccurate compared to the data in standard science (physics, chemistry, geology, etc.)
    3) Dealing with inaccurate data requires advanced knowledge of mathematical statistics, and most medical doctors do not have a basic grasp of this field.
    4) Many MAJOR ERRORS in the medical literature are due to ignorance of basic principles of statistics.
    5) Sure, ignorance in the field of statistics is only one cause of poor medical research; however, it cannot be ignored.
  • People need to realise that a lot of those calling themselves scientists are not really scientists at all. They don't apply the scientific method. They massage data regularly. They misapply statistics constantly. They don't subject their theories to falsifiability [wikipedia.org]. They waffle, hand wave, engage in rhetoric, and generally do just about everything except an honest to goodness, old fashioned, solid scientific experiment.

    Feynman spotted them over 30 years ago. He called them Cargo Cult Scientists [wikipedia.org]. They put on the appearance of science, but have none of its substance. They give a good performance, like an actor playing a scientist on TV. They wear the clothes, speak the language, seemingly apply the methods. But it's all empty. There's no rigor. There's no insight. There's no real testing going on. It's all just people waving around graphs, and lines, and their qualifications, and formulae they don't understand, to support the theories they want to be true, regardless of whether they are true or not.

    It's because in this day and age, you can't be a witchdoctor. You can't appeal to spirits, or gods, or karma, or any of the other philosophical reasons thrown up in past ages. We live in "The Age of Reason", and people expect things to be proven to them "scientifically". So all the people who in the past would have risen high by browbeating, appealing to authority and writing great prose are forced to dress themselves up in white coats and go through the motions of an experiment before they proclaim their great revelations to the world. The experiments, however, are just as empty as all the old techniques, and bear only superficial relation to actual science.

    Personally, I think it's gotten worse over the last 30 years. The unwillingness of actual scientific communities to challenge the misapplication of their methods by unscientific ones has led to a dilution of the authority of science as a whole. Under the current regime any half-baked psychiatrist can show pictures to 20 undergraduates, record a few squiggles on an MRI, run the numbers through R over and over until he gets what he wants, proclaim to the world just about whatever he likes, and still be called a scientist! No wonder it's all too easy for the Intelligent Design movement to pose as "real science". Just look at how low the threshold for real science is.

    There's only one way to deal with Cargo Cult Scientists. You have to call them out. You have to show how flimsy and false their supposed science really is. You also need to learn all the old rhetorical techniques, because faced with someone who actually knows what they're doing, the Cargo Culter will fall back on very old and time-honored methods which enable him to win from a weak or false position. I think the real scientific community owes it to itself to show up these charlatans for what they really are: con men. If it doesn't, science will just become more diluted in the long run, until the public regards it the same way it regards homeopathy.
  • Re:Yup. (Score:2, Insightful)

    by mrseth ( 69273 ) on Tuesday September 18, 2007 @01:19PM (#20655475) Homepage
    You really believe this? This is completely counter to everything I've experienced as a scientist. People become scientists because they love science, not for the monetary rewards. If you want to be a rich climate scientist, all you need to do is get a job at the American Petroleum Institute. I am an "enviro" and couldn't give a shit, in general, about how you live your life, but yes, I will care what you do if it involves dumping dioxin into my local river. If that makes me a "nut" then I don't want to be sane.
  • by Anonymous Coward on Tuesday September 18, 2007 @01:23PM (#20655559)
    I am a scientist (polymeric materials).

    Are researchers supposed to replicate the experiments of others in their spare time and on their own dime?
    You are correct that no grant money is specifically allocated for reproducing others' results, nor for generating "null results" (showing that something isn't the case, e.g. that a particular methodology *won't* work). This is a problem, because it means that some important findings that are "uninteresting" don't get studied (or, worse, the data exists but never gets published).

    However it's not as bad as it initially sounds. Although we don't receive direct funding to reproduce results, that kind of thing frequently (but not always) happens anyway. If you're building on someone else's work, you will inevitably reproduce some of their experiments. Another example would be that you measure the same thing as someone else, but using a different technique. This kind of corroboration is sometimes even better than directly reproducing their method, because it shows that you can arrive at the same conclusion from a variety of techniques.

    So, truthfully, all of the important (and certainly all of the amazing/surprising) results do end up being reproduced in one way or another. But we never write grant proposals that are simply "we aim to reproduce the work done by X" ... rather we say "we intend to extend upon the work done by X by attempting Y."

    I wouldn't object to a tweak to the grant system that gave more recognition to reproducing results and obtaining null-results. But I don't view it as a huge shortcoming of science as it is currently practiced, since we scientists have enough discretion with how we design our experiments that we can put in various checks when they are required.
  • by m2943 ( 1140797 ) on Tuesday September 18, 2007 @01:24PM (#20655585)
    I think the majority of working scientists don't "know" this and won't believe it either.

    Personally, I think it's true. Between publish-or-perish, financial conflicts of interest, and various political and social movements trying to influence science, a lot of published science is worthless.

    However, there's a difference between believing that and actually showing that. Anybody wanting to fix that needs clear and convincing proof first.
  • by Anonymous Coward on Tuesday September 18, 2007 @01:33PM (#20655741)
    After all, studies show that most studies are wrong.

    Clever.

    The fact is, good science is hard work. In fact, it is damn hard work, requiring not only a supremely keen intellect but a very high tolerance for tedium, great attention to detail, and usually a big fat wad of cash. Also, it requires a profound lack of ego (and the ability to cope with failure and keep trying), given that a tremendous amount of effort can (and frequently does) wind up being completely discounted by peer review or another study.

    The endeavor of scientific research obviously provides us tremendous benefits, and is furthering the evolution of our species at a blindingly fast rate (depending on how you look at it, of course). It is very important, very hard, and very expensive.

    There are many, many people who would like to be scientists but really don't have the brain for it (as I stated above, it isn't just intelligence that matters). Unfortunately, a lot of them wind up doing research anyway, and they cause problems. Hopefully there are enough good scientists with enough funding to clean up their mess.
  • Re:Yup. (Score:3, Insightful)

    by BlueParrot ( 965239 ) on Tuesday September 18, 2007 @01:38PM (#20655863)
    Right, have a look at the article about this story. Does it deal with physics? Chemistry? Climate science? Meteorology? Biology? No? So who is surprised that, despite this study (which is ironically a bit sloppy) having nothing to do with the key elements of climate science, global warming is the example people jump at? I mean, for the love of god... The study deals only with a small subset of scientific research and concludes that most papers contain errors; slashdot translates this into "most research is wrong", and the parent takes the opportunity to have a jab at global warming, citing a documentary which is not only sloppy and known to be fraudulent but also non-scientific. I would try to form a decent conclusion based on this, but in the spirit of this thread I will just say: People suck! The above post proves it!
  • by NeutronCowboy ( 896098 ) on Tuesday September 18, 2007 @01:51PM (#20656123)
    Ceteris paribus: Latin for "all else being equal". In the day and age of Google, it doesn't help with sounding mysterious and knowledgeable. Considering you repeatedly use the phrase instead of its perfectly valid English equivalent, I suspect you're more interested in demonstrating your grasp of Latin than in advancing the discussion. Though if that's your goal, I would have liked a post completely in Latin. Tamen google mos non succurro vos. (Roughly: "Still, Google won't help you.")

    Just to be somewhat relevant - while there are negative feedback loops, there are also positive CO2/temperature feedback loops. Albedo comes to mind. Not only that, but a lot of the feedback mechanisms are known. The only question is "how much", not "how". The biggest reason people are concerned is that certain things (ice melt, albedo changes) are happening faster than expected, pointing to parameters that were set too conservatively.

    Finally, the "ceteris paribus impact" (Hah!) is perfectly well known. Increase CO2 in an atmospheric gas mix, and infrared absorption goes up. End of story.
  • by Goldsmith ( 561202 ) on Tuesday September 18, 2007 @01:57PM (#20656235)
    The Wall Street Journal headline is a tautology. (Note that he's not talking about scientific misconduct, only honest mistakes, incorrect analysis or experimental design which could be improved.)

    There are almost no areas of science we're "done" with. The most recent paper on a subject almost always points out where previous papers have gone wrong. Thus, the previous papers have some mistake such as a miscalculation, poor design or incomplete analysis. If you pick any paper published in a peer reviewed journal this month, there's a very high probability that at some point in the future it will be amended or improved by some other paper.

    What Ioannidis *has* shown in his recent reports is that in genetics, not enough people are publishing on the same subjects. There are not enough "other papers" out there to check on the previous ones. The result is that papers which in other fields would be recognized as needing improvement are instead treated as the final word.
  • by fasta ( 301231 ) on Tuesday September 18, 2007 @03:07PM (#20657637)
    Putting aside for a moment the question of whether genetic association studies - the focus of the research paper - are representative of "Most Science", the article does not say the analysis is invariably sloppy; it says it is often mistaken. For genetic association studies, this is not surprising, since it is very difficult to publish a negative result. So small studies that show a statistically significant relationship are published, but small studies showing no relationship are not. Then, when larger studies are done, the small studies that had a "significant" relationship because of a fortunate or unfortunate set of samples are not confirmed. Indeed, this is what the research article points out: if your threshold for statistical significance is 0.05, then you will report a chance relationship as significant once in 20 experiments. But if you can't publish the 19 negative experiments, then lots of chance results get published.

    But Dr. Ioannidis has a very narrow definition of science - he only includes statistical studies that use p < 0.05 as a threshold for significance. There are, of course, lots of papers that do not show p-values - the purification of a protein, the determination of a genome sequence, the identification of a new fundamental particle. In many cases, p-values are not provided because they are not considered informative - something that happens when the p-value is much, much less than 0.05. (I like my p-values smaller than one over Avogadro's number. With p-values like that, I think most of my results are correct.)

    And, of course, the WSJ misses all of this. The point of the research paper is that you can do everything right and still be misled by marginal p-values (p ≈ 0.05). Not sloppy, just not significant enough. We could, of course, require more stringent thresholds, but then we would miss the genuinely rare but important results.

    As the research article points out, results that are reproducible are, in fact, quite likely to be correct. It is perhaps useful to distinguish between science as a paper and science as a process. Most results that stand up to scientific scrutiny over a period of years (that anyone cares enough about to validate) are probably correct. In some disciplines, which rely heavily on modest thresholds for statistical significance, many results cannot be confirmed.
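fasta's one-in-twenty arithmetic can be checked by simulation: if journals mostly publish results crossing p < 0.05, and only a small fraction of tested associations are real, most published "positives" are noise. A sketch under assumed (hypothetical) rates for true effects, effect size, and study size:

```python
import math
import random

random.seed(1)

def study_p(effect, n=30):
    """p-value from one small two-arm study (normal approximation)."""
    a = [random.gauss(effect, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    ma, mb = sum(a) / n, sum(b) / n
    va = sum((x - ma) ** 2 for x in a) / (n - 1)
    vb = sum((x - mb) ** 2 for x in b) / (n - 1)
    z = (ma - mb) / math.sqrt(va / n + vb / n)
    return math.erfc(abs(z) / math.sqrt(2))

published_real = published_null = 0
for _ in range(10000):
    real = random.random() < 0.1          # assume 1 in 10 associations is real
    p = study_p(0.3 if real else 0.0)     # true effects are modest
    if p < 0.05:                          # journals only see "positives"
        if real:
            published_real += 1
        else:
            published_null += 1

frac = published_null / (published_null + published_real)
print(f"published positives that are pure chance: {frac:.0%}")
```

With rare true effects and small, underpowered studies, well over half of the "significant" published findings in this toy world are chance hits, even though every individual study was conducted honestly.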
  • Exactly. (Score:2, Insightful)

    by Estanislao Martínez ( 203477 ) on Tuesday September 18, 2007 @03:23PM (#20657937) Homepage

    Must be how the global warming myth started.

    Exactly. Incidentally, this is also the way that the no-global-warming myth started, too.

  • by Bluesman ( 104513 ) on Tuesday September 18, 2007 @03:29PM (#20658035) Homepage
    "Statistical analysis isn't difficult"

    I'm not sure what depth of statistical analysis you're talking about, but I've found statistical analysis to be exceptionally difficult when applied to Electrical Engineering, and I know I'm not alone here -- it was by far the most difficult post-graduate class at my school.

    I can confidently say that nobody who's graduating with me has a complete grasp of all of the statistical tools we were taught. Enough to get by, yes, but most of the things are extremely counterintuitive and easily misused, and this is by people who are really good at math.

    I have to laugh when I hear statistics in the news now, because it's all such propaganda. Any time you hear "has been linked to" in an article you know you're about to wade through a steaming pile of bullshit.

  • Clarification (Score:2, Insightful)

    by mlimber ( 1149251 ) on Tuesday September 18, 2007 @05:46PM (#20660519)
    I supplied this story to /., and I'd like to note that the title of the WSJ article is somewhat misleading since the researcher was concerned with medical studies, not science in general. (That is not to say, however, that some of the same problems of non-replication and unintentional data manipulation don't exist in other disciplines. However, one cannot draw conclusions on those fields from this work.) Also the global warming tag, which has now been removed, was not added by me and was inappropriate for the same reason.
