Science News

Most Science Studies Tainted by Sloppy Analysis

mlimber writes "The Wall Street Journal has a sobering piece describing the research of medical scholar John Ioannidis, who showed that in many peer-reviewed research papers 'most published research findings are wrong.' The article continues: 'These flawed findings, for the most part, stem not from fraud or formal misconduct, but from more mundane misbehavior: miscalculation, poor study design or self-serving data analysis. [...] To root out mistakes, scientists rely on each other to be vigilant. Even so, findings too rarely are checked by others or independently replicated. Retractions, while more common, are still relatively infrequent. Findings that have been refuted can linger in the scientific literature for years to be cited unwittingly by other researchers, compounding the errors.'"
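Ioannidis's core argument can be illustrated with a little arithmetic (a sketch of the reasoning, not a calculation from the WSJ article): given the fraction of tested hypotheses that are actually true, a study's statistical power, and its significance threshold, the expected share of "positive" findings that are correct can be surprisingly low. The numbers below are illustrative assumptions.

```python
# Sketch of the Ioannidis-style argument: the positive predictive value (PPV)
# of a "statistically significant" finding. All inputs are illustrative.

def ppv(prior, power, alpha):
    """Fraction of significant results that reflect a true effect.

    prior: fraction of tested hypotheses that are actually true
    power: probability a true effect is detected (1 - beta)
    alpha: false-positive rate (the significance threshold)
    """
    true_positives = power * prior
    false_positives = alpha * (1 - prior)
    return true_positives / (true_positives + false_positives)

# A speculative field: 1 in 10 hypotheses true, modest power, p < 0.05
print(round(ppv(prior=0.1, power=0.5, alpha=0.05), 2))  # 0.53
```

So even before any sloppiness or bias, roughly half of the significant findings in such a field would be wrong; poor design and self-serving analysis push the effective false-positive rate higher still.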
This discussion has been archived. No new comments can be posted.

  • by brteag00 ( 987351 ) on Tuesday September 18, 2007 @12:40PM (#20654605)
    It's not just medical research. The scientific community works like any other community: the greater the implications, the greater the scrutiny, attempts to replicate, etc. The Huang embryonic stem cell study is a great case-in-point: the image-manipulation fraud was uncovered because of the vast number of researchers looking at the micrographs he published. (That sounds familiar, doesn't it: "Many eyes make all bugs shallow.") Global warming has many, many people working on models, taking ice cores, doing other analysis. Of course, the vast majority of published research isn't reported in Science [sciencemag.org] or Nature [nature.com], and so it doesn't get as much exposure. That's why around here (the University of Wisconsin), it's standard practice that if your work depends on someone else's result, you first replicate her experiment and make sure you get the same result. (If you can't, you write a letter to the appropriate publication making note of your inability to replicate the result.) This means that eventually the mistake gets uncovered, and your research doesn't get burned because someone else has been sloppy.
  • Re:Yup. (Score:2, Interesting)

    by AKAImBatman ( 238306 ) <akaimbatman@gmaYEATSil.com minus poet> on Tuesday September 18, 2007 @12:41PM (#20654625) Homepage Journal

    Must be how the global warming myth started.

    Actually, it is. According to the scientists who reported in "The Great Global Warming Swindle", the concept of CO2 warming was a fairly small area of research that wasn't taken very seriously (and actually seen as a benefit to combat Global Cooling!) until Margaret Thatcher decided to fund CO2 warming research. She was a big proponent of Nuclear Power and saw the CO2 warming research as another bullet point for the advantages of nuclear power. Once there was money on the table for the research, things went downhill from there.

    If you haven't seen the documentary, I highly recommend it. One of the key issues they point out is that Gore's graph, because of its scale of several million years, concealed a slight problem with CO2 warming theory. Specifically, in the ice core samples used to create the data, CO2 levels rose about 800 years AFTER the temperature rose. The reason? The ocean can hold less CO2 when it is warmed (and it takes hundreds of years to cause an appreciable temperature change in a water body that large), so it stops being a carbon sink and actually rejects already-dissolved CO2. When the temperatures cool, the CO2 levels drop a few hundred years later as the ocean is able to reabsorb the CO2.
  • Re:"Most science..." (Score:3, Interesting)

    by AJWM ( 19027 ) on Tuesday September 18, 2007 @12:50PM (#20654845) Homepage
    Doesn't surprise me. Most people who go into the "fuzzy" vs the "hard" sciences do so because they're not good at or don't like math. (Yeah, I know, broad generalization.)

    And as math goes, statistics can be pretty darn counterintuitive.

    (I speak from experience - I worked for a few years in a university computer center's "academic support group", where among other things I helped faculty and grad students run statistical analysis packages. Some of the experimental designs were pretty bad, too.)
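One concrete way statistics bites researchers (a hypothetical sketch, not from the comment above): run enough independent tests at p < 0.05 and spurious "findings" become near-certain, even when no real effect exists anywhere in the data.

```python
# Probability of at least one false positive when running many independent
# significance tests at level alpha, with no true effects present.

def family_wise_error(alpha, n_tests):
    """Chance that at least one of n null results crosses the threshold."""
    return 1 - (1 - alpha) ** n_tests

for n in (1, 10, 20, 60):
    print(n, round(family_wise_error(0.05, n), 2))
```

With 60 comparisons at the conventional 0.05 threshold, at least one "significant" result is all but guaranteed, which is why uncorrected multiple comparisons are a classic source of the flawed findings the article describes.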
  • Re:Yup. (Score:1, Interesting)

    by Anonymous Coward on Tuesday September 18, 2007 @12:56PM (#20654967)
    This "lag" is well understood [realclimate.org].
  • by call -151 ( 230520 ) * on Tuesday September 18, 2007 @12:56PM (#20654981) Homepage
    There are a lot of different attitudes about the role of the anonymous referee, in different fields and in different settings. In computer science and mathematics, where most of my publications are, the role of the referee depends upon a number of things. A few comments relevant to my disciplines:

    • The responsibility for correctness lies with the author, not the referee. It is good if the referee spots problems but it is not the obligation of the referee to certify that every last detail is correct.
    • Often, the primary responsibility of the referee is to comment on the importance, priority, relevance and how much interest there is in the work.
    • In the CS world of conference refereeing (as opposed to CS journals) there is often absurd time pressure. Articles/abstracts are due at midnight local time on some date, so things are typically hastily written, and referees must review things in a very short timeframe and practically never get a chance to check things carefully. As far as I am concerned, the conference publication model in CS is terribly broken. There have been some calls for reform, but those have been coming for at least the last 10 years or so and over that period it's gotten worse, not better.
    • In math, it can take a year for a referee to work through something technical, so the process is slow.
    • Typically, referees are uncompensated for their work. Some people take their refereeing duties seriously, and some do not. Generally, those who do a good job in a timely fashion are asked more often to referee more things, which is not exactly a reward.

  • by brteag00 ( 987351 ) on Tuesday September 18, 2007 @01:04PM (#20655145)

    1) What is the usual failure rate for replication?

    2) Do the letters routinely get published?

    3) You just do that for work you're following up with experiments, not for everything you cite, right?

    Unfortunately, I'm not in a hypothesis-driven lab [wisc.edu], so I can't speak to any of these from direct experience. I know that I routinely see such letters published (frequently as "technical comments"), and I know that I go to seminars and routinely see people get raked over the coals for not having verified someone else's results. The only time you would do so, though, is for results on which your work depends directly.

    Of course, there are perfectly valid reasons why that validation might fail, reasons that have nothing to do with someone being sloppy or deceitful. For example, many oft-used cell lines mutate as they are cultured, so your flask of MCF-7 breast cancer cells might not behave the same as the MCF-7 cells used across campus.

  • by Anonymous Coward on Tuesday September 18, 2007 @01:06PM (#20655175)
    That might be a better title for this article. Journalists love to take findings from scientific literature and mangle, misconstrue, and misinterpret them. Let's see... a journalist finds a biologist who looks at a particular subfield of genetics, and somehow manages to blow that up into "most published research findings are wrong."

    Never mind that there are thousands of journals in enormously diverse fields - biology, chemistry, physics, all the branches of engineering. We can take a sample of 432 articles on a single subject, and extrapolate that to state that ALL scientific research is 'probably' wrong.

    Way to further undermine the value of science in the eyes of the layman, jackass.
  • by rumblin'rabbit ( 711865 ) on Tuesday September 18, 2007 @01:12PM (#20655327) Journal
    Universities are pumping out PhD's at a prodigious rate. As a manager of R&D, I've interviewed and hired more than my share. Virtually all say they want to do research.

    Here's my problem. Only a fraction (I'm guessing 1 out of 5) are actually capable of doing good research. The rest are competent employees for developing other people's research into useful products, but they aren't terribly original thinkers, nor do they show much initiative or the rigour and clarity of thought one wants to see in a researcher.

    Frankly, when I "unleash" employees on open-ended problems without much guidance, the majority soon begin to flounder.

    There is nothing wrong with getting advanced degrees, but many then feel they are obliged to do original research when in fact they really aren't up to it. This may be one reason why the quality of papers isn't where it should be.
  • Re:Sensationalist... (Score:3, Interesting)

    by budgenator ( 254554 ) on Tuesday September 18, 2007 @01:43PM (#20655987) Journal
    In the Army they taught us that when doing an inspection, if the paperwork was written in two different colors of ink, the last bit was in pencil, and the paper itself looked like it had been caught in the rain, folded up, and carried in someone's back pocket for three days, it's probably authentic - so be suspicious if the paperwork is too neat and clean. I see science the same way: if there are no arguments about the data or conclusions, and everybody is talking about the majority and consensus, I again get suspicious. I want to see a couple of warts. It doesn't matter how impeccable the logic is when the premises are wrong.
  • Re:"Most science..." (Score:3, Interesting)

    by Otter ( 3800 ) on Tuesday September 18, 2007 @02:04PM (#20656389) Journal
    Most people who go into the "fuzzy" vs the "hard" sciences do so because they're not good at or don't like math.

    Actually you need more math for a PhD in psychology or sociology than you do for molecular biology, unless you're in a really math-heavy specialty.

    The biologists, being generally smart, can usually pick up what they need (math, programming, equipment building) on the fly. The problem is that, as you say, statistics frequently is counterintuitive.

  • Re:How is this news? (Score:3, Interesting)

    by Metasquares ( 555685 ) <{moc.derauqsatem} {ta} {todhsals}> on Tuesday September 18, 2007 @02:46PM (#20657231) Homepage
    I've come to the conclusion that most laypeople are incapable of reading scientific literature. The usual response when I show a paper to someone is "well, I almost understand the title". The solution, then, is to make the papers more accessible. But do that, and peer reviewers complain about the wording and "scholarly writing" (I tried). Given the choice of audiences, it makes more sense to side with other scientists, I'm afraid.

    If I had the patience, I'd write two versions of each one of my papers, but that's a lot of extra effort when most people won't bother to read them anyway.
  • by Mutatis Mutandis ( 921530 ) on Tuesday September 18, 2007 @03:12PM (#20657725)

    I have worked in a biotech / pharmaceutical environment for over five years now, and I don't trust the average medical researcher or biologist to accurately calculate the weight of a kilogram of stuff. I would say that their data analysis is habitually poor, if I were not convinced that it is actually habitually awful.

    I have been trying to change this for five years. My success in this has been such that it contributed strongly to my recent decision to start searching for another job. The reality is that biomedical researchers simply do not believe in doing mathematical analysis of data properly. They consider it an eccentric habit, forgivable but socially objectionable, like smoking. By common consensus, it is considered much too complicated for any of them to be expected to understand. Your average biologist is innumerate to the nth degree, and proud of it.

    I blame their education, which seems to stress naive and antediluvian (excuse the word) analysis practices, if it stresses any at all. I have seen course materials whose expression of basic mathematical formulas betrayed that they had been left unchanged since the days when people used slide rules and logarithmic tables for calculations. Most of their other training is strictly qualitative, not quantitative, and focused more on memorizing than on understanding.

    If necessary, they will find a crutch to help themselves stumble along: find a paper that defines a formula that looks relevant, and then fill in the numbers. They would not bother doing their own analysis, or trying to understand how the calculation works or whether it is relevant at all. The notion that a good statistical analysis or mathematical model can actually contribute to the scientific understanding of an issue is well beyond most of them.

    I am, frankly, sick and tired of their attitude, and I still have to work with these people every day. And in my experience, my colleagues are actually better than most. I strongly suspect the WSJ is correct on this one.

  • Consensus (Score:3, Interesting)

    by huckamania ( 533052 ) on Tuesday September 18, 2007 @03:21PM (#20657905) Journal
    Before there is consensus on an issue, there is contention. Before contention there is no theory at all. Only at the consensus phase can a majority of 'scientists' be correct. During the contention phase, there is the old theory that didn't include the new theory or which may in fact preclude the new theory. In both cases, if the new theory is correct, then the old theory was not. At least part of the pile of peer-reviewed papers for the old theory can now be viewed as incorrect.

    What's interesting is when an old theory that was passed over is resurrected and proven correct, often to the consternation of the consensus.
  • by Rimbo ( 139781 ) <rimbosity@sbcgDE ... net minus distro> on Tuesday September 18, 2007 @04:25PM (#20659211) Homepage Journal

    The unwillingness of actual scientific communities to challenge the misapplication of their methods by unscientific ones has led to a dilution of the authority of science as a whole. ...

    There's only one way to deal with Cargo Cult Scientists. You have to call them out. You have to show how flimsy and false their supposed science really is.


    I agree with you. There are two reasons why your method does not happen more often.

    The first is that failure in science is perceived to be a failure of reason. Almost all societies have entrenched anti-intellectual subcultures; this makes it hard enough to make legitimate science accepted in society. This is one thing that makes scientists hesitant to call out their peers' mistakes. The irony in this is that they and their peers become themselves part of the anti-intellectual subculture by perverting science in this way.*

    The second reason is that we are social animals. Our survival is based on forming good relationships with others. Scientists, like all others, succeed based largely on their ability to be a good member of a team in their field, in their university/business and in their chosen departments. If you publicly "call out" a peer for being mistaken, you will offend him, and he will become both defensive and resentful of you. And as a recent Slashdot-sponsored study shows, making the statement "X is not true" frequently has the effect of reinforcing the statement "X is true."** In other words, calling someone out usually only serves to piss people off and reinforce the false statement, and it's bad for your career as a scientist as well.

    There are ways to correct people when they are mistaken, but the time-tested way that works [amazon.com] is to do so with the individual in private, to begin by pointing out where one is right, and to give them information so that they will come to the conclusion that they are wrong on their own. Unfortunately, such social skills are generally not taught in school.

    It's a tough problem, maybe unsolvable. But we can make things better by doing a better job individually and, as intellectuals and men of reason, choosing not to give up just because the lunatics are running the asylum.

    * To some extent, one can blame the decrease in intellectual approaches to religion for this. The belief that science must be intellectual and that religion cannot be polarizes both. For religion it's a self-fulfilling prophecy, as intellectuals both run away from and are booted out from churches. But for Science, it ironically has the same effect of eliminating proper principles of reason from it, rather than improving the level of discourse.

    ** The solution, then, is to make a statement of the form "Y is true," where Y is some positively-worded statement. For example, if it is 90 degrees Fahrenheit, and someone says "It's cold outside," rather than correcting him with "It's not cold outside," make the statement "It's hot outside."
  • by geekoid ( 135745 ) <dadinportland&yahoo,com> on Tuesday September 18, 2007 @04:52PM (#20659671) Homepage Journal
    Getting a paper published is the very first step in peer review, not the final word.
    So yes, this might be a problem, but when other peers review it, the problems are likely to get pointed out.
    A peer-reviewed paper isn't a paper that HAS been peer reviewed; it's one that is being peer reviewed.

    Yes, I know that was redundant, but people forget it all too often.

    Another reminder - Scientists live to disprove hypotheses and theories.
  • by Jerry ( 6400 ) on Tuesday September 18, 2007 @06:07PM (#20660825)
    That was the title of a NOVA film in 1998.

    'Abstract: This video examines the troubling question of scientific fraud: How prevalent is it? Who commits it? And what happens when the perpetrators are caught? Factors contributing to "bad science" include sloppy research, personal bias, lack of objectivity, "cooking and trimming", "publish or perish" pressure, and outright fraud. The limits of peer review and other quality control systems are discussed.'

    The results of the study determined that 48% of all published data was fraudulent. The data was trimmed, cooked or outright falsified. Some cases made famous by public exposure were analyzed.

    While the report received a lot of lip service from establishment science, the two government researchers who made it were reassigned to worthless tasks in isolated areas. One was sent to shuffle papers in Alaska, IIRC. So much for whistleblowers, even government whistleblowers.

    In the last 19 years it seems nothing has changed. Besides this latest report, how can I tell? Simple. The news is filled with stories of drugs being recalled because they are more dangerous than the problems they are supposed to treat. How would they ever have gotten on the market in the first place if their FDA "studies" weren't rigged? And don't you wonder about the revolving-door policy between pharmaceutical employees and FDA employees? Corporate influence in research is as corrupting as Microsoft's influence in ISO standards voting.

    What really burns me is that MUCH of our basic research is done at academic institutions by professors funded by government grants, i.e., taxpayers. But, thanks to the best congress that money can buy (because most of them have been bought off), OUR research is "monetized" (sold to special interests) for pennies on the dollar. These interests then reap HUGE license profits for decades. To make matters worse, many of the "special interests" are the very academic researchers who were paid to do the work. Having discovered key facts, without reporting them, they resign from academia and start a corporation to capitalize on what we paid them to learn.

    IF we had a congress worth what they are paid, there would be a law prohibiting recipients of government grants, or their families, relatives, or former business associates, from personally benefiting from what they learned using that grant money for a period of 15 years. Secondly, the ONLY corporations that should be allowed to receive IP licenses from the government should be NON-PROFITS, whose board, management, or employees cannot include the professor or his family or relatives.

    Another thing that this recent study shows is what the NOVA film revealed: Peer-review is worthless for vetting research. Replication is worthless for vetting research. Obviously, personal integrity is also a worthless indicator of research quality.

  • Re:The CO2 lag (Score:3, Interesting)

    by Maniakes ( 216039 ) on Tuesday September 18, 2007 @06:20PM (#20660967) Journal
    So why didn't the global temperature spiral out of control and turn Earth into a second Venus, the last time this vicious cycle got started?

    There are other feedback mechanisms that act in the other direction. Higher temperatures mean the air can hold more moisture, which leads to more clouds (reflects sunlight back into space), and more rainfall. More rain and more atmospheric CO2 leads to more plant growth, which sequesters the surplus CO2 into biomass and eventually counteracts the CO2 release from the oceans.
  • by Alcyoneus ( 1107533 ) on Tuesday September 18, 2007 @08:47PM (#20662461)
    I just read Feynman's speech on Cargo Cult Science. It's excellent. He's not talking about just pseudo-science, but the everyday practice of mainstream science.

    Here's a quote.

    All experiments in psychology are not of this type, however. For example, there have been many experiments running rats through all kinds of mazes, and so on -- with little clear result. But in 1937 a man named Young did a very interesting one. He had a long corridor with doors all along one side where the rats came in, and doors along the other side where the food was. He wanted to see if he could train the rats to go in at the third door down from wherever he started them off. No. The rats went immediately to the door where the food had been the time before.

    The question was, how did the rats know, because the corridor was so beautifully built and so uniform, that this was the same door as before? Obviously there was something about the door that was different from the other doors. So he painted the doors very carefully, arranging the textures on the faces of the doors exactly the same. Still the rats could tell. Then he thought maybe the rats were smelling the food, so he used chemicals to change the smell after each run. Still the rats could tell. Then he realized the rats might be able to tell by seeing the lights and the arrangement in the laboratory like any commonsense person. So he covered the corridor, and still the rats could tell.

    He finally found that they could tell by the way the floor sounded when they ran over it. And he could only fix that by putting his corridor in sand. So he covered one after another of all possible clues and finally was able to fool the rats so that they had to learn to go in the third door. If he relaxed any of his conditions, the rats could tell.

    Now, from a scientific standpoint, that is an A-number-one experiment. That is the experiment that makes rat-running experiments sensible, because it uncovers the clues that the rat is really using -- not what you think it's using. And that is the experiment that tells exactly what conditions you have to use in order to be careful and control everything in an experiment with rat-running.

    I looked up the subsequent history of this research. The next experiment, and the one after that, never referred to Mr. Young. They never used any of his criteria of putting the corridor on sand, or being very careful. They just went right on running the rats in the same old way, and paid no attention to the great discoveries of Mr. Young, and his papers are not referred to, because he didn't discover anything about the rats. In fact, he discovered all the things you have to do to discover something about rats. But not paying attention to experiments like that is a characteristic example of cargo cult science.
