
Misconduct, Not Error, Is the Main Cause of Scientific Retractions 123

ananyo writes "One of the largest-ever studies of retractions has found that two-thirds of retracted life-sciences papers were stricken from the scientific record because of misconduct such as fraud or suspected fraud — and that journals sometimes soft-pedal the reason. The study contradicts the conventional view that most retractions of papers in scientific journals are triggered by unintentional errors. The survey examined all 2,047 articles in the PubMed database that had been marked as retracted by 3 May this year. But rather than taking journals' retraction notices at face value, as previous analyses have done, the study used secondary sources to pin down the reasons for retraction if the notices were incomplete or vague. The analysis revealed that fraud or suspected fraud was responsible for 43% of the retractions. Other types of misconduct — duplicate publication and plagiarism — accounted for 14% and 10% of retractions, respectively. Only 21% of the papers were retracted because of error (abstract)."

  • Publish or perish (Score:5, Interesting)

    by i kan reed ( 749298 ) on Tuesday October 02, 2012 @01:06PM (#41528503) Homepage Journal

    "Get only positive results or never get tenure" is a policy that dooms us to this exact course. Publishing is no longer a consequence of having a brilliant idea, but rather a means to an ends(keeping your job). The academic community needs to find another metric for researcher quality other than papers published. It's costing everyone the truth.

    • by spikenerd ( 642677 ) on Tuesday October 02, 2012 @01:21PM (#41528731)

      ...The academic community needs to find a metric for researcher quality other than papers published...

      such as?

      Number of citations? No, it would take a 30-year probationary period before the trend was reliable.
      Have experts evaluate your efforts? No, that would require extra effort on the part of expensive tenured experts.
      Roll some dice? Hmm, maybe that could work.

      • by Anonymous Coward

        Roll some dice? Hmm, maybe that could work.

        I have the sudden urge to make a D20 modern "Tenure of Educational Evil" (If you don't get it, look up Temple of Elemental Evil) and propose that any researcher must be able to take their level 1 researcher through the module to get any funding. (and funding based on how many objectives they complete along the way)

      • Have an annual fight to the death for tenure.

        As an added byproduct, I bet (a) your average tenured professor would start to look a bit different, and (b) you gotta bet they would be taking way less shit from students and TAs...

        Seriously though, I know it has been discussed in some circles that not every university should be structured in the same way. For the most part, most are more or less training centres rather than places of deep discovery.

        Tenure and papers might make sense if your primary goal is the

      • by JWW ( 79176 )

        How about calculated gross earnings of the students you have taught?

        Of course that puts a value on teaching, which is something being discouraged for tenured faculty (which I obviously don't agree with).

        Or you could measure net income from licensing of IP for the creation of new technology. Notice I said licensing and creation; I wouldn't want to encourage our universities to continue acting like asses, pursuing IP lawsuits as a means to make money.

        • by interkin3tic ( 1469267 ) on Tuesday October 02, 2012 @03:54PM (#41530729)

          How about calculated gross earnings of the students you have taught? Of course that puts a value on teaching, which is something being discouraged for tenured faculty (which I obviously don't agree with).

          That would also make it in new professors' best interests to not teach the intro level courses where much of the class will change majors and doesn't want to be there in the first place. They'll instead focus on the upper level courses where the weak links have been weeded out.

          Guess which courses new faculty get stuck doing now? That would be rewarding the really weaselly ones who were able to skip the hard work.

          Furthermore, the students don't care about quality teachers, or else they'd be going to smaller schools known more for teaching than for research grants. They're voting with their wallets for schools where research is valued more than instruction. So your solution is lacking a problem, at least according to the teachers and students of such schools.

          • by khallow ( 566160 )

            Furthermore, the students don't care about quality teachers, or else they'd be going to smaller schools known more for teaching than for research grants.

            Why? The respective research school selling point is that the teaching is better because the faculty is top notch.

            • Why? The respective research school selling point is that the teaching is better because the faculty is top notch.

              Huh? Where did you get the idea that a good research faculty means a good teaching faculty? There's too much pressure on research faculty to do research to expect them to spend a lot of their time concentrating on teaching. (Yeah, some research faculty are good teachers, but there is no causation.)

              You go to a research school if you want to get involved in research, because that's where the student jobs in research are. You go to a teaching school if you want to learn, because as an undergraduate you aren't g

              • by khallow ( 566160 )

                Where did you get the idea that a good research faculty means a good teaching faculty?

                Note the use of the phrase "selling point" in my original post. Where does anyone get the impression that research means better teaching and/or a better education? From research schools marketing that angle.

                Fundamentally, your assertion that "the students don't care about quality teachers" is based on flawed premises. Prospective college students aren't well known for understanding the nuances of a college and an education. So why expect them to "know" that "smaller schools known more for teaching" have

          • Most students are going to choose institutions where the certificate they get at the end will have the highest prestige attached. Now if certificates from universities with better teaching would provide higher prestige (which would make sense, because the students from there should be better educated, after all), then students would select the universities with the best teaching, and thus universities in turn would have a higher interest in improving their teaching in order to get more students.

        • But good teaching is not a good metric of quality of research. The goal of research is discovery rather than dissemination of information.
      • Why do we need to rate researcher quality in the first place? To label a scientist as first grade, second grade, third grade? Can't we just rate each piece of research instead? We've got a lot of examples of (so-called) mediocre researchers who had a brilliant idea later in their life, while many young promising scientists produced very little after a good start.
    • Re: (Score:3, Interesting)

      by raydobbs ( 99133 )

      With the public retreat from education, universities have to take their funding from more private sources. As a result, there is outside pressure to do research that favors these outside sources of funding, and you get a recipe for fraud and misconduct. Of course, the universities won't admit that they have had to make a deal with the devil to keep the doors open - and a large part of our (United States) political system is dead-set on taking us backward in terms of scientific progress to appease their less-

      • Re: (Score:2, Interesting)

        Well, you've got the devils on the corporate side, who may be trying to avoid bad press, say large organic potato farmers who don't want to see studies that show the deleterious effects of carbohydrate intake on obesity, diabetes, heart disease and other chronic diseases. Fewer carbs sold means less profit to the company.

        But then, you've got the devils on the government side, who also may be trying to avoid bad press, say the USDA regulators who don't want to see studies that show the deleterious effects of

      • by khallow ( 566160 )

        With the public retreat from education, universities have to take their funding from more private sources.

        Last I checked, there was no such retreat from education. There's been a remarkable decline in the quality of education and what public funds buy. But it's a dangerous illusion to claim that there has been a retreat from education when the problem is elsewhere.

    • Re: (Score:2, Interesting)

      by jamesmusik ( 2629975 )
      Journals don't only publish papers reporting "positive results," whatever that may be. Even if your study comes out a way you didn't expect, if you did it right, you should still be able to get it published. There's something beyond publish or perish that is at work here.
      • by blueg3 ( 192743 ) on Tuesday October 02, 2012 @01:35PM (#41528911)

        Journals don't only publish papers reporting "positive results," whatever that may be. Even if your study comes out a way you didn't expect, if you did it right, you should still be able to get it published. There's something beyond publish or perish that is at work here.

        That's what you might think, but getting (most) journals to publish negative results is very difficult.

      • by Anonymous Coward on Tuesday October 02, 2012 @01:37PM (#41528939)

        Journals don't only publish papers reporting "positive results," whatever that may be

        HAHAHAHAHAHAHAHAhahahaha... ha... oh, oh I'm sorry, but that's funny.

        Yes. Journals have a very long history of not publishing 'negative results'. (id est: "We tested to see if X happened under situation Y, but no it doesn't.") Mostly because it's not 'interesting'.

        If you want a good example of this, check out the medical field, where the studies which don't pan out aren't published, the ones which do are, leading to severely misleading clinical data [ted.com], and it leads to problematic results.

        • Re: (Score:2, Interesting)

          The problem with biomed research is that the field is rife with people who don't understand models. Biomed research is not really science in that we are not yet at the point where we can express mathematical models to make predictions which are then falsified or not.

          All too often, it is a case of "I knock down/over-express a gene, find that it does something, and then make up some bullshit where I pretend it'll cure cancer". In many cases, articles get published because the reviewers don't say "this claim i

      • Re:Publish or perish (Score:5, Interesting)

        by js33 ( 1077193 ) on Tuesday October 02, 2012 @01:52PM (#41529141)

        A positive result is the rejection of a null hypothesis. In the frequentist statistical paradigm, a failure to reject the null hypothesis is simply not significant. Insignificant results are not usually considered worthy of publication. "If your study comes out a way you didn't expect," then the way you expected your study to come out is a null hypothesis which can supposedly be rejected with some measurable degree of significance. This way you can explain the significance of what you learned from the "failure" of your experiment, and there is no reason you should not be able to publish it.

        That's the statistical paradigm. Results just aren't significant unless you can state them in a positive way.
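
        To make that framing concrete, here is a minimal sketch (hypothetical numbers, using SciPy's two-sample t-test; my own illustration, not from the article) of what "positive" means in this paradigm: a result counts as significant only if the null hypothesis of no difference is rejected at the chosen alpha.

          from scipy import stats

          # Hypothetical measurements from a control group and a treatment group.
          control = [5.1, 4.9, 5.0, 5.2, 4.8, 5.0]
          treated = [5.6, 5.4, 5.7, 5.3, 5.5, 5.6]

          # Null hypothesis: both groups have the same mean.
          t_stat, p_value = stats.ttest_ind(control, treated)

          alpha = 0.05
          if p_value < alpha:
              print(f"p = {p_value:.4f}: null rejected -- a 'positive', significant result")
          else:
              print(f"p = {p_value:.4f}: null not rejected -- an 'insignificant' result")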

        • then the way you expected your study to come out is a null hypothesis which can supposedly be rejected with some measurable degree of significance.

          You have to be very careful here. In serious studies, you don't get to choose your null hypothesis or how you're going to analyse the data after collecting it. That's a textbook example of introducing confirmation bias.

          • by js33 ( 1077193 )

            You have to be very careful here.

            Yes you do.

            In serious studies, you don't get to choose your null hypothesis or how you're going to analyse the data after collecting it. That's a textbook example of introducing confirmation bias.

            There is also the danger of making an unjustified assumption of objectivity. In preliminary studies, scientists will have gathered data, analyzed it, looked for patterns, and tried to come up with all kinds of hypotheses that could be tested. Even the most final, definitively ob

            • often enough the scientist will have a pretty good idea of the expected nature of the data to be collected.

              Sure. I was clarifying that you can't just change to a different method of analysis according to the data you got, just because your original analysis didn't give you the result you expected.

              In other words, you simply can't change your plan once you've seen the data. In this lecture [lhup.edu], Feynman gives a very clear and real example of what can go wrong even if you're trying to be completely honest (look for the part where he talks about Millikan). That's a neat example because it's a very controlled experiment m

              • by Eskarel ( 565631 )

                I think the OP was referring more to the fact that if your hypothesis is that giving drug X to rats will turn their hair green, and instead drug X causes them, with a reasonable degree of statistical significance, to grow a third ear, you might still have a paper. What you thought was the case was wrong, but you still got an interesting result.

                On the other hand the result "nothing happened" or "the rats all died" isn't necessarily all that interesting unless there was an exist

      • http://www.econtalk.org/archives/2012/09/nosek_on_truth.html [econtalk.org]

        Listen. Learn. Positive, sensational results get published. Negative results only get published if they contradict a well known study/result/notion. Of course there are exceptions, as always.
      • Positive results are interesting results. Let's say you prove that watermelons cause 80% of all cancers; that is headline-grabbing, front-page news. The report detailing how you proved watermelons have no link to cancer is not.

        Similarly, your scientific paper on how to cure cancer is worth billions and extremely interesting news, while your paper on how NOT to cure cancer is neither.

    • It's also a great way to keep the grant money flowing in even after you have tenure, particularly if you're publishing findings that are likely to get you grants. And no one gets paid a nice bonus for finding inconclusive or negative results.

    • Re: (Score:3, Interesting)

      by scamper_22 ( 1073470 )

      It's not the academic community that is at fault. It is our society.

      I've long held the view that science only gained the credibility it has because it was free from politics and power.

      But since science has gained such credibility, people think we should now *trust* it with power. Which of course destroys the very thing that gave it that trust. Ye olde saying: 'power corrupts, and absolute power corrupts absolutely'.

      For one thing, we now have government funding for science. Sounds like a good idea... except of

    • "Get only positive results or never get tenure" is a policy that dooms us to this exact course. Publishing is no longer a consequence of having a brilliant idea, but rather a means to an ends(keeping your job). The academic community needs to find another metric for researcher quality other than papers published. It's costing everyone the truth.

      I think the issue is not that they need a new metric for researcher quality but to realize that not every professor needs to be an active researcher their whole career.

    • It's not just tenure; even getting a good faculty position depends on publication of research in high-impact journals. I think the major conflict of interest in my field, basic biomedical research, is not funding by pharma etc., but rather the necessity to generate data suitable for publishing in big journals to get/keep jobs. I'm pretty sure this is going to get very messy if something isn't done to address the problem.
    • by Gerzel ( 240421 )

      Aye. The result isn't surprising at all. In fact it is one of the big reasons why science demands that results be reproducible and methods be published.

    • "Get only positive results or never get tenure" is a policy that dooms us to this exact course. Publishing is no longer a consequence of having a brilliant idea, but rather a means to an ends(keeping your job). The academic community needs to find another metric for researcher quality other than papers published. It's costing everyone the truth.

      This sounds an awful lot like something teachers told me in grade school:

      "Show me you're busy working on something or I'll send you to the office."

      Then, an awful lot like something I heard when working during the dotcom era:

      "We need to look like we're doing busy work or we'll all get canned."

  • by somaTh ( 1154199 ) on Tuesday October 02, 2012 @01:08PM (#41528535) Journal
    I've read the abstract and several stories that cite it, and I haven't seen the specific numbers that would make this story more relevant. They talk about the number of retractions being up sharply, and the number of those pulled for "misconduct" being up as well. The abstract and other sources have yet to put either number in relative terms. Of the number of papers published, is the percentage of those papers that are retracted up? Of those retracted, is the percentage of retractions due to misconduct up?
    • by ceoyoyo ( 59147 )

      That's not the point. The point is that journals need to be clearer about why a paper is retracted. Fraudsters shouldn't be able to hide behind the assumption that a vague retraction notice means someone made an honest error. The authors specifically state that they cannot make statements about the fraud rate because they don't have a good measure of the total number of papers published.

    • Before I judge this paper, I'll first wait some time whether it will get retracted because of misconduct. :-)

    • The first figure of the PNAS paper shows that less than 0.01% (maybe 0.008%) of all published papers are retracted for fraud or suspected fraud, and that this has been increasing since 1975 (when it was maybe around 0.001%). The authors state that the current number is probably an underestimate because not all fraud is detected and retracted. It is possible that the 1975 numbers are less representative, since fraud might have been harder to detect (at least for duplicate publication and plagiarism).
    • This article [guardian.co.uk] has the title "Tenfold increase in scientific research papers retracted for fraud", but at least mentions some actual numbers:

      In addition, the long-term trend for misconduct was on the up: in 1976 there were only three retractions for misconduct out of 309,800 papers (0.00097%) whereas there were 83 retractions for misconduct out of 867,700 papers at a recent peak in 2007 (0.0096%).
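
      A quick arithmetic check of the quoted figures (a throwaway Python snippet using only the numbers from the Guardian piece above):

        retractions_1976, papers_1976 = 3, 309_800
        retractions_2007, papers_2007 = 83, 867_700

        print(f"1976: {100 * retractions_1976 / papers_1976:.5f}%")  # ~0.00097%
        print(f"2007: {100 * retractions_2007 / papers_2007:.4f}%")  # ~0.0096%
        print(f"ratio: {(retractions_2007 / papers_2007) / (retractions_1976 / papers_1976):.0f}x")  # roughly tenfold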

      Percentage-wise, we're talking about a very small number of papers. They quote one of the authors:

      "The better the

  • by peter303 ( 12292 ) on Tuesday October 02, 2012 @01:12PM (#41528587)
    I don't know if more students cheat now than when I attended grade school in pre-internet days, but the ease and temptation with the web is greater now. Surveys I read suggest at least half of students cheat.
    The mystery has been how one progresses from a cheating culture in grade school, then loses it by the time one reaches grad school and a professorship. Apparently some don't escape this culture. Significant science will eventually face replication attempts, and the fraud will be discovered.
  • Just stupid (Score:3, Insightful)

    by UnresolvedExternal ( 665288 ) on Tuesday October 02, 2012 @01:13PM (#41528617) Journal
    What surprises me is that these scientists actually weigh the risk/reward in favour of damned lies. Fifteen minutes of fame, then a dead career.
    • Well, FWIW, if Hendrik Schön [wikipedia.org] hadn't gotten stupid and made some pretty massive (and physics-defying) claims in his papers, and had instead stuck with semi-muddy results that looked pretty (as opposed to sexy) but were harder to replicate, his career would have likely lasted years, if not decades, before he got caught.

      It all depends, from the fraudster's point-of-view, whether he wants rockstar status, or to make a comfortable living...

      • Eh, even if he had made up realistic-looking data, there were a lot of other red flags: not saving raw data or samples, no one else making measurements, all other groups unable to reproduce results, etc. In retrospect, it sounds like it only went on that long because he was at a private lab, but I see what you mean.
  • by Anonymous Coward

    That would be too ironic.

  • by jabberwock ( 10206 ) on Tuesday October 02, 2012 @01:16PM (#41528659) Homepage
    I think that the article implicitly misrepresents the level of misconduct by leaving out some relevant statistics. ... More than 2,000 scientific articles, retracted! And ... fraud! ... plagiarism!

    In context -- PubMed has more than 22 million documents and accepts 500,000 a year, according to Wikipedia.

    So, to do the math: Number of fraudulent articles, total, = vanishingly small percentage of the total articles.
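
    For what it's worth, here is the back-of-the-envelope version of that math in Python, using only the figures already quoted in this thread (2,047 retracted articles, roughly 22 million PubMed entries, roughly 500,000 new entries per year):

      retracted_total = 2_047         # retracted PubMed articles examined in the study
      pubmed_total = 22_000_000       # approximate PubMed size (per Wikipedia, as cited above)
      pubmed_per_year = 500_000       # approximate yearly intake (per Wikipedia, as cited above)

      print(f"retracted / all PubMed articles:    {100 * retracted_total / pubmed_total:.4f}%")     # ~0.0093%
      print(f"retracted / one year's submissions: {100 * retracted_total / pubmed_per_year:.2f}%")  # ~0.41%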

    • Re: (Score:2, Interesting)

      by Anonymous Coward

      Yep. That's a very important, and very *missing* bit of information. Even if *ALL* of the retracted articles were for *blatant* and *intentional* misconduct (not duplicate publication), and all of them were published in the same year, and all of them were in PubMed, that would be a whopping 0.4% fraud rate.

      It boggles my mind that this number wasn't asked for by the article's author.

      Well, it *should*, but instead I'm just getting more cynical and assuming either incompetence (the author is writing about so

      • by Anonymous Coward

        It boggles my mind that this number wasn't asked for by the article's author.

        Not me. There's a blatant and obvious movement going on to discredit science in general. No one mentions how much different medical science is from physical science when they talk about this either. Find one bad scientist and they think they've won. Guilt by association.

        • by khallow ( 566160 )

          It boggles my mind that this number wasn't asked for by the article's author.

          Not me. There's a blatant and obvious movement going on to discredit science in general. No one mentions how much different medical science is from physical science when they talk about this either. Find one bad scientist and they think they've won. Guilt by association.

          Here's a case in point. Someone does research on fraud in science, and the first thing that you and the parent poster think is, "What is the ulterior motive?" That's just another anti-scientific attitude.

    • Re: (Score:2, Informative)

      by Anonymous Coward

      > So, to do the math: Number of fraudulent articles, total, = vanishingly small percentage of the total articles.

      Those are only the ones that get discovered. I roll my eyes often when I read medical papers. The statistics are frequently hopelessly muddled (and occasionally just plain wrong on their face), the studies are set up poorly (as in, I would expect better study designs from first-year students), or they are obvious cases of plagiarism.

      EX: does early fertility predict future fertility? "We div

    • by Hentes ( 2461350 )

      Assuming that all fraudsters get caught, which, knowing the situation of medical science, is very far from the truth. The paper doesn't talk about the number of erroneous articles, only the ratio between the number of frauds and the number of genuine mistakes.

      • Respectfully, you're compounding the error by referring to "all fraudsters" and the "situation of medical science," implying, by language, that this is a much larger problem than statistics show when considering the enormous volume of scientific articles. I'm not a scientist but I'm very good at interpreting numbers.

        I didn't say that fraud does not exist, or that there isn't pressure to produce publishable results that might affect accuracy or ethics (on occasion.) I said that this is a much smaller proble

    • http://pmretract.heroku.com/byyear [heroku.com]

      Seems like despite the small percentage (say, like CO2 ppm in the atmosphere), the trend is alarming.

    • Maybe the summary does. The article itself states pretty revealing numbers: 96 in a million vs. 10 in a million. These are hardly scandalous, but they are indicative of poorly managed incentives. Examining incentives is well within the purview of ethicists. The article is solid. The Slashdot summary is what it is.
  • And that's why.... (Score:4, Insightful)

    by roc97007 ( 608802 ) on Tuesday October 02, 2012 @01:25PM (#41528775) Journal

    this [slashdot.org] is a bad idea.

    • Why? Are you suggesting that the scientific studies are suddenly somehow not living up to standards of repeatability and peer-review?

      We already have ways of dealing with these issues...

  • by Anonymous Coward

    It would be fun to know how many of those were made in China (a country with a record of forged and fraudulent papers) and how many are from the US.

  • by drerwk ( 695572 ) on Tuesday October 02, 2012 @01:28PM (#41528833) Homepage
    Misconduct, Not Error, Is the Main Cause of Medical Scientific Retractions

    Other than Hendrik Schön, are there some in math or physics who are as likely to commit misconduct?
    http://www.nature.com/nature/journal/v418/n6894/full/418120a.html [nature.com]
    • by ceoyoyo ( 59147 )

      Life sciences. Not medical. That was in the first sentence of the summary.

      Yes, math and physics have issues with misconduct. The article you link to mentions several physical scientists who think it's a problem. You identified a famous one. Retraction Watch lists others, quite a few in chemistry for some reason. Complete fabrication might be a bit less common for the reasons mentioned in your article, but I have no doubt that there's data pruning, faking extra results because you don't have time to do

    • by fermion ( 181285 )
    And this is more proof that life science (medical science; about half of the articles seem to be "medical research") is not science. It is based too much on what people want to believe, too much on making a profit off pushing drugs, and too little on rigorous science. We know that many articles are paid for, written by ghost writers. We know that drug dealers want the drugs to be safe for kids, but really don't know or won't pay to do the proper research. We know that cancer is a business, and the research is
  • It's not like they *intended* to get caught.

  • If a researcher follows proper procedure but ends up with an incorrect result, it's still valid science. Perhaps it's the exception to some theory that will lead to later breakthroughs in the future. Simply being incorrect is not a reason to retract. Rather, a retraction is wiping the slate clean, hoping to forget that the research was ever done. The only reason to do that is if the research itself was unethical.

  • That is like suspected murder. It needs to be clearly proven or the accused needs to admit to it. Just because there is a whisper campaign alleging fraud from someone doesn't mean it is automatically the case.

    An honest journalist would have separated "demonstrated fraud" from "suspected fraud".
  • by Anonymous Coward

    Turns out they made up all the retraction numbers

  • by Jimmy_B ( 129296 ) <jim@jimrando m h . org> on Tuesday October 02, 2012 @02:06PM (#41529355) Homepage

    You might be tempted to think that this means ordinary errors aren't as common as we thought. Lots of papers - actually most papers, at least in medicine - are wrong for reasons like the author being confused, doing the statistics wrong, or using a type of experiment that can't support the conclusions drawn. But merely publishing a paper that's bullshit? That usually isn't enough to trigger a retraction, because retracting papers looks bad for the journals. Only an accusation of Serious Willful Misconduct can reliably force a retraction.

    • You might be tempted to think that this means ordinary errors aren't as common as we thought. Lots of papers - actually most papers, at least in medicine - are wrong for reasons like the author being confused, doing the statistics wrong, or using a type of experiment that can't support the conclusions drawn. But merely publishing a paper that's bullshit? That usually isn't enough to trigger a retraction, because retracting papers looks bad for the journals. Only an accusation of Serious Willful Misconduct can reliably force a retraction.

      Bingo. Mod to +5.

    • Parent is right. Small errors which don't affect the outcome are published as short "correction" notes. Larger, more subtle errors are corrected by the author and/or whoever noticed he was wrong writing a new paper which critiques the old one. But the original paper remains, because it's a useful part of the dialogue.

      (And *that* is why you should always do a reverse bibliography search on any paper you read.)

  • by Ichijo ( 607641 ) on Tuesday October 02, 2012 @02:12PM (#41529415) Journal

    The papers responsible have been retracted.

    We apologize again for the misconduct. The papers responsible for retracting the papers that have been retracted, have been retracted.

    The researchers hired to write papers after the other papers had been retracted, wish it to be known that their papers have just been retracted.

    The new papers have been completed in an entirely different style at great expense and at the last minute.

    ---

    Mynd you, m00se bites Kan be pretty nasti ...

  • The whole idea of an academic ecosystem distinct from the reality that the rest of the world operates in is an elitist adaptation of medieval socio-political structures. Granting someone an insulated job from which they cannot be removed is ridiculous under any conditions. In the current model, whether someone publishes a peer-reviewed article on something is irrelevant to whether they know what they are talking about.

    The REAL peers are the folks doing work in the profession day in and day out. As a rule

    • by Aardpig ( 622459 ) on Tuesday October 02, 2012 @04:18PM (#41531097)

      The REAL peers are the folks doing work in the profession day in and day out.

      As an astrophysicist in a research University, I'd like to know where these REAL peers are. I thought I was the expert, but now you tell me there's someone working hard at an astrophysics day job — so hard, in fact, they're too busy to review the papers I write while quaffing champagne by the bucket-load in the penthouse suite of my ivory tower.

      I'm all ears.

    • The REAL peers are the folks doing work in the profession day in and day out. As a rule most peer reviews are conducted by people with a decidedly academic focus - the experts in the field are working day jobs that don't afford them time to participate in silly self congratulatory exercises.

      And in most scientific fields, those folks are overwhelmingly to be found at academic institutions, and most of those who aren't in academia are in government. Corporate R&D is almost all "D" these days. There used to be a lot more research and publication, and peer review, by people outside academia--in light of your username, you might want to consider the history of Bell Labs, and how sad that history's been in recent years.

  • Getting it wrong is an important part of doing science. Papers with errors should be corrected by new publications, not retracted. The incorrect paper inspired the correct one, and so is a useful part of the dialogue. Also, anyone else who has the same wrong idea can follow the paper trail, see the correction, and avoid making the same mistake again.

    Classic but extreme example: the Bohr model of the atom, with the electrons orbiting the nucleus like planets around a star. It's wrong. Very wrong. But we s

    • We also teach it because it works. Sure it's wrong. Sure it gives the wrong answers in a lot of cases. But it gives the right answers for some useful cases.

      And it isn't a classical description. It's a quantum theory (not quantum mechanics though).

  • Since PubMed was the source of the data, it really is only applicable to medical science. I'd like to see how the rates of misconduct compare between different kinds of scientific publications.
    • Because http://www.ncbi.nlm.nih.gov/pubmed/23019611 [nih.gov] is a medical science paper?

      • Interesting. That's outside their declared scope: "PubMed comprises more than 22 million citations for biomedical literature from MEDLINE, life science journals, and online books. Citations may include links to full-text content from PubMed Central and publisher web sites."
        • Yeah, my day job involves processing PubMed data. It's a true PITA when irrelevant (for the stuff we care about) papers like that come in and mess with the author lists. And it's not uncommon...

  • One part of the problem is that peer review is set up to catch mistakes, not really to vet for misconduct. I have no idea what would be required to properly vet for misconduct, but I'm guessing it would be a good idea to statistically analyse any numbers presented; that should catch the most blatant cheaters.
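
    One such statistical screen, sketched below purely as an illustration (my own example, not something the study proposes): compare the leading-digit distribution of a paper's reported values against Benford's law, which fabricated numbers often fail to follow.

      import math
      from collections import Counter

      def leading_digit_freqs(values):
          # First significant digit of each non-zero value.
          digits = [int(f"{abs(v):e}"[0]) for v in values if v != 0]
          counts = Counter(digits)
          return {d: counts.get(d, 0) / len(digits) for d in range(1, 10)}

      benford = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

      # Hypothetical measurements pulled from a paper under review.
      reported = [23.1, 187.0, 3.4, 91.2, 14.8, 160.5, 2.9, 45.0, 33.3, 118.0]
      observed = leading_digit_freqs(reported)
      for d in range(1, 10):
          print(f"digit {d}: observed {observed[d]:.2f}, Benford expects {benford[d]:.2f}")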
