How Common Is Scientific Misconduct?

Hugh Pickens writes "The image of scientists as objective seekers of truth is periodically jeopardized by the discovery of a major scientific fraud. Recent scandals like Hwang Woo-Suk's fake stem-cell lines or Jan Hendrik Schön's duplicated graphs showed how easy it can be for a scientist to publish fabricated data in the most prestigious journals. Daniele Fanelli has an interesting paper on PLoS ONE where he performs a meta-analysis synthesizing previous surveys to determine the frequency with which scientists fabricate and falsify data, or commit other forms of scientific misconduct. A pooled, weighted average of 1.97% of scientists admitted to having fabricated, falsified or modified data or results at least once — a serious form of misconduct by any standard — and up to 33.7% admitted other questionable research practices. In surveys asking about the behavior of colleagues, admission rates were 14.12% for falsification, and up to 72% for other questionable research practices. Misconduct was reported more frequently by medical/pharmacological researchers than others. 'Considering that these surveys ask sensitive questions and have other limitations, it appears likely that this is a conservative estimate of the true prevalence of scientific misconduct,' writes Fanelli. 'It is likely that, if on average 2% of scientists admit to have falsified research at least once and up to 34% admit other questionable research practices, the actual frequencies of misconduct could be higher than this.'"
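The "pooled, weighted average" the summary mentions is, in its simplest form, each survey's admission count pooled over the combined sample, so larger surveys carry proportionally more weight. A minimal sketch, using invented survey numbers (not Fanelli's actual data, and simpler weighting than a real meta-analysis):

```python
# Sketch of a sample-size-weighted pooled admission rate across surveys.
# The survey numbers below are invented for illustration only; Fanelli's
# meta-analysis uses more careful weighting and real survey data.

surveys = [
    {"n": 500, "admitted": 12},  # hypothetical survey 1
    {"n": 800, "admitted": 14},  # hypothetical survey 2
    {"n": 300, "admitted": 6},   # hypothetical survey 3
]

def pooled_rate(surveys):
    """Pooled proportion: total admissions divided by total sample size."""
    total_yes = sum(s["admitted"] for s in surveys)
    total_n = sum(s["n"] for s in surveys)
    return total_yes / total_n

print(f"pooled admission rate: {pooled_rate(surveys):.2%}")  # 32/1600 = 2.00%
```

With these made-up numbers the pooled rate happens to land near the paper's 1.97% figure, which is purely illustrative.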
  • by thetoadwarrior ( 1268702 ) on Saturday May 30, 2009 @10:32AM (#28149641) Homepage
    Scientists are humans too and a job won't change some humans from being cheats.
    • by dword ( 735428 )

      The issue here is, when you're doing things like stem cell research, the future of human kind is in your hands. This is like saying "we shouldn't put people in prisons, because they're not animals and being killers or thieves doesn't make them animals". Unfortunately, you're right, because almost anyone can do "research" today.

      • Re: (Score:2, Insightful)

        by fooslacker ( 961470 )

        The issue here is, when you're doing things like stem cell research, the future of human kind is in your hands. This is like saying "we shouldn't put people in prisons, because they're not animals and being killers or thieves doesn't make them animals". Unfortunately, you're right, because almost anyone can do "research" today.

        I don't really understand your point about people and prisons. I suspect there is a typo or something in there but I would like to address the first sentence.

The level of importance of a task doesn't make people more or less likely to cheat. And when I say "people" I mean a sample community en masse (in this case the research community). I suspect there is little that will make some people cheat, while others cheat quite easily, so I'm really talking about the statistical chance of the random community mem

    • by rzekson ( 990139 ) on Saturday May 30, 2009 @11:19AM (#28149993)
Moreover, you rarely become a professor at a major university or some other distinguished position only on the basis of being talented; it is much more important that you are skilled at writing and inter-personal politics, manipulative both in terms of being able to sell your research and in terms of luring grad students, junior researchers and funding agencies to work for you or to pay you. Unfortunately, the same manipulative skills you need to acquire to become successful make you potentially more capable of cheating. I don't mean to insult anyone here by implying that it will actually make you more likely to cheat; only that it's easier for you to cheat because you are skilled at manipulating others (this being said, arguably the line between skilled manipulation and outright cheating is not as crisp and well-defined as one might hope). Indeed, sometimes cheating happens unwittingly; I have witnessed it on multiple occasions, when a famous professor would write a pile of outright bullshit in a paper; not intentionally, but because his bullshitting skills and confidence were orders of magnitude above his raw technical competence.
      • Other factors are ego, an intense desire to feel needed/wanted etc. These emotions along with the points that you made can lead one off the straight and narrow path.
      • Re: (Score:3, Insightful)

        by Miseph ( 979059 )

One could also argue that if one is manipulative and politic in many aspects of their life, they are most likely manipulative and politic in the others as well. If somebody were always drunk at home, drunk at work, and drunk while in public... why the hell would you think they're always sober while driving? If a scientist is always manipulative with their family, their co-workers, and while attending social events... why the hell would you expect they're always forthright while doing research?

      • by yali ( 209015 ) on Saturday May 30, 2009 @02:56PM (#28151691)

        you rarely become a professor at a major university or some other distinguished position only on the basis of being talented

        I assume you mean "book-smart at science," in which case, you're right.

        it is much more important that you are skilled at writing

        Being able to effectively communicate your results is critical for scientists. That isn't a bad thing. There's no point in doing science if you don't or can't tell anybody what you did and why it matters.

        and inter-personal politics, manipulative both in terms of being able to sell your research and in terms of luring grad students, junior researchers and funding agencies to work for you or to pay you.

        You're putting a bad spin on this with "manipulative." Most science nowadays involves teams and collaborations; very few discoveries are made by the lone guy in his garage with a bunch of test tubes. If you are working in any area where you cannot go it completely alone, you need to be something that's an even dirtier word on Slashdot than "manipulative." On top of knowing your science, you need to be an effective... wait for it... manager (gasp!).

        As for the funding... most funding is peer reviewed. What is wrong with telling scientists that they cannot have scarce resources unless they can convince experts in their field that the research is worth funding? Can you think of a better way to fund science?

        Unfortunately, the same manipulative skills you need to acquire to become successful make you potentially more capable of cheating.

        Do you have any evidence to back this up? Good people skills and Machiavellian manipulation are not the same thing.

        It seems more plausible to me that if you're a scientist who works in a highly collaborative team environment and regularly gets funding from the bigs (NSF, NIH, etc.), it would be harder to last as a successful cheat. Somebody who works mostly solo or with just a couple of grad students can send off their results to a journal, and they just have to look plausible to the editor and journal referees. The socially skilled scientist who has a big team has to slip their cheating past the grad students who did the hands-on work. If they're attracting lots of funding, they are going to get close scrutiny, and it's going to be hard to keep getting grants if nobody can replicate their work. And if they are well networked and therefore well known, there are going to be lots of people trying to replicate the results so they can build on them.

        I have witnessed it on multiple occasions, when a famous professor would write a pile of an outright bullshit in a paper; not intentionally, but because his bullshitting skills and confidence were orders of magnitude above his raw technical competence.

        I don't know about your field, but in my experience these are the people with enormous targets on their backs. Good scientists are smart enough to recognize bullshit, or at least suspect it. And the young upstarts, who haven't been around long enough to be impressed by Professor X's reputation, see an opportunity to make their bones by taking down a famous blowhard. The system ends up self-correcting pretty well.

    • by syousef ( 465911 ) on Saturday May 30, 2009 @11:39AM (#28150147) Journal

      Scientists are humans too and a job won't change some humans from being cheats.

I see about half a dozen comments along those lines, but giving up and saying "c'est la vie" isn't constructive. Our scientific systems and institutions should have better checks and balances. Many jobs/professions include monitoring and auditing to prevent corruption as standard practice. Some are better, some are worse. Regardless, the checks and balances on scientists exist but are antiquated and ineffective. The institutions and traditions are outdated. We can do better!

      • by ColdWetDog ( 752185 ) on Saturday May 30, 2009 @11:54AM (#28150263) Homepage

        Our scientific systems and institutions should have better checks and balances.

        They do: science. While you can game the system (grants, publications, fame and fortune) you can't game science forever. If it's real, it's repeatable. Somebody can do it (if it's important enough). If it's not important enough and the information gets stuffed in some hard drive somewhere - no big deal.

        Sure, money can be wasted. People can be injured. Reputations can be trashed. But in the end if it's real and important someone else will look into it and either confirm or deny it. It may take years or decades, but it will happen.

Patience.

        • by tgv ( 254536 ) on Saturday May 30, 2009 @04:38PM (#28152691) Journal

          Have you got any idea how difficult it is to refute an experimental outcome, at least in the less exact sciences? It's not only that you can create a gazillion possible deviations between your set-up and the one from the article (making direct comparison difficult), you will also need to run it with a pretty large subject group if you want to have enough power (making it expensive and time consuming), and then you're going to have problems publishing your article (reviewers and editors don't like null effects). In short, there is no profit in it. Most people, and researchers are people, are in it for the money, prestige, whatever, and replicating a study generally doesn't get you funding, prestige, publications. So guess what happens? The world, at least the part that does experimental psychology, gets stuck with 90% junk publications. And that's being conservative.
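tgv's point about needing "a pretty large subject group if you want to have enough power" can be made concrete with the standard normal-approximation sample-size formula for comparing two groups. The effect size and power targets below are illustrative assumptions, not figures from any particular study:

```python
# Rough sketch of why replications are expensive: the usual
# normal-approximation for the per-group sample size needed to detect
# a standardized effect d with a two-sided two-sample test:
#     n per group ~= 2 * ((z_alpha2 + z_beta) / d) ** 2
# z_alpha2=1.96 and z_beta=0.84 are the standard constants for
# alpha=0.05 (two-sided) and 80% power; d=0.3 is an assumed smallish effect.
import math

def n_per_group(d, z_alpha2=1.96, z_beta=0.84):
    """Approximate per-group sample size for a two-sample comparison."""
    return math.ceil(2 * ((z_alpha2 + z_beta) / d) ** 2)

print(n_per_group(0.3))  # 175 subjects per group for a smallish effect
print(n_per_group(0.8))  # 25 per group for a large effect
```

The steep growth as the effect shrinks (the formula goes as 1/d²) is exactly why replicating a modest-effect study is expensive and time consuming.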

          • by winwar ( 114053 ) on Saturday May 30, 2009 @06:55PM (#28153881)

            "Have you got any idea how difficult it is to refute an experimental outcome, at least in the less exact sciences?"

            The inability to recreate the experiment is a basic method to refute the outcome.

            If you can't get their procedure and/or their procedure doesn't work then the outcome is very questionable...

        • by syousef ( 465911 )

          Sure, money can be wasted. People can be injured. Reputations can be trashed. But in the end if it's real and important someone else will look into it and either confirm or deny it. It may take years or decades, but it will happen.

          It shouldn't take years or decades. We should have a better system....and while it's not a perfect world, people sit there and assume the messed up system we have is the best we can do. That just isn't true.

        • by dbrutus ( 71639 ) on Saturday May 30, 2009 @09:25PM (#28154837) Homepage

          Here's a sensible requirement. If you submit a paper, you submit your data and sufficient information that anybody can rerun your stuff. The whole MBH 98 idiocy was largely about how climate scientists would dance around releasing their data and methods. In the UK, if you take the public's money, you can't do that. In the US you can. The US should follow the UK on this one.

      • Re: (Score:3, Insightful)

        by phantomfive ( 622387 )
        What do you suggest? For the most part they seem to be working, to me. How would you change them?
      • Re: (Score:3, Insightful)

        by Jeff DeMaagd ( 2015 )

Regardless, the checks and balances on scientists exist but are antiquated and ineffective. The institutions and traditions are outdated.

        And...?

        I'd be more interested in what you think would fix it rather than another statement that the problem exists, because that's not all that constructive either. I take it that you are saying that peer review isn't a sufficient means of monitoring and auditing?

      • I see about half a dozen comments along those lines, but giving up and saying "c'est la vie" isn't constructive. ... We can do better!

        You mean, you don't want us to objectively seek the truth?

        Sorry, couldn't resist.

      • Re: (Score:3, Funny)

        by Requiem18th ( 742389 )

The solution is to give up on science and rely on religion. Ray Comfort is a genius and Kent Hovind was right all along! </sarcasm>

      • Our scientific systems and institutions should have better checks and balances. Many jobs/professions including monitoring and auditing to prevent corruption as standard.

There are checks and balances. The first is that you don't get into research if you want to cheat your way to the top; you get into law school or politics if you want to do that. The second is repeatability. The third is peer review. What other checks and balances are there? I can't think of any that would actually do anything besides slow down research.

        Outdated? I don't see anything to replace it.

      • If your incentives are aligned wrongly, policing will only go so far: it's like trying to destroy black markets by hiring more cops.

Most people don't go into science because they want to fabricate data. Sure, people want to be famous, but most of the data being fabricated isn't even anything that would make you famous (only a handful of high-profile examples are). You need a culture that neither encourages nor rewards attempts to meta-game the academic system.

      • "Our scientific systems and institutions should have better checks and balances. Many jobs/professions including monitoring and auditing to prevent corruption as standard."

        checks = independent repeatability
        balance = independent peer review

        "We can do better!"

If anyone has a more robust system with a better track record than science, I'm all ears.
    • Re: (Score:2, Insightful)

      by yourassOA ( 1546173 )
But scientists, like doctors, are supposed to be trustworthy. They are experts whose opinions seem to have more weight than the average person's. Now what mechanism is in place to check up on and verify everything they do? Where is the regulation/punishment for breaking regulations?
      Take construction for example. If I build a house there are electrical, plumbing, foundation, insulation and final inspections. Why? Because people cheat and someone has to ensure no one is cheating. If the rules are not followed someon
      • Now what mechanism is in place to check up and verify everything they do?

        It's not possible to verify every experiment that is run and every conclusion that is reached. That said, major experiments and conclusions are re-run and re-tested at various levels all of the time. Science is constantly building upon the results of previous experiments, and it tends to become fairly obvious if an important conclusion is wrongly reached because no one is able to replicate it or people find evidence that directly contr

"What makes scientists so much better than the average person that they don't have to be accountable like everyone else?"

Not accountable? The fake stem cell guy has lost his career and will never get another research job.

        "Who is regulating scientists?"

When talking about known hazards it's usually other scientists working for government regulators (e.g. FDA, EPA, etc.). What standards do you suggest for monitoring/regulating the unknown? Besides, TFA is talking about fraud; there is no central authority to
    • I find this funny, because over 90% of wikipedia is totally accurate. :P

      Maybe I should be citing it as a source, rather than scientific journals. ;)

Of all the scientific articles I have read, I can't think of any apparent copy-cat actions. However, pure ignorance and clumsiness are very, very frequent. I can live with typos and errors, if they don't change the big picture.

However, cheating is another thing. I am aware of people presenting facts technically correctly, but in deceitful manners which give the impression the background research is well done. However, scrutinize what was actually done and it falls apart. Yet, what can you do about that. I

    • by Mindcontrolled ( 1388007 ) on Saturday May 30, 2009 @10:43AM (#28149717)
True, giving a certain "spin" to your interpretation of correctly presented data is common - but not necessarily a terrible thing. As you said, it will be scrutinized and filed in the big "misinterpretation" folder. As for active misconduct - it probably happens more often than reported, but thankfully gets caught internally most of the time before it is published. I can only offer anecdotal evidence, but while doing my PhD work, one of my colleagues tried to get away with made-up results. Head of department smelled a rat, checked the data and promptly fired the guy without hesitation. PhD student one day, unemployed with revoked visa the next day....
  • by erroneus ( 253617 ) on Saturday May 30, 2009 @10:36AM (#28149673) Homepage

    It is often cited that crappy, broken or incomplete code is often shoved out the door by business in order to meet deadlines. Quality or even truth are sacrificed for business reasons.

Why would R&D be any different? Big businesses often have quotas and other incentives for patent filing and the like. Outside funding sources pressure even pure research activities so that they can get their hands on new technology, or even for silly things like a name being recorded as "first to" do something.

    I am actually a bit surprised that the numbers aren't a bit higher.

    • by BrokenHalo ( 565198 ) on Saturday May 30, 2009 @11:21AM (#28150013)
      It is often cited that crappy, broken or incomplete code is often shoved out the door by business in order to meet deadlines.

R&D is different from software development because the latter usually doesn't need to present conclusions or premises to the community at large. It can (and often does) hide the source and get away with saying "no warranty yada yada..."

      By presenting your research in reputable journals, you are exposing it to the examination and criticism of your peers. Thus in theory anyone else can pick up your work and reproduce it. One aspect of Hwang Woo-Suk's work that brought about his demise was that others failed to be able to reproduce his work. Unfortunately for him, his claims were so grandiose that alarm bells rang and people started looking at his work more closely.

      The eventual fallout can be seen as evidence that the system works. We have little way of knowing how much dodgy work slips under the radar in the short term, since people don't get paid much for reproducing other scientists' work, but at least there is a mechanism where it CAN happen.
Well, in R&D, you have to produce a product. If you don't produce something, they cancel it and move on. If you are just doing research at a university, you can possibly drag it out a bit longer. Business is about results, and if the ROI doesn't look good, your project gets canned. A university or thinktank is still interested in the ROI, but not as much; if you can show some merit and still get funding from investors or grants, then you will probably be able to continue.
      • Re: (Score:3, Insightful)

        Ever heard the phrase "publish or perish?" Trust me, there's just as much pressure in academia to produce results within a specified time frame as there is in industry. The organization's measurement is different -- publications vs. ROI -- but the situation of the individual researcher is much the same.

    • Re: (Score:2, Insightful)

      by Foggiano ( 722250 )

      I don't quite see how you came to this conclusion, especially given the text of this article. The authors were specifically looking at misconduct in research published in peer-reviewed journals. The vast majority of material published in these journals originates from universities, not industrial research and development.

      I would suggest, in fact, that misconduct is probably at least as common if not more so in a university environment than in an industrial one. Tenure-track professors are under enormou

    • Re: (Score:3, Insightful)

      It is often cited that crappy, broken or incomplete code is often shoved out the door by business in order to meet deadlines. Quality or even truth are sacrificed for business reasons.

      Why would R&D be any different?

In a sense R&D is worse, in that it's further removed from corrective mechanisms. If you sell consumer tech that doesn't work, chances are fairly good that it will harm your business. Depending somewhat on your field, if you publish research that is arguably correct but meaningless or highly misleading, nobody will care. Your funding source doesn't even care, as long as it looks enough like real science that they can get away with continuing to support it.

  • Relative to what? (Score:5, Insightful)

    by Subm ( 79417 ) on Saturday May 30, 2009 @10:38AM (#28149683)

    If we accept that scientists are human like anyone else, we accept that scientists, like others, will make mistakes that get bigger and go more wrong than they anticipated. Some may intentionally commit fraud.

    How common is scientific misconduct relative to other types of misconduct seems a more relevant question.

Also: what can we do to decrease it, and how can we lessen its impact?

  • by Firkragg14 ( 992271 ) on Saturday May 30, 2009 @10:39AM (#28149691)
But if there are so many examples of scientists providing fake data, how do I know the results of the survey in TFA are correct?
    • The question you really should be asking yourself is: Do you like the results or do you need to fake your own?

Corrupt police officers? Corrupt politicians? Book-cooking finance people? Managers breaking rules or "making up data" to justify their projects? I could add many others. Is it above or below average? If below average, then the reputation is earned.
    • At least in science there is a built-in way of self-correction. Publish all the made up crap you want, but when no one can duplicate the feat don't be surprised when the community calls you out on it. Tell me where you go to find the guy double checking the work of the corrupt police officer or judge when they perjure themselves to ruin your life and your ability to defend yourself. Find me the people replicating every aspect of your grafty mayor's work to make sure he's not full of shit...

      I can't think of anywhere else in life that there are as many checks and double checks and accountability as in the field of scientific research. Just because no one catches it immediately means nothing. If it was fake no one will be able to replicate it. A single study proves very little and likewise does very little damage, so if no one cares enough to replicate it chances are slim that it will cause harm.
  • Really? (Score:5, Funny)

    by Jeff Carr ( 684298 ) <(ofni.rracffej) (ta) (moc.todhsals)> on Saturday May 30, 2009 @10:42AM (#28149707) Homepage
And how exactly are we supposed to believe his study?
  • by TinBromide ( 921574 ) on Saturday May 30, 2009 @10:43AM (#28149709)

    and up to 33.7% admitted other questionable research practices.

I wonder if this refers to shortcuts taken because of common knowledge. Such as: if you use water as a control lubricant, you might test its wetness, density, purity, viscosity, etc., to compare against water with a slippery polymer in it. I wonder if these "questionable" practices involved taking distilled water, making sure it's pure distilled water, and then pulling the other factors off of charts for distilled water, or if "questionable" means something far worse.

The reason I bring this up is that hindsight is 20/20; everybody who's smart knows every mistake they've made, and maybe that's all they're fessing up to.

    • Re: (Score:2, Insightful)

      Aye, I think I go with your interpretation here. I personally would confess to "questionable practices" of that kind - not thoroughly testing each and every factor that might have influenced your experiment, because it is "common knowledge" that the factors in question won't matter. Deadlines looming ahead, supervisor chewing your ass, you take the shortcut. No research is perfect. In hindsight you always find some things that you should have tested to be really sure, but real life is not perfect. I'd file
  • Yeah... (Score:3, Interesting)

    by dword ( 735428 ) on Saturday May 30, 2009 @10:45AM (#28149731)

    ... and 78% of the surveys are made up on the spot.

    • Re: (Score:3, Informative)

      by icebike ( 68054 )

      Meta Analysis does that.

      There are more than a few incidents of Meta Analysis including the same data set from multiple places simply because it was shopped around for publication under different names.

      Meta Analysis combines vaguely related studies, using data sets of suspect quality, which you don't fully understand, which have already undergone filtering and editing you won't find out about, and which were collected under conditions you don't know, for motives you can't be sure of, by people you don't know

    • Re:Yeah... (Score:4, Funny)

      by Epistax ( 544591 ) <epistax AT gmail DOT com> on Saturday May 30, 2009 @11:34AM (#28150111) Journal
      Check your math. I got 83%.
  • not surprising (Score:5, Insightful)

    by Anonymous Coward on Saturday May 30, 2009 @10:50AM (#28149765)

    Disclaimer: I'm a scientist.

Scientists will behave much better as soon as society (or perhaps the government at least) understands that if you want reliable information, you actually have to treat your scientists well.

Now, do not get me wrong, some countries, especially the US, invest quite a lot in science. But the problem is that the whole system is rotten to the core. It makes almost no sense at all for a young graduate to stay in a University/Institute. Pay will be low, and you have (in most countries) no job security. In Europe you either get a nice job at a company, or you go around taking post-docs for 5-10 years, hoping to get lucky. Working crazy hours with no holidays. Most, in the end, go to a company anyway (having lost quite a lot of money in the process).

    Often you are expected to go abroad, and unless you are lucky this leaves you with no good way to take care of your pension. Then if you want to return, somebody else took your place at university.

There are two ways to stay in the system: either you are lucky or you lie like hell.

Now, people may say that if you're good you do not need luck. But remember that for high impact publications you need a lot more than good ideas and good skills. In research it is perfectly normal to conclude after 2 years that your hypothesis is false. That can be great science, but it is hardly publishable in a good journal. People like positive results, and the reviewer system actually encourages you to confirm generally accepted ideas, not to falsify them.

    Well, I could go on but I am sure others will.

    To be honest, I do not even get angry anymore when I suspect someone may have done something "questionable". It's just sad.

  • by syousef ( 465911 ) on Saturday May 30, 2009 @10:54AM (#28149803) Journal

    The truth is the way that scientific institutions are set up isn't very scientific. There is definitely an attempt at oversight and impartiality but it's very easily corrupted by a wide variety of people with a wide variety of interests and ulterior motives. There aren't nearly enough checks and balances.

    There are many things wrong with the system. Some include:

- Almost anyone can commission a study, write a book etc. and it's left to the scientific community to place value on that work. Viewed on its own, without knowledge of the scientific community's opinion, it can be difficult to tell how valid the work is. For example Wolfram's "A New Kind of Science" has been largely debunked as mostly a rehash of old ideas (minus attribution), but it took some time for this to become clear and in the meantime it was popularized in the press as a breakthrough work.

    - The only real form of moderation is whether or not work has made it into a respected journal. Other scientists are then expected to publish corroborating work etc. However, until this is done, it is very difficult to judge the validity of the work, and papers get published that are later discredited. (Cold fusion anyone?) Likewise, work that should be published is often initially rejected. The primary motivation of a lot of the scientific journals is financial gain. In fact the entire publishing system is an antiquated remnant of the last 2 centuries and doesn't belong in an Internet connected world, yet publication is still the primary tool by which a scientist's work gets recognized.

- Speaking of antiquated: the institutions, committees and governing bodies of science are about as scientific as a mothers' group - it's all professional bitching and posturing for status. Real monkey hierarchy stuff. A lot of decisions get made on the basis of status. It's particularly bad for applied science professions like the medical profession, where you hear stories about doctors who should have been prevented from practicing continuing for many years before being disciplined or quietly removed. At the senior level, scientists are often more politician than anything else, as they need to secure funding and approval from political bodies. Then you see students who have to work their way up in status being treated like crap, "paying their dues," as noted in a story posted a few days ago about a student who died in a chemical fire.

- Speaking of status, there is an emphasis on using scientific jargon to exclude the community at large. Some scientific ideas require complex specialized language and university post graduate mathematics to understand, and so justify such specialized language. However, even simple concepts must be described in overly complex specialized language to be accepted for journal publication. This is absolutely backward. We should have a system that requires simplified language where possible and a layman's overview attached early in the document. Instead, reading a scientific paper if you're not a specialist in the field is an art that you learn when you do post graduate work. If you assess a published article for readability, you'll find the statistics you generate tell you that it's dense and difficult to understand. There are journals and subjects that allow simpler and more informal language, but they are the exception rather than the rule and usually apply as addendum publications for applied fields. (Again I'm thinking of medicine. My own post grad work is in astronomy, so I'm very much a lay reader when it comes to medicine, and when I've tried to read medical papers it's usually been an interesting exercise.) Any real simplified content seems to get presented in slide form at conferences, and presentations are often a better way of getting an overview.
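The readability-statistics point can be sanity-checked with a standard formula like Flesch Reading Ease. A minimal sketch, assuming a crude vowel-group syllable heuristic and invented sample sentences (real readability tools use better syllable counting):

```python
# Crude sketch of the Flesch Reading Ease score:
#   206.835 - 1.015 * (words/sentences) - 84.6 * (syllables/words)
# Higher scores mean easier reading. The syllable counter below is a
# rough vowel-run heuristic, good enough to show why dense academic
# prose scores far below plain prose. Sample sentences are invented.
import re

def count_syllables(word):
    """Approximate syllables as runs of vowels (a crude heuristic)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    n = len(words)
    return 206.835 - 1.015 * (n / sentences) - 84.6 * (syllables / n)

plain = "We tested the drug. It worked. The rats lived."
jargon = ("Pharmacokinetic characterization demonstrated statistically "
          "significant attenuation of pathophysiological progression.")
print(flesch_reading_ease(plain) > flesch_reading_ease(jargon))  # True
```

On these samples the plain sentences score in the "easy" range while the jargon sentence scores deeply negative, which is the gap the comment is describing.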

    I could go on about the shortcomings of various scientific institutions but I won't.

    My point is that when you have a system that is so open to corruption, with so few checks and balances, and so much baggage inherited from institutions that began in the dark ages, it's no surprise that you end up with science that's much less than perfect.

    • Re: (Score:3, Insightful)

      The primary motivation of a lot of the scientific journals is financial gain. In fact the entire publishing system is an antiquated remnant of the last 2 centuries and doesn't belong in an Internet-connected world, yet publication is still the primary tool by which a scientist's work gets recognized.

      Let's not go there, lest I rant all evening. I am due for a pub-crawl, don't wanna miss it...

      Short version of the evil socialist scientists rant: I do government-funded research, then have to PAY a private enterprise to publish my data, peer review is done for free by other scientists, and then I have to PAY again for reprints, and the money-grubbing bastards charge through the nose for the subscription too, so that the local library can't even afford online access to the journal I published in.

      • Re: (Score:3, Informative)

        by droptone ( 798379 )
        Journal of Negative Results in Biomedicine [jnrbm.com]. Personally, I'd prefer it in a simple website with a good database attached, especially for the social sciences where there are interesting negative results that may come as an afterthought (birth order effects, finger digit ratios before they became popular in the past ~10 years, etc in psychology); that may or may not be the case for other fields.
    • by icebike ( 68054 )

      My point is that when you have a system that is so open to corruption, with so few checks and balances, and so much baggage inherited from institutions that began in the dark ages, it's no surprise that you end up with science that's much less than perfect.

      It's also no surprise when you end up with science that is horribly incomplete.

      We need to place more emphasis on using the internet as a repository for non-published works. (Like DeepDyve http://www.deepdyve.com/corp/about [deepdyve.com] ).

      With this comes the boogie man of the kook "scientist". (Which unfortunately includes any scientist who is not yet published).

      We need to start using something like the Web of Trust found in key signing http://en.wikipedia.org/wiki/Web_of_trust [wikipedia.org] to document the credentials of scientists.

    • by glwtta ( 532858 )
      Almost anyone can commission a study, write a book, etc., and it's left to the scientific community to place value on that work. Viewed on its own, without knowledge of the scientific community's opinion, it can be difficult to tell how valid the work is.

      Well, yeah, if you are not qualified to judge the validity of a scientific work, you will not be able to judge it. What do you propose instead? That people not be allowed to publish studies and books?

      In fact the entire publishing system is an antiquate
      • by syousef ( 465911 )

        Well, yeah, if you are not qualified to judge the validity of a scientific work, you will not be able to judge it. What do you propose instead? That people not be allowed to publish studies and books?

        No, just that there be a system of rating the quality of the work.

        You've answered your own question there: the value of the publishing system is no longer the dissemination of publications (i.e. anyone can just dump their research on the internet now) but the review process and the reputation those journals have.

        • by glwtta ( 532858 )
          Is there any discipline where I can pick up a paper and immediately tell the quality of the work (or look it up somewhere) without understanding the entire topic?

          Well, no. You can't judge the quality of something without understanding it. I'm not sure what you mean.
    • Your list applies to academia as a whole, and actually a lot more to non-science disciplines. Not that it contradicts your points, however.
    • (Again I'm thinking of medicine. My own post grad work is in astronomy so I'm very much a lay reader when it comes to medicine, and when I've tried to read medical papers it's usually been an interesting excercise)

      I think there are three main hurdles to comprehending scientific literature:

      1) Obtuse grammar. This is universal. Why describe something in five words when you can use twenty?

      2) Jargon: Every field has its jargon, and may co-opt words from the vernacular and give them very specific meanings. This gets in the way of a simplified description.

      3) Intuition: Quite a lot of papers don't properly explain the intuition behind what they do. This is particularly rampant in fields that depend strongly on math. The rea

  • Faking the data. (Score:4, Insightful)

    by wfstanle ( 1188751 ) on Saturday May 30, 2009 @10:54AM (#28149807)

    Scientists definitely sometimes fudge their data so that it will support their theories. Scientists are human and not perfect; it's part of human nature. That is where peer review comes in. A true scientist's work has to stand up to peer review, and this is where the fudging of data is often uncovered. The problem is that much of the research going on is cloaked in secrecy by governments and corporations, and proper peer review doesn't happen.

    This brings to mind an incident in history where the scientist was right but his data was just too good. I'm talking about Gregor Mendel and his work on genetics. Later statistical analysis of his data indicates that it was very unlikely he actually obtained it. He probably got very close to the experimental result he predicted, but it was not good enough, so he fudged his results. It wasn't until long after that this inconsistency in the data was uncovered. Was he right? Absolutely, but his data is suspect nonetheless.

  • by JanneM ( 7445 ) on Saturday May 30, 2009 @11:00AM (#28149855) Homepage

    2% - one in 50 - committing fraud to get ahead (or simply to keep their job) in a very competitive, volatile career environment. Sounds like it's in the right ballpark, and probably comparable to other professions. Some people are so career and status driven, and so unconcerned with the effects of their actions on other people, that they will break rules and cut corners no matter what the field.

    I do question the other figures, though, simply because "questionable research conduct" is such a nebulous categorization. You can delimit it in very different ways, all perfectly reasonable. You could even decide which number you want and then define the term in such a way that you reach it (a practice that would most likely itself be included in the term). Notably, the author excludes plagiarism, even though that is, for good reason, a serious offense in research, and one that I'd expect most surveys to include, not drop.

    Also, the numbers for incidents by colleagues are rather pointless, since there is no indication of how many colleagues each respondent has. If each participant has had a minimum total of eight colleagues altogether in their career up to this point, then the 14% rate fits very well with the self-reported 2% above. But of course, the participants do not know how many incidents they missed, and the number of times they mistakenly thought fraud was taking place is unknown. I would be very hesitant to read anything at all into the numbers about witnessed incidents.
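    A quick sanity check of that arithmetic (a back-of-the-envelope sketch, assuming, hypothetically, that each respondent independently observed eight colleagues, each with a 2% chance of ever having falsified data):

    ```python
    def p_witness(p: float, n: int) -> float:
        """Probability that at least one of n colleagues is a fraudster,
        if each independently has probability p of having falsified data."""
        return 1 - (1 - p) ** n

    # With the self-reported 2% rate and 8 colleagues:
    print(round(p_witness(0.02, 8), 3))  # → 0.149, close to the 14% colleague-reported rate
    ```

    So under those (strong) independence assumptions the two survey numbers are roughly consistent, which is the parent's point.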

  • Scientists are humans, just like anyone else. Frankly, I think 1.97% is pretty low considering it is the combined total of all "fabricated, falsified or modified data or results". Notice that all of those aren't quite equal either. "Tweaking" results to tease out the answer you want (while still unethical and damaging to scholarship) is not as bad as outright falsification. Especially since it is not always clear where the line is between "modifying data", and doing valid statistical analysis like throw

  • It starts while you are graduating.

    A big chunk, quite a huge piece, of graduation diplomas, certificates, etc. (depending on the country) is based on the most rabid form of falsification: "copy/paste". They are presented as something new, or at least as a "new" variation on a well-known theme, but there is nothing new in them. Just the same stuff written in different words.

    The sad fact is that faculties and science departments accept it.

    The good thing is that the large majority of these graduates will stay we

  • I see plenty of comments here from folks expecting that some scientists will do bad things for gain/fame/awards. However, science demands reproducible results and peer review. That's a safety net that catches a lot of bad science.

    • peer review (Score:3, Informative)

      by Anonymous Coward

      Can we please put a stop to all these people citing peer review as a sort of wonder cure?

      I peer review a lot of papers. And yes, it catches a lot of bad science. But most of that is just bad experimental design, bad writing skills, wrong conclusions, uninteresting stuff, etc.

      There is nothing I can do against some smart guy who makes up all the numbers but knows enough statistics to make them look plausible. It is often not feasible, or even impossible, to redo the experiments. I never heard anybody do tha

      • Re:peer review (Score:5, Insightful)

        by ceoyoyo ( 59147 ) on Saturday May 30, 2009 @01:43PM (#28151001)

        Peer review may not catch the journal article, but it eventually catches the faker.

        The problem is, the public seems to think that one paper published in a journal translates into "this is true." It doesn't. Far more common than outright misconduct are studies that are preliminary, contain an honest error, or are a statistical fluke.

        Journal papers are about sharing information, NOT about laying down Truth on the Record. When all the studies start consistently showing the same thing, THEN you can start thinking about believing it.

  • I find it odd that this is all based on Meta Analysis, which itself is still highly suspect.

  • The funding for the "research" is provided by an entity with an agenda other than pure research, e.g. having a vested interest in a particular outcome or finding. Nowhere is this more common than in the U.S. pharmaceutical industry, where entire ersatz journals have been published to provide the appearance of well-documented and peer-reviewed research.
    Beyond jailing those involved in such grand misconduct, I don't know where to draw the line, but I believe that separating profit from research, as far as po
  • gray area (Score:5, Interesting)

    by bcrowell ( 177657 ) on Saturday May 30, 2009 @11:26AM (#28150053) Homepage
    There's a big gray area. For instance, the Millikan oil drop experiment [wikipedia.org], which established quantization of charge, was arguably fraudulent. Millikan threw out all the data he didn't like and then stated in his paper that he had never thrown out any data. His result was correct, but the way he went about proving it was ethically suspect.
  • by Paul Fernhout ( 109597 ) on Saturday May 30, 2009 @11:38AM (#28150135) Homepage

    From:
        http://www.its.caltech.edu/~dg/crunch_art.html [caltech.edu]
    """
    The crises that face science are not limited to jobs and research funds. Those are bad enough, but they are just the beginning. Under stress from those problems, other parts of the scientific enterprise have started showing signs of distress. One of the most essential is the matter of honesty and ethical behavior among scientists.

    The public and the scientific community have both been shocked in recent years by an increasing number of cases of fraud committed by scientists. There is little doubt that the perpetrators in these cases felt themselves under intense pressure to compete for scarce resources, even by cheating if necessary. As the pressure increases, this kind of dishonesty is almost sure to become more common.

    Other kinds of dishonesty will also become more common. For example, peer review, one of the crucial pillars of the whole edifice, is in critical danger. Peer review is used by scientific journals to decide what papers to publish, and by granting agencies such as the National Science Foundation to decide what research to support. Journals in most cases, and agencies in some cases operate by sending manuscripts or research proposals to referees who are recognized experts on the scientific issues in question, and whose identity will not be revealed to the authors of the papers or proposals. Obviously, good decisions on what research should be supported and what results should be published are crucial to the proper functioning of science.

    Peer review is usually quite a good way to identify valid science. Of course, a referee will occasionally fail to appreciate a truly visionary or revolutionary idea, but by and large, peer review works pretty well so long as scientific validity is the only issue at stake. However, it is not at all suited to arbitrate an intense competition for research funds or for editorial space in prestigious journals. There are many reasons for this, not the least being the fact that the referees have an obvious conflict of interest, since they are themselves competitors for the same resources. This point seems to be another one of those relativistic anomalies, obvious to any outside observer, but invisible to those of us who are falling into the black hole. It would take impossibly high ethical standards for referees to avoid taking advantage of their privileged anonymity to advance their own interests, but as time goes on, more and more referees have their ethical standards eroded as a consequence of having themselves been victimized by unfair reviews when they were authors. Peer review is thus one among many examples of practices that were well suited to the time of exponential expansion, but will become increasingly dysfunctional in the difficult future we face.

    We must find a radically different social structure to organize research and education in science after The Big Crunch. That is not meant to be an exhortation. It is meant simply to be a statement of a fact known to be true with mathematical certainty, if science is to survive at all. The new structure will come about by evolution rather than design, because, for one thing, neither I nor anyone else has the faintest idea of what it will turn out to be, and for another, even if we did know where we are going to end up, we scientists have never been very good at guiding our own destiny. Only this much is sure: the era of exponential expansion will be replaced by an era of constraint. Because it will be unplanned, the transition is likely to be messy and painful for the participants. In fact, as we have seen, it already is. Ignoring the pain for the moment, however, I would like to look ahead and speculate on some conditions that must be met if science is to have a future as well as a past.
    """

    • Wow, thank you for linking that.

      That was written 15 years ago, and not much has changed. On the other hand, the more people in physics who think like this, the higher the chance we can start making changes.

    • It would take impossibly high ethical standards for referees to avoid taking advantage of their privileged anonymity to advance their own interests, but as time goes on, more and more referees have their ethical standards eroded as a consequence of having themselves been victimized by unfair reviews when they were authors.

      And how would the Vice Provost know this?

    • by radtea ( 464814 )

      The public and the scientific community have both been shocked in recent years by an increasing number of cases of fraud committed by scientists.

      This was published at more-or-less the mid-point of the most active part of my scientific career, counting from the end of my M.Sc. to the end of my last post-doc. In that relatively short span of years I encountered at least two cases of outright scientific fraud: pure fabrication of data, either from experiments that were never done, or from data that had peaks

  • I am an experimental physicist in solid state physics. I think this subject is low- to medium-prone to misconduct. Let's analyze the different aspects of it:

    a) Motivation: getting your thesis/paper finished quicker/better, getting research money

    b) Control: Is there a control that actually considers scientific behavior to be a fundamental good, or is it just important that nothing is uncovered?

    c) Ways of misbehaviour (I only write down what happened in the range of what I have seen/recognized (e.g. in other groups p

  • by PPH ( 736903 ) on Saturday May 30, 2009 @01:06PM (#28150789)
    How many researchers are having sex with the lab chimps?
  • meta meta (Score:2, Funny)

    by solweil ( 1168955 )
    Oh yeah? Well, I'm going to do a meta-meta-analysis to see how common meta-analyses are fraudulently conducted.
  • This is one beautiful example of 'non-research' crap being pushed out. They didn't do a proper 'field study' with well-defined and controlled questions. They took a heterogeneous collection of other people's published results and tried to mix them together. In this type of questionnaire-based study it is extremely important how you define the question. It is also important to ask the same question in different ways, and to control for the motives and the background of the person who answers it.

    For ex

  • by peter303 ( 12292 ) on Saturday May 30, 2009 @05:00PM (#28152879)
    With the cheating level of undergraduates rumored to be around half, I wonder how this declines to only two percent by the time you get your PhD. Two answers: (1) science cheating is under-reported, or (2) scientists check each other's results, especially if they are important. I'm in computational physics, where it's fairly straightforward to replicate another's results. Cheaters are discovered quickly. In other lab-based fields it may not be as easy to get caught.
