How Science Goes Wrong

dryriver sends this article from the Economist: "A simple idea underpins science: 'trust, but verify'. Results should always be subject to challenge from experiment. That simple but powerful idea has generated a vast body of knowledge. Since its birth in the 17th century, modern science has changed the world beyond recognition, and overwhelmingly for the better. But success can breed complacency. Modern scientists are doing too much trusting and not enough verifying — to the detriment of the whole of science, and of humanity. Too many of the findings that fill the academic ether are the result of shoddy experiments or poor analysis (see article). A rule of thumb among biotechnology venture-capitalists is that half of published research cannot be replicated. Even that may be optimistic. Last year researchers at one biotech firm, Amgen, found they could reproduce just six of 53 'landmark' studies in cancer research. Earlier, a group at Bayer, a drug company, managed to repeat just a quarter of 67 similarly important papers. A leading computer scientist frets that three-quarters of papers in his subfield are bunk. In 2000-10 roughly 80,000 patients took part in clinical trials based on research that was later retracted because of mistakes or improprieties. Even when flawed research does not put people's lives at risk — and much of it is too far from the market to do so — it squanders money and the efforts of some of the world's best minds. The opportunity costs of stymied progress are hard to quantify, but they are likely to be vast. And they could be rising."
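The low replication rates quoted above have a well-known arithmetic behind them. As a hedged illustration (the function and numbers below are mine, not the Economist's), here is the standard positive-predictive-value calculation: even honest, unbiased testing produces many unreplicable "positive" findings when most tested hypotheses are false and studies are underpowered.

```python
# Illustrative sketch (assumption: the usual PPV argument, not figures
# from the article): the chance a statistically significant result
# reflects a true effect, given a prior, statistical power, and alpha.
def ppv(prior, power=0.8, alpha=0.05):
    """Probability that a 'significant' finding is actually true."""
    true_pos = prior * power          # true hypotheses correctly detected
    false_pos = (1 - prior) * alpha   # false hypotheses passing the test
    return true_pos / (true_pos + false_pos)

# If only 1 in 10 tested hypotheses is actually true:
print(round(ppv(0.1), 2))             # 0.64 even with good power
# With weak power (0.2), typical of small exploratory studies:
print(round(ppv(0.1, power=0.2), 2))  # 0.31
```

With weak power and a low prior, fewer than a third of "positive" results are real, which is roughly the range the Amgen and Bayer replication attempts reported.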
  • by ChronoReverse ( 858838 ) on Thursday October 17, 2013 @06:03PM (#45158803)
    Are the numbers from this article just pulled out of a hat?
    • Are the numbers from this article just pulled out of a hat?

      But it was the very best kind of hat ...

      (apologies to a certain British mathematician)

  • No result that has not been replicated should be trusted. The problem is not that errors are made, but rather that results are trusted before they are replicated.

    • Re: (Score:2, Informative)

      by Anonymous Coward

      Exactly. As a long-time computer scientist, I can say that everyone in my field (Computational Linguistics) knows that published results are only good if they sound plausible and can be reproduced. This boils down to citations: bad papers (on average) tend to have low citation counts.

  • In science, a lot of time and resources are wasted on testing ideas that other scientists have tested before but never published, simply because the results weren't spectacular enough. Even though some negative results are buried in papers reporting findings deemed more worthwhile, and even though some negative results are discussed at scientific meetings, many scientists end up reinventing the wheel. How much is wasted? I don't know, but the answer could be shocking.
  • "trust but verify" is a good idea in many areas -- relationships, law, security -- not just science. But it's especially important in areas where published results establish precedent and serve as the basis of new results. Otherwise we end up with baggage that hampers future efforts. It's not just a matter of saying "oops, those results are invalid"; we also have to ask "ok, what other research have those results affected, and how does invalidation change things?"

    • by geekoid ( 135745 )

      So how far back do you verify?
      If I am working on gravity theory, do I need to verify everything since Newton?

  • That's faith man.

    How about actual proof?

    Seriously, how do these studies get published? I mean this "trust" thing goes both ways and is probably why some really cool results don't get published - like the famous http://en.wikipedia.org/wiki/Belousov%E2%80%93Zhabotinsky_reaction [wikipedia.org]

    From the wikipedia: "Belousov made two attempts to publish his finding, but was rejected on the grounds that he could not explain his results to the satisfaction of the editors of the journals to which he submitted his results."

    Wh

    • Imagine you're the referee for a paper on an experiment at CERN. Either you work at CERN and are not independent, or else you do not and lack access to the resources to replicate the experiment.
      • That's a very good point. I'm sure you see where I'm coming from, though: after all, the results of any experiment support or oppose the results of others. If a collider result opposes a non-collider experiment, then the non-collider experiment can be modified in light of the collider results. Sure, that would be a bit inductive and fuzzy, but it's better than just trusting. Regardless, your argument is strong and illustrates the fundamental challenges of cutting-edge discovery and research. Thank you for
        • by geekoid ( 135745 )

          His point is that many experiments are too time-consuming and expensive for a journal to reproduce. Some studies take many years.

  • by shaitand ( 626655 ) on Thursday October 17, 2013 @06:13PM (#45158915) Journal
    All researchers in the sciences have a motivation to be published, in the form of recognition, academic progress, and money. Not many of them have an incentive to stop working on looking great by producing results and instead check the work of someone else.
    • by Alef ( 605149 )

      But you gain recognition and get published if you prove someone else wrong. And your academic progress is hampered if someone shows your results to be flawed. I think you are ignoring the competitive element.

      That said, there is a problem with the current trend of grants being based strongly on the number of published papers, as it waters down the content of each paper and gets in the way of basic, long term research where there is no guarantee for "quarterly research results".

      • by shaitand ( 626655 ) on Thursday October 17, 2013 @07:34PM (#45159633) Journal
        "But you gain recognition and get published if you prove someone else wrong."

        But you get no funding from it and potentially make an enemy who now DOES have reason to scrutinize and point out your every mistake. If you aren't accomplished enough yourself your failure to replicate something isn't likely to even be published. And it isn't to the same degree. This is Dr. So-and-So, the man who did that brilliant work and discovered x,y,z impressive sounding thing vs This is Dr. So-and-So, he's never actually accomplished anything but he did a great job of failing to replicate his peer's results.

        You'd be better off in the long run pretending to replicate or even expand on the results of your peers. It isn't like they are ever going to call you out on it, you've made an ally AND made it much more difficult for either of your reputations to be harmed by a third party regardless of their claims.

        "And your academic progress is hampered if someone shows your results to be flawed."

        Yeah, but apparently it's not likely and you can select areas of study to minimize the probability. Even if someone fails to replicate your results it isn't proof that you faked them.

        "I think you are ignoring the competitive element."

        I don't think so. Most people are probably working from the assumption that work which has been accepted and passed peer review is most likely legitimate. Why spend all that time and effort in the hope that someone else is wrong, and in a way you can prove? Even if you suspect they faked something, that just means you are likely to be able to get away with it too, and as stated above there is more glory down that path.

        It's no different than essays and other academic papers. You are required to provide references to support your assertions and credit sources, but everyone knows the professor doesn't actually have time to read them. So people cite credible, uncontroversial sources that could plausibly be saying something that supports their assertions. On the slim chance you were caught, you'd just claim you accidentally misinterpreted something and be a little more cautious for the next couple. And that's if you had to say anything; the professor is far more likely to assume (s)he has better comprehension of the topic than the student and add a note to educate the poor fledgling, and might not even reduce the grade over it, depending on the topic.
  • Money (Score:5, Insightful)

    by jasnw ( 1913892 ) on Thursday October 17, 2013 @06:14PM (#45158935)

    Assuming TFA's numbers are correct, I'd bet that much of the problem is that no agency, be it government or commercial (and particularly commercial), wants to spend its money seeing if published results are reproducible. Additionally, no one ever won a Nobel Prize for excellence in reproducing others' results. Verification of results is key to science, but this is one of several aspects of doing science right that the funding agencies either don't want to, or can't (as in Congress looking over the shoulders of managers at the NSF), pay for. Everyone wants "everything, all the time" without paying for it, and this is what happens when decisions are driven by the money people (who may be scientists, to be fair) and not by the people who know what the hell is going on.

    • Re:Money (Score:5, Interesting)

      by blueg3 ( 192743 ) on Thursday October 17, 2013 @06:37PM (#45159179)

      This is spot on.

      It may be true that we spend too much time doing the initial work and not replicating results. That's not what the article shows, though.

      It conflates the reproducibility rate of publications with some idea of "trust" versus "verification". There's no evidence (presented) that this means that scientists believe what is published. The author seems to think that papers should be verified before they're published, but that's not the point of scientific publication. The publication reports what the authors did and what their results are. It is nothing stronger: it does not represent (despite authors' bombastic claims) that what they found is actually hard scientific fact. That's only accepted (in theory) when those results are reproduced. Papers about reproducing the experiments are (in theory) also published, so that a critical scientist can evaluate the body of literature about how a hypothetical scientific fact has been tested. For this reason, the first publication of some new potential fact is naturally before anyone has verified it.

      Without some evidence that paper results are being widely accepted into the "scientific canon" without verification, this is just an author being confused about science. That's a bit fair, though, because the press tends to focus on first publications (they're more interesting) and reports them as if they are fact. A scientist knows better, but the public at large generally does not. It's very disingenuous of the press -- but it sells.

      In fact, the only evidence presented sounds like the process works just fine. A first publication of a new thing in biotech is a potential huge advancement and gold mine. Investors, scientists, and engineers all seem to know that the rate of the first publication actually being something as opposed to spurious is low, so the first thing they do apparently is try to verify it and make sure it's really a thing. That's pretty much what you want to happen.

  • People just can't accept the fact that an experiment (in which they have invested time and money) can produce no conclusion, even though from the point of view of science this is a completely valid result. In today's goal-oriented world, people want every experiment to mean something, to prove or disprove some hypothesis. And so they find patterns in things which don't have any.
    • by HiThere ( 15173 )

      More to the point, negative results have a very hard time getting published, even though they are nearly as valuable as positive results.

      Also, replication of a result doesn't usually bring much in the way of kudos, even though it's an essential part of the scientific method. I won't say that a replicated result has a hard time getting published, but I will say it has a hard time of getting many inches of print.

  • If you can patent any kind of harebrained crap someone came up with in a pipe dream without having to provide a working model, that's what you get.

    As soon as someone dreams up something that might kinda-sorta work in some sorta way, he will rush to the patent office. Should someone else finally come up with a working model, he'll rip the real inventor off with it.

  • The US spends too much time dithering on "proving" new discoveries and processes before taking useful, competitive action. There is a period between discovery and generally agreed development, before extensive verification, that used to be a tremendous competitive advantage for successful companies in the US. You're first, making billions with something cheaper, faster, and better, while the competition's politico-BS "provers" enjoy their sinecure for 10-20-30 years. Now the provers have everything stopped
  • by Anonymous Coward on Thursday October 17, 2013 @06:21PM (#45159005)

    A big part of the problem is null results. Getting and reporting a null result is SUPPOSED to be good science. And in a lot of areas of science, it's OK: you'd obviously prefer to find something cool, but you don't kill your career by not finding something. But in some fields, if you do a study that doesn't find something, you can literally set your career back a decade or more. Guess what this leads to?

    I was talking to somebody who was getting her PhD in biochem. She was in the midst of a five-year study on the effects of some drug. A condition of getting her PhD was that she must publish a "substantial" peer-reviewed result, and her department had gone out of its way to define null results as not substantial. This meant that if her study found the drug wasn't effective, she didn't get her PhD; she would literally have had to start over, having wasted five years of her life.

    This is common in some areas of science, but not others.

    So the first thing to do is get rid of garbage policies like this. My understanding is that it's much more common in biology-related fields, but that might just be my bias (I'm from a physics background, so I have an admitted bias here).

    Until you fix policies like this, you will always have people getting "creative" with their statistics or just outright making up data. For some reason, a lot of these biology-related fields don't seem to care about such policies, which I just don't understand. I mean, we know these policies lead to bad behavior, but nothing is done to fix it. Maybe somebody in these areas can explain the rationale to me.

    • Re: (Score:3, Funny)

      by John Allsup ( 987 )
      That's why if you look through, say, the TRIP database of medical studies, what you see is synopses of what look to be 'it really works, honest!' stories reminiscent of fake Amazon reviews, and little in the way of 'research downers' to make it look legit. The success rate you see in these research papers is like the poll outcomes you get in stuffed-ballot-box elections in countries whose dictators don't quite understand how democracy is supposed to work.
  • Lord Forgive me, but (Score:5, Interesting)

    by Ol Olsoc ( 1175323 ) on Thursday October 17, 2013 @06:22PM (#45159033)
    I read TFA.

    Really, everyone should. Because while some of the points are good, The Economist misses the biggest one of all.

    In the University environment of today, the scientists and researchers are hamstrung by Non-Disclosure agreements. How does one share experimental information when to do so will cause you and your University great problems? One of the biggest offenders is the Biotech industry. Talk to someone, lose your funding and probably your job.

    This is just the culmination of the past several decades' shift from government-sponsored research to industry-dominated research. It's a completely understandable position: industry wants a return on its investment, and research that doesn't generate profit might be good research, might even be groundbreaking, but to the industry sponsoring it, it is a failure if they don't profit from it.

    I'm pretty certain that industry would consider completely flawed and incorrect research as successful if it generated money for the company sponsoring the research.

    So they draw the conclusion that scientists are lazy. I draw the conclusion that this is what happens when making money is the most important factor, and the scientists are bound by their contracts.

    • by Vesvvi ( 1501135 )

      Scientists and researchers are not hamstrung by NDAs. If anything things are going the other direction: university libraries are setting up self-publishing, open-access projects to disseminate the work being conducted by the researchers.

      I've only seen NDAs and similar come up in one situation: when a researcher employed by the university is a guest or collaborator with a private company. Then the company might try to introduce such things, but the university legal is very hostile to that. I can't think o

  • by fisted ( 2295862 ) on Thursday October 17, 2013 @06:24PM (#45159043)
    trust, but verify?

    We must be talking about different sorts of science, because from what I know, the simple idea rather is
    "be objective, and be sure to keep it falsifiable."
    • by HiThere ( 15173 )

      You are talking about different parts of the process. The person performing the experiment needs to be objective and keep it falsifiable. The person reading a report of the experiment should "trust but verify". (I'm not really sure about that "trust" word in there. It sort of depends on how unreasonable the finding is, and what the source is that claims it. And those that trust least will be most inclined to verify.) Remember, though, that we are talking about a population of scientists (i.e., all thos

  • One of the key problems is that replicative experiments are hard to get published in peer-reviewed scientific journals, along with any divergence by another author from the initial data of the first study, especially when the replication's results do not concur with the original study.

    It's also hard to get funding for this.

    Fix that and you fix the observed problems, which mostly crop up in certain scientific cultures that tend to discourage juniors from challenging s

  • There is a name to be made in debunking landmark studies in the biosciences that cannot be reproduced. That should be part of the scientific process.

  • by phantomfive ( 622387 ) on Thursday October 17, 2013 @06:27PM (#45159077) Journal
    Richard Feynman pointed out something similar in his Cargo Cult speech [columbia.edu]. Here is an excerpt:

    When I was at Cornell, I often talked to the people in the psychology department. One of the students told me she wanted to do an experiment that went something like this--it had been found by others that under certain circumstances, X, rats did something, A. She was curious as to whether, if she changed the circumstances to Y, they would still do A. So her proposal was to do the experiment under circumstances Y and see if they still did A.

    I explained to her that it was necessary first to repeat in her laboratory the experiment of the other person--to do it under condition X to see if she could also get result A, and then change to Y and see if A changed. Then she would know that the real difference was the thing she thought she had under control.

    She was very delighted with this new idea, and went to her professor. And his reply was, no, you cannot do that, because the experiment has already been done and you would be wasting time. This was in about 1947 or so, and it seems to have been the general policy then to not try to repeat psychological experiments, but only to change the conditions and see what happened.

    Nowadays, there's a certain danger of the same thing happening, even in the famous field of physics.

  • Too much funding is dependent on satisfying funding bodies' requirements for the quantity of published research. Failure to publish jeopardises one's research career. But funding bodies do not have the resources or expertise to verify the correctness of publications, and nor do the referees. It doesn't take a rocket scientist to work out what happens next.
  • Is that many experimental results are too resource-intensive for most people to replicate. Just try replicating the findings that support the existence of the Higgs boson.
    • And for that matter, a mathematical result whose informal proof in a published paper relies on ten other informal proofs in ten other papers, which in turn rely on others, and so on, until, to verify it, you have to check through tens of thousands of pages of informal proofs and confirm that everything can be made logically rigorous on suitable foundations.
    • by Vesvvi ( 1501135 ) on Thursday October 17, 2013 @08:15PM (#45159895)

      It's resource-intensive, but also just plain difficult. For example, publications are never a full description of an experiment, just the highlights. It takes a skilled researcher to fill in the gaps, and a second level of skill to accurately carry the experiment out.

      Looking at it from another perspective, and ignoring scientific developments that are the result of inspired genius (which I would argue are rare), every new publication is the most novel and difficult work that has been conducted to date. If it weren't, it would have been done already.

      So how can you expect someone else (who wasn't able or interested enough to carry out the work themselves) to immediately duplicate cutting-edge work based on an incomplete description? It's a bit amazing that up to 50% of publications can be replicated at all.

  • When results cannot be trusted, all "science" loses credibility.
  • Scientists go wrong...not science.

    Sheesh!

  • How Big Pharma 'Science' Goes Wrong

    FTFY.

  • by hackus ( 159037 ) on Thursday October 17, 2013 @07:52PM (#45159755) Homepage

    You NEVER trust science.

    You always verify; trust has nothing to do with it.

    That is the problem with our doctrine based science programs and institutions.

    It is all crappy trust science, and it is rarely verified.

    So if you want to do great science, don't believe anything and trust nobody.

    Find out for yourself.

    But I think that is just common sense.

    -Hack
