Science

Positive Bias Could Erode Public Trust In Science

ananyo writes "Evidence is mounting that research is riddled with positive bias. Left unchecked, the problem could erode public trust, argues Dan Sarewitz, a science policy expert, in a comment piece in Nature. The piece cites a number of findings, including a 2005 paper by John Ioannidis that was one of the first to bring the problem to light ('Why Most Published Research Findings Are False'). More recently, researchers at Amgen were able to confirm the results of only six of 53 'landmark studies' in preclinical cancer research (interesting comments on publishing methodology). While the problem has been most evident in biomedical research, Sarewitz argues that systematic error is now prevalent in 'any field that seeks to predict the behavior of complex systems — economics, ecology, environmental science, epidemiology and so on.' 'Nothing will corrode public trust more than a creeping awareness that scientists are unable to live up to the standards that they have set for themselves,' he adds. Do Slashdot readers perceive positive bias to be a problem? And if so, what practical steps can be taken to put things right?"
  • by Anonymous Coward on Friday May 11, 2012 @08:57AM (#39965525)

    Positive Bias is another word for Group Think. I guess it could also mean deception [bishop-hill.net]

  • by atlasdropperofworlds ( 888683 ) on Friday May 11, 2012 @08:58AM (#39965539)

    There are "studies", and then there is observation, modelling, prediction, and model testing, which is this thing called science. "Studies" are bullshit. Scientific research functions as it should. I believe the OP's article is just a chunk of sensationalist BS, or utterly ignorant of what science is (and is not).

  • by SirGarlon ( 845873 ) on Friday May 11, 2012 @09:29AM (#39965911)

    This is a dilemma that goes to the heart of the philosophy of government. If the majority is irrational, is it better to give them self-determination and accept that they will make frequent bad decisions, or to have the enlightened few rule them and impose better-informed decisions upon them?

    Hint: there is no correct answer. I am not an historian but as far as I know this debate between a pure democracy and some form of republic goes back to Rome and Greece.

    What is kind of weird is that the two major parties in America have developed into philosophies that are kind of opposite their names: the Democrats favor the paternalistic nanny state governed by the enlightened few (what I would call a "republic"), and the Republicans favor the ignorant mob ("democracy").

    As an aside, when America was a young nation many of her leaders advocated public education as a way to narrow the gap between the elite and the general population. That does not seem to be working out real well, though.

  • Re: (Score:3, Interesting)

    by Reibisch ( 1261448 ) on Friday May 11, 2012 @09:30AM (#39965915)

    And it ignores the fact that once a paper is published, there's no reliable mechanism beyond a standard literature search to propagate corrections to those results. I'm posting as AC because quite a few years ago I published results that I believed at the time to be correct, but that were shown to be wrong in a subsequent paper. Despite this, I'm *still* being cited in new papers while the paper that refuted mine is seldom cited. Science isn't some infallible field. We make mistakes; sometimes those mistakes are accidental, sometimes they're sloppy, and yes, sometimes they're even intentional. That doesn't reduce the validity of science, but it does require us to be more vigilant.

  • by Sique ( 173459 ) on Friday May 11, 2012 @09:37AM (#39965987) Homepage

    Just about every living creature shapes the environment it lives in. Some in very obvious ways (ants, bees, beavers, moles...), some quite subtly (grazing animals of the open plains tend to hinder the growth of shrubs and trees and thus keep the plain open). The same goes for plants, which change their immediate environment in ways that hinder competing species: shading the ground, changing water levels and the chemical properties of the soil, and fending off enemies.

  • by fearofcarpet ( 654438 ) on Friday May 11, 2012 @09:38AM (#39966007)

    There are "studies", and then there is observation, modelling, prediction, and model testing, which is this thing called science. "Studies" are bullshit. Scientific research functions as it should. I believe the OP's article is just a chunk of sensationalist BS, or utterly ignorant of what science is (and is not).

    That is not really what TFA is talking about. Daniel Sarewitz is re-phrasing a long-known problem with "studies," as you call them, which is that complex systems are--by definition--too complex to study as a whole. I am a physical scientist, which means that I typically make or measure something in a well-controlled experiment and then change variables in order to test a hypothesis. I can basically publish a paper that says "we tried really, really hard to find it, but it wasn't there."

    In the life sciences, they are trying to answer vague cause-effect questions like "does this drug affect a particular type of tumor more than a placebo?" Thus researchers in those fields have to create models in which they can control variables. He gives the example of mouse models, which are obviously imperfect models for human physiology; how imperfect is the question. The creeping phenomenon that he is addressing is the tendency to relax the standards for what counts as positive evidence--and I'm grossly oversimplifying--by waving your hands around about how mouse models are imperfect, but that there is definitely "a statistically significant trend."

    The root cause is simply the ridiculous amount of pressure that life-science researchers are under to publish, which requires results, because their methodology is standardized. Those poor bastards can spend eight years on a PhD project that goes nowhere, or burn four years of their tenure clock figuring out that their experimental design was flawed. *Poof*: no funding, no tenure, no degree, time to consider a new career. That sort of potential downside creates the sort of forced optimism that TFA describes.

  • by elsurexiste ( 1758620 ) on Friday May 11, 2012 @09:44AM (#39966067) Journal

    I remember a few days ago someone submitted a story about piracy of "The Avengers" being low compared to the potential profits at stake. A few high-ranked comments were like "This is yet another proof that [insert common /. parlance here]". I saw very few comments that stated the most plausible reason: a camcorded action film, with crappy audio and a shaky image, can't compete against the real thing. I thought the same thing: confirmation bias.

    People do it all the time. If something can somehow support their views (especially if they don't RTFA) they'll use it as yet more confirmation. "I still don't get why this piece of evidence is discarded by everyone else! They must be delusional or have bad intentions." For example, I imagine this article will be used as evidence for: lack of funding, falling standards in the US, the demise of education, lack of scientific reasoning (maybe they'll even extend it to scientists themselves), and other common /. utterances. I wonder how many of them will actually say what I found out after RTFA...

    So, everyone is playing the same game, and scientists are no exception. But hey, that study has numbers on it. At least you can try to replicate the findings, if only the entry barrier wasn't so high: these tests are *hugely* expensive. More collaboration may be a good idea. Shared laurels are better than none, right?

    P.S., a nice article on confirmation bias (and other goodies) here [youarenotsosmart.com].

  • by ceoyoyo ( 59147 ) on Friday May 11, 2012 @09:44AM (#39966069)

    "also takes into account whether they are consequential or not"

    That makes the problem worse. We need to evaluate papers based on whether they are good science or not, and not publish the ones that are bad science. Currently the "negative" results, which are actually inconclusive results, are not published because they are, well, inconclusive. Unfortunately a lot of the "positive" results are also inconclusive, but they ARE published. The solution is not to publish more "negative" results; it's to stop publishing the flawed "positive" ones.

  • Re:data point (Score:4, Interesting)

    by Hatta ( 162192 ) on Friday May 11, 2012 @09:44AM (#39966077) Journal

    Which is exactly what we need to do. We need an entire independent arm of the sciences dedicated to confirming established results. This is something that needs to occur separate from the pressure to come up with novel results to please grant reviewers.

  • by BobMcD ( 601576 ) on Friday May 11, 2012 @09:50AM (#39966141)

    Saying 'science is a method' is like saying 'Christ was a Jew'. True, but it doesn't change what happened.

    Science was an idea designed to seek empirical truth. To find things in such a way that those who followed after could find them again. Then people got a hold of it and started using it as a means to control one another.

    Christ (even from the atheist point of view, so bear with me) had a simple message of love being service to your fellow man. Then people got a hold of it and we get monstrosities like the Crusades.

    That's where the 'HAS become' part of the above phrase kicks in...

  • Re:data point (Score:4, Interesting)

    by fearofcarpet ( 654438 ) on Friday May 11, 2012 @09:51AM (#39966153)

    I spent some time in the Chemistry department at a fancy Ivy League school that seemed designed to crush the spirits of biologists. While we were off drinking beer and playing volleyball against the Physics department, they were toiling night and day on multi-year projects that might or might not generate publishable results. The sick thing is that the outcome was totally decoupled from the abilities of the researchers; it was more like a test of stamina and luck. I saw talented, brilliant people turn into bitter husks. Some gave up on research and wound up teaching at private colleges; others left science completely, and not into fields that used any of their expertise. It seems like a lottery where those who get lucky (or stab the most backs) go on to academic positions at top-tens and the rest end up rocking back and forth in the corner mumbling to themselves all day.

  • by silentcoder ( 1241496 ) on Friday May 11, 2012 @10:02AM (#39966277)

    There's a brilliant line on this in "The Science of Discworld" - I won't pretend I can quote it 100% accurately off the top of my head, but it goes something like this:
    "In the media you will often read that a certain scientist is trying to prove a theory. Maybe it's because journalists are trained in journalism and don't know how science works or maybe it's because journalists are trained in journalism and don't care how science works - but a good scientist never tries to prove her theory, a good scientist tries her best to disprove her theory before somebody else does it for her, failing to disprove it is what makes a theory trustworthy."

  • by radtea ( 464814 ) on Friday May 11, 2012 @10:50AM (#39966981)

    During most of my career in pure physics I got negative results, which were always hell to publish. One of the things you'd hear a lot was, "Don't worry, a negative result is just as good as a positive result!"

    At some point I started telling friends and colleagues who got positive results, "Don't worry, a positive result is just as good as a negative result!" Which is false, as proven by the fact that no one but me ever said it.

    Negative results are hard to get published, but far more common than positive results. Furthermore, on the road to any positive result there are going to be lots of negatives. Even today, working in an area where true positives are much more common, I try to put a section in every paper entitled something like "Things That Did Not Work So Well", because any experiment or computation or theory is likely to involve some dead ends that seemed like a good idea at the time. If scientists don't report on them, they will continue to seem like good ideas to people who haven't tried them, who will then waste effort trying them, and fail to publish when they don't work...

  • by jpate ( 1356395 ) on Friday May 11, 2012 @10:56AM (#39967049) Homepage

    This is how positive bias works. Those 99 negative outcomes need to be reported to show that the one false positive is a false positive, but they aren't reported.

    I just want to point out that it's not quite so straightforward. A null result for an effect is not, in and of itself, negative evidence for that effect; it's just a lack of evidence for that effect. It's always possible that a different set of materials, a larger sample size, an additional control, more sophisticated stats, or any number of methodological modifications would succeed in finding an effect. 99 null results with bad materials are not evidence against even a small number (not one!) of positive results with good materials. Null results are under-reported because they are much more ambiguous, not (only) because they are harder to sensationalize.
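The arithmetic behind this exchange is easy to sketch. Here is a minimal simulation, assuming made-up but plausible numbers (the base rate of true hypotheses, statistical power, and significance threshold below are illustrative assumptions, not figures from TFA or the Ioannidis paper), of what happens when only "positive" results get published:

```python
import random

random.seed(42)

N = 100_000     # hypotheses tested across a field (assumed)
P_TRUE = 0.10   # fraction of hypotheses that are actually true (assumed)
POWER = 0.80    # chance a real effect reaches significance (assumed)
ALPHA = 0.05    # chance a null effect reaches significance anyway

published_true = published_false = 0
for _ in range(N):
    is_true = random.random() < P_TRUE
    # A result is "significant" with probability POWER if the effect is
    # real, and probability ALPHA (a false positive) if it is not.
    significant = random.random() < (POWER if is_true else ALPHA)
    if significant:  # only "positive" results get published
        if is_true:
            published_true += 1
        else:
            published_false += 1

false_share = published_false / (published_true + published_false)
print(f"{false_share:.0%} of published positive findings are false")
```

With these inputs the expected share of false findings among published positives is 0.9*0.05 / (0.1*0.8 + 0.9*0.05) = 36%, even though every individual test used the conventional 5% threshold. The unpublished null results are exactly the information a reader would need to work that out.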

  • by slew ( 2918 ) on Friday May 11, 2012 @11:07AM (#39967173)

    I think the real problem today is that journalists think they can be stars right out of the gate. In the old days, you started out as a fact-checker before you could even get a word that you wrote published (usually anonymously at first). By the time you got a byline you had come up through the trenches and seen all of the behind-the-scenes mistakes that the "star" writers made. Today you have a blog and no editor. Not only does quality journalism go out the window, but these new so-called journalists never learn the consequences of some of their habits, because they haven't seen others make those mistakes (and haven't been motivated, as a fact-checker is, to find the problems).

    Of course this "star-at-birth" issue isn't just a problem with journalists, but with many professions (e.g., scientists, chefs, programmers, etc.)...

  • by matthewv789 ( 1803086 ) on Friday May 11, 2012 @11:35AM (#39967469)

    Yup, that's the crux of the problem. While it may be true, as others say below, that publication bias against negative results occurs in all fields (such as physics) regardless of study funding, what we are seeing now is the influence of pharmaceutical industry funding in the clinical trials used for FDA approval of drugs (that is, a company funding the trial of its own drug).

    Specifically, drug studies funded by pharmaceutical companies are four [newscientist.com] times [healio.com] more likely [latimes.com] to show a positive benefit than ones funded by neutral sources. This is a problem because nearly two-thirds of clinical trials used for FDA approval are now industry-funded.

  • by Belial6 ( 794905 ) on Friday May 11, 2012 @12:09PM (#39968007)
    The 'facts are open to interpretation' argument seems to run like this:
    1) Statements are either facts or opinions.
    2) Facts are statements that are correct.
    3) Thus all other statements are opinions.
    4) Opinions can't be 'wrong' because they are not facts, and are inherently subjective.
    5) Thus all opinions are correct.
    6) Statements that are correct are facts.
    7) Opinions are facts.
    This leads to:
    1) Red is blue.
    2) "Red is blue" is incorrect.
    3) Since red is not blue, "Red is blue" is an opinion.
    4) "Red is blue" can't be wrong because it is an opinion.
    5) Thus "Red is blue" is correct.
    6) Since "Red is blue" is correct, it is a fact.
    7) "Red is blue" is a correct fact.
