Science

Positive Bias Could Erode Public Trust In Science

ananyo writes "Evidence is mounting that research is riddled with positive bias. Left unchecked, the problem could erode public trust, argues Dan Sarewitz, a science policy expert, in a comment piece in Nature. The piece cites a number of findings, including a 2005 paper by John Ioannidis that was one of the first to bring the problem to light ('Why Most Published Research Findings Are False'). More recently, researchers at Amgen were able to confirm the results of only six of 53 'landmark studies' in preclinical cancer research (interesting comments on publishing methodology). While the problem has been most evident in biomedical research, Sarewitz argues that systematic error is now prevalent in 'any field that seeks to predict the behavior of complex systems — economics, ecology, environmental science, epidemiology and so on.' 'Nothing will corrode public trust more than a creeping awareness that scientists are unable to live up to the standards that they have set for themselves,' he adds. Do Slashdot readers perceive positive bias to be a problem? And if so, what practical steps can be taken to put things right?"

  • by Karmashock ( 2415832 ) on Friday May 11, 2012 @09:12AM (#39965681)

    They're overstating precision, which, rather than being a forgivable error, is an elementary mistake no trained scientist should ever make.

    A VERY basic concept they teach at the lowest level of science education is the distinction between accuracy and precision. This is science 101.

    Accuracy is whether or not a given conclusion is correct.
    Precision is the degree of specificity of a measurement.

    Typically you run into problems on complex subjects because they overstate the precision of their data or their ability to analyze the data.

    This can boil down to simple things like significant digits.

    For example, say I'm measuring volume to two significant digits in a giant data set with thousands of measurements. When and if I average those numbers, the final average can't have more than two significant digits. That sounds elementary, but you see this error made in some big studies. You'll have a situation where something is being measured in a crude sense by many sources, and then in the analysis a much higher degree of specificity is implied.

    Often that degree of specificity is required to make certain conclusions, which is why they break the rule. This is lazy and a breach of scientific ethics. What they need to do is collect the data all over again, this time to the level of specificity they need.

    Simply saying it's too hard to collect the data properly, so they're going to make assumptions, is not reasonable or ethical. I suppose you could do it so long as you kept an asterisk next to the data and the findings to make it very clear throughout that the conclusion is a guess and not in any way empirical science, since at some point people were guesstimating results.
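
    A minimal sketch of the significant-digits rule being described here, in Python; the round_sig helper and the sample measurements are hypothetical, made up for illustration, not from any standard library:

        import math

        def round_sig(x, sig=2):
            """Round x to sig significant digits."""
            if x == 0:
                return 0.0
            return round(x, sig - int(math.floor(math.log10(abs(x)))) - 1)

        # Volumes known only to two significant digits
        measurements = [1.2, 1.3, 1.1, 1.4, 1.2]
        avg = sum(measurements) / len(measurements)
        print(avg)             # ~1.24, plus floating-point noise
        print(round_sig(avg))  # 1.2 -- reported at the precision of the inputs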

  • Re:Wait, what? (Score:3, Informative)

    by AlecC ( 512609 ) <aleccawley@gmail.com> on Friday May 11, 2012 @09:26AM (#39965851)

    The trouble is that most such "X causes cancer" statements come from the media themselves, recklessly shortening research results that say "consumption of X correlates with a positive increase in cancer," where the nature of the correlation is unknown and the increase is very small. So it is all too often not an accurate representation of the research. In particular, there can often be a Chinese-whispers effect: the researcher publishes a paper, the university PR department publishes a précis edited for PR purposes, a popular science journal then reports it with its own bias, and it is then taken up by the mainstream media and truncated again.

    A particular example, as often reported by Ben Goldacre, is the Daily Mail, which seems to summarise everything as either causing or curing cancer, and will report the same substance on both sides within days.

  • by ceoyoyo ( 59147 ) on Friday May 11, 2012 @09:50AM (#39966139)

    "When and if I average those numbers the final average can't have more then two significant digits."

    Yes, the average can have more precision than the individual measurements. That's actually kind of the point of an average. It can't improve accuracy though, for the most common definitions of accuracy.

    Your definitions of accuracy and precision are sort of right, but also sort of misleading. And the way you use specificity is incorrect. But as you correctly point out, a lot of working scientists are a bit fuzzy on all these concepts too.
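
    A quick numerical illustration of that point, sketched in Python with NumPy; the noise level, bias, and sample sizes are arbitrary assumptions. Averaging n noisy measurements shrinks the standard error of the mean by roughly 1/sqrt(n), so the mean is more precise than any single reading, but a systematic offset survives no matter how much you average:

        import numpy as np

        rng = np.random.default_rng(0)
        true_value = 1.24
        sigma = 0.05   # per-measurement random noise (assumed)
        bias = 0.10    # systematic error that averaging cannot remove

        for n in (10, 100, 10000):
            samples = true_value + bias + rng.normal(0.0, sigma, size=n)
            sem = sigma / np.sqrt(n)   # standard error of the mean
            print(f"n={n:6d}  mean={samples.mean():.4f}  std. error ~ {sem:.4f}")

        # The standard error (precision) shrinks with n,
        # but every mean stays about 0.10 too high (accuracy).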

  • by next_ghost ( 1868792 ) on Friday May 11, 2012 @10:15AM (#39966463)

    Positive bias works a little differently than you think. First, a few definitions. A positive result looks like this: "We tried X on Y and it works." A negative result looks like this: "We tried X on Y and it doesn't work." Which one looks more interesting? That's right, the positive one. Now remember that outside mathematics, science is stuck with probabilities. There's always a very small chance of false positives (and also of false negatives, but those are less of a problem).

    So let's suppose we have an experiment which should come out negative with 99% certainty; there's a 1% chance of a false positive. Now let's have 100 scientists independently perform the experiment. Chances are that one scientist will get a false positive. That scientist will then publish a paper, while the other 99 will give up and research something else without publishing, because of the impression that negative results are not interesting.

    This is how positive bias works. Those 99 negative outcomes need to be reported to show that the one false positive is a false positive, but they aren't. It's not some kind of intentional fraud, but merely a consequence of an overt obsession with original and sensational results, dictated by those who pay for research. It doesn't matter what the research actually means, only that it can be phrased as "We tried X on Y and it works."
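
    A toy Monte Carlo of that exact scenario, sketched in Python; the 1% false-positive rate and 100 independent labs come from the numbers above, everything else is an assumption:

        import random

        random.seed(42)
        N_LABS = 100
        FALSE_POSITIVE_RATE = 0.01  # testing a true-negative hypothesis
        N_RUNS = 10_000

        runs_with_publication = 0
        for _ in range(N_RUNS):
            positives = sum(random.random() < FALSE_POSITIVE_RATE
                            for _ in range(N_LABS))
            if positives > 0:          # only positive results get written up
                runs_with_publication += 1

        # Analytically: 1 - 0.99**100 ~ 63% of the time, at least one lab
        # "discovers" an effect that isn't there, while the ~99 negative
        # results that would expose it never get published.
        print(f"{runs_with_publication / N_RUNS:.0%} of runs produce a paper")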

  • Re:Wait, what? (Score:5, Informative)

    by ArcherB ( 796902 ) on Friday May 11, 2012 @10:22AM (#39966549) Journal

    You should thank God that FoxNews exists

    Fox is the only "news" organization, ever, that went to court in the US to prove that they don't have to report news.

    You are a complete and utter moron if you give whatever they say any credibility at all. This is not ad hominem. This is their own claim that whatever they report does not have to be based on any facts at all. None. Zero. Nada. They can make shit up out of whole cloth if they want. They have a Ruling saying they can.

    --
    BMO

    How many news organizations report false stories? Here's a hint: ALL OF THEM. It happens. We are human. It's USUALLY a mistake.

    The difference is that when FoxNews makes a mistake, they are sued by someone like yourself, only with extra money lying around, trying to silence the opposition and run FoxNews out of business. Did anyone sue to strip NBC's license when they released a modified 911 call in the Trayvon Martin shooting? Did CBS News have to go to court to defend its right to report stories that turn out to be false after that blatantly false Bush National Guard story?

    Why is FoxNews different?

    Also, the court case you are referring to was not about FoxNews but a local Fox affiliate. The affiliate was sued by a former reporter who didn't like the fact that they were made to include Monsanto's side of the story, which they claimed to be false. The part you are referring to, where the court supposedly said news organizations were allowed to lie, did not come from FoxNews or the court, but from a group called Project Censored, which from what I can tell is an anti-government, anti-media, anti-corporation organization. Your kinda people.

  • Re:Wait, what? (Score:5, Informative)

    by jo_ham ( 604554 ) <joham999@noSpaM.gmail.com> on Friday May 11, 2012 @10:38AM (#39966815)

    "Scientists" are not forcing anyone to do anything - that's what politicians do. We just do the science. Now, if we end up discovering, for example, that CFCs are damaging the ozone layer, or that PbEt4 in gasoline is poisoning the air then we bring that to the attention of people who can make a decision on what to do about that. We can certainly offer suggestions of what to do about it - stop using CFCs (and develop alternatives) and stop using PbEt4 in gasoline (and work on alternatives) etc, but ultimately sometimes changes have to be made that people aren't going to like.

    We don't enjoy "telling people what to do" or set about to discover things that will allow us to dictate to people what to do - ideally we want to find solutions to things that have a minimal impact on people's lives.

    We're not in this to control people - we just do science. We don't want to "force" people to do anything, but sometimes what we discover necessitates change in order to benefit everyone as a whole (like removing lead from gasoline, or switching to non-CFC propellants and refrigerants). That's just life I'm afraid.

  • by Belial6 ( 794905 ) on Friday May 11, 2012 @11:40AM (#39967529)
    Bad "journalism" started WAY before blogs.
  • by Anonymous Coward on Friday May 11, 2012 @11:49AM (#39967681)

    To me, the FTL-neutrino episode seems a perfect example of how NON-SCIENTISTS created the expectations that scientists are unable to live up to.

    The report was basically "Hey, guys, this looks odd, but we've corrected for everything we could think of. You may want to see if you can replicate this".

    Nobody else managed to replicate it.

    MEANWHILE, the original team was being told how they had it wrong. They changed their process to account for this and still saw some residual difference. Another attempt by another team showed no effect, so the original scientists CONTINUED to look for what could be the cause, if not FTL neutrinos.

    They then found that the 50ns difference could be accounted for if the connectors were not tight.

    NOT claiming, at that time, that this was the cause of the discrepancy, but rather that:

    a) the couplings were loose NOW; they hadn't been checked THEN, and could have loosened since then
    b) an effect of this is a slight delay, enough to remove the difference they'd seen

    All absolutely fine and no "positive bias" AT ALL.

    But what was the public saying about it? What were the press saying about it?

    THAT debacle (and how it's ignored by the parties involved in it) is a perfect example of the problems science has with people who aren't scientists.
