
Scientists Themselves Play Large Role In Bad Reporting 114

Posted by samzenpus
from the but-I-have-people-skills dept.
Hugh Pickens writes "A lot of science reporting is sensationalized nonsense, but are journalists, as a whole, really that bad at their jobs? Christie Wilcox reports that a team of French scientists have examined the language used in press releases for medical studies and found it was the scientists and their press offices that were largely to blame. As expected, they found that the media's portrayal of results was often sensationalistic. More than half of the news items they examined contained spin. But, while the researchers found a lot of over-reporting, they concluded that most of it was 'probably related to the presence of "spin" in conclusions of the scientific article's abstract.' It turns out that 47% of the press releases contained spin. Even more importantly, of the studies they examined, 40% of the study abstracts or conclusions did, too. When the study itself didn't contain spin to begin with, only 17% of the news items were sensationalistic, and of those, 3/4 got their hype from the press release. 'In the journal articles themselves, they found that authors spun their own results a variety of ways,' writes Wilcox. 'Most didn't acknowledge that their results were not significant or chose to focus on smaller, significant findings instead of overall non-significant ones in their abstracts and conclusions, though some contained outright inappropriate interpretations of their data.'"
  • by RevWaldo (1186281) on Thursday September 13, 2012 @08:15AM (#41322189)
    The article was on quantum mechanics fer chrissakes!

    • by Livius (318358)

      Let's face it, even the scientists don't know what quantum spin is all about.

      • The ones who know maths do, but they can't describe it to the ones who don't in any other way.
      • "Let's face it, even the scientists don't know what quantum spin is all about."

        MRI scans [wikipedia.org] couldn't exist without a thorough understanding of what quantum spin states are. Ditto for NMR spectroscopy [wikipedia.org].
    • It's only Natural (Score:4, Insightful)

      by happy_place (632005) on Thursday September 13, 2012 @08:40AM (#41322351) Homepage
      There are a number of reasons scientists spin their work.

      1. Science is quite boring. By nature it's supposed to be objective, logical, and devoid of feelings. But scientists themselves are not typically boring people; they're humans, and humans are emotional beings.

      2. Scientists aren't communications experts and suck at making dry discipline accessible to the public. Never was this more obvious than when I was in college. How many brilliant researchers really sucked at teaching? Pretty much most of them.

      3. Scientists want to think their work matters, and therefore are inclined to extrapolate applications of their science to the public. When those applications get reported as a sure thing, then an exaggeration is bound to happen.

      4. And of course, science that can be shown to be of great public benefit gets funding. Cha-ching!

      • by jhoegl (638955) on Thursday September 13, 2012 @08:49AM (#41322423)
        3 and 4 are the main reasons; 1 is subjective and 2 is outright wrong. If 2 were correct, they wouldn't know how to spin things in such a way as to hide the results the way they do.
        3 and 4 are the most concerning, as that is what peer review is for, and that is where it fails due to the large volume of data versus reviewers' time.
        So, it is abused by those that just want money to do stupid things.
        • by Anonymous Coward

          I suspect that peer review actually encourages exaggeration of importance. After all, the reviewers are in the same field as the study author, and therefore inclined to believe that the niche is larger than it is. In my experience, reviewers have a much more positive response to a study that claims to have profound implications for [x] than to a study that reports findings and interprets them in the context of prior work. Reviewers are also under funding pressure, and the more important their niche is, t

        • by Anonymous Coward

          While 4 is a problem, you have to consider the audience the article was written for. If the article is a scientific journal, which is what most scientist are writing articles for, then the spin is readily and easily recognized by other researchers as a kind of forecasting where this research might lead, rather than solid statements about the current work. In this case it may appear to be spin, but the intended audience knows what it is. On the other hand, if the article is one for the general public, the

        • And weighing against the will to cheat is the fact that most scientists are honest people who want to advance our knowledge. You generally don't become a scientist if you are just out to make money by any unethical means possible. If you're okay with lying in order to get fame and fortune, you are probably a lawyer, politician, salesperson, or executive. You might start off honest and then change, of course.

          There's also the fact that few scientists are in a position to lie about their results and not
      • by rtaylor (70602) on Thursday September 13, 2012 @08:51AM (#41322435) Homepage

        5. It's possible that scientists who include spin and get good news coverage receive additional funding the next year. Those who don't may not, and eventually end up an assistant to someone who does spin.

        No idea if the above is true, but if our carrot/stick system is set up this way, then spin is guaranteed.

        • They're talking about paper abstracts. These are (usually) the first thing that the reviewer reads and set their frame of mind for the rest of the paper. A good (meaning interesting, not necessarily accurate) abstract means a higher probability of the paper being accepted. It also means that it is more likely to be cited when people are thinking 'I need to cite a paper about this, but I'm not going to reread them all to work out which one makes the most sense here'. Both having papers accepted and havin
        • There is a lot of good science that sounds boring to the public, and more importantly to the funding agencies, so the investigators try to make it sound more exciting.

          An interesting physics question about how a wave function collapses when a measurement is made becomes "quantum teleportation". Using X-ray pulses to saturate an L-shell (M?) transition in Aluminum becomes "transparent aluminum". A new technique for measuring fast chemical reactions on a surface becomes a "breakthrough for hydrogen power".

      • by crazyjj (2598719) *

        4. And of course, Science that can be show to be of great public benefit gets funding. Cha-ching!

        I would say number 4 should be at the top of the list--in 30pt. font and flashing bright red.

        Science is supposed to be objective, above such matters as grant-whoring and self-promotion. But if such a creature actually exists, I've never met it myself.

      • by poity (465672)

        Take evolution for example. I have rarely read or heard of scientists describing evolution in the most mundane but factually correct way -- the genetic change within a species resulting from natural selection*, a process that is merely the dying off of lineages that could not cope with the environmental conditions, or could not compete with other lineages -- except in textbooks. I do, however, remember guest scientists on documentaries and nature magazines wax poetically about a species' epic struggle of su

      • by quantaman (517394)

        Not having read the article one also has to consider that there's likely a selection bias at work.

        Say there are 10 studies for a journalist to choose from, 9 are all very cautious in their interpretation and communication, the remaining one plays up their result as much as possible. Which do you think is going to make the news?

        The problem with science and the news is that its news, stuff we already know isn't news, it's the new and surprising stuff that's news, unfortunately in science the new and surprisin

      • 5. (or 6. including rtaylors point): Journal editors and referees frequently don't read past the abstract of submitted articles. Therefore scientists frequently say something attention-grabbing in the abstract simply to induce the editor to read further. I guess they hope that once the editor realizes the abstract distorts the results, they might have found something else in the article that they like.
      • by daver00 (1336845)

        You missed the main point: Scientific research needs to be relevant in order to qualify for funding.

    • by hweimer (709734)

      That's nothing, my most recent paper has spin right in the title [doi.org]!

      • by mcgrew (92797) *

        I found that short abstract interesting, but reading it I think I know one reason why science reporting sucks -- most reporters can't read at that level. Hell, most reporters would have trouble with the average Wikipedia post about any facet of science, let alone a PhD level paper.

  • Just make it standard for science reporters or editors for the science section to ignore the abstract entirely.

    • But then they would have to read the paper itself! Good idea.
      • by Bryansix (761547)
        Considering most journalists just reword releases from AP or Reuters, I think getting them to read a 150 to 800 page scientific report is basically hopeless.
    • Read much? It's the press release, not the abstract, genius.

      And the problem is the reporter, not the scientist. I have read countless news article headings that had little to do with what the article reported, and then even less than what the actual research paper stated. News is about getting eyeballs, either through subscriptions for papers or ratings for news shows.

      Researchers do not need to gin up their research to gain funding. The agencies that provide funding are run by actual researchers th
      • First, fu for being a dick... no really... drink molten glass.

        Second, researchers don't need to exaggerate? Then why do they... frequently?

        This has been an ongoing and system wide issue in the halls of science.

        Do journalists exaggerate as well? Oh god yes... more often than not, frankly. But it's less acceptable for scientists to do it. And in any case, my only suggestion was that the abstract be ignored and made uncitable as a source. So the paper proper can be cited. But the abstract cannot be cited

    • by Anonymous Coward
      The thing I don't get, and maybe it is field specific, is every time I come across some story that seems to defy the laws of physics or includes some blatant claim that makes the work sound like an instant Nobel prize or makes it sound like the work originated what is actually a whole, established field or work, I look up the abstract and see none of those claims are there. This doesn't seem like a memory bias thing for me, as I can't think of any counter examples, although there is probably a bias in whic
  • by Anonymous Coward on Thursday September 13, 2012 @08:15AM (#41322197)

    Whereas the mundane gets nothing. For every person murdered, or in a car accident, there are thousands in the area who had a humdrum day. For every house that burns down, thousands don't.

    People who hear about these bad things and think the world is going to heck, are forgetting that nobody cares to hear about nothing happening.

  • by Rogerborg (306625) on Thursday September 13, 2012 @08:19AM (#41322215) Homepage

    Fund science like you fund business, and it becomes an exercise in marketing and hot topic buzzwords.

    OK, it might take more energy to make a solar panel than we'll ever get back from it, but look at the economies of scale that we're leveraging!

    • by Anonymous Coward

      Let's face it, to be successful in one's lifetime in any field requires some sort of self-promotion. I'm sure having a well known name makes it a LOT easier to get funding, tenure, book deals, etc ...

    • Re: (Score:3, Insightful)

      by Anonymous Coward

      As someone who is a Ph.D. student and research assistant, "whoring for funding" is pretty much SOP. It's pathetic and I hate it.

      • by oh_my_080980980 (773867) on Thursday September 13, 2012 @09:35AM (#41322895)
        Choose another profession. As someone who was a graduate research assistant, we all knew grant writing was part of the job. You want to keep doing research then you need to apply for grants.
        • Choose another profession. As someone who was a graduate research assistant, we all knew grant writing was part of the job. You want to keep doing research then you need to apply for grants.

          What a horrible, defeatist attitude. I can't stand bugs in software (the vast majority exist because low standards are cheap). So I should choose a different profession?

          As someone who was a graduate research assistant, we all knew grant writing was part of the job. You want to keep doing research then you need to apply

          • The difference is that bugs are specifically instances where programming breaks down and doesn't work as intended. Grant writing is a part of how research is designed to work. An academic researcher who doesn't like to write grants is more like a programmer who doesn't like to type - it's not the point of the job, but it IS a necessary task to do the job.
            • by Urza9814 (883915)

              Replace 'bug' with 'poorly designed software' then.

              Just because it's part of the design doesn't mean that design is a good one. You won't ever find a better way of doing things if you don't see anything wrong with the current one.

          • I've known since 1978 that "bugs were part of the job" and yet I persist.

            I'd go further than that and say bugs ARE the job; when they stop being reported, your product is dead, and your business/corporate department probably died earlier than your product. Note that by "bugs" I mean the software/documents don't do/say whatever the guy spending the money wants them to do/say. As I'm sure you know, a professional works on the basis that giving the customer/boss what they want does not necessarily mean giving them what they ask for, so right there you have your first major "bug" to iron

        • by jerpyro (926071)

          I did choose another profession. Do I still burn for physics research and progress? Yes. Did I consider myself above the petty politics that are involved with getting funding? Yes. Did ANY of my classmates that originally set out to do research in Physics end up doing that? No. A couple of patent lawyers, a couple of quantitative analysts, a few went engineering, and some went in to IT, but with the politicking at the labs and the sensationalized self-promotion that you have to do, it feels like bein

          • by Anonymous Coward

            My advisor is Chinese and has a job in China in addition to his one here. He knows scientists in virtually every country where there is science. Just last week, he said that he believes the (research/science) system in the US is the best in the world and that's why people want to come here. I take his statement very much to heart: I was convinced it was only a matter of time before he left his job in the US entirely and returned to China. What he said makes it very clear he will not do that.

            So, things h

            • I'm not an international researcher but I'd agree that the US is high on the list of "research friendly nations", the science the US actually does is quite a contrast to the popular culture it projects, not to mention infinitely more valuable to the rest of the world.
    • Re: (Score:2, Insightful)

      by jellomizer (103300)

      Business when done properly will have a profitable result.
      Science when done properly will have either a positive or negative result.

      The scientific process for the Facebook Generation...
      I have this crazy idea.
      How to measure if my crazy idea works.
      Let's run tests that measure my crazy idea.
      Do the tests match my expected results?
      If (Not even close) { Your idea was really crazy, try another one }
      If (close) { Your idea may have some backing but will need to be tweaked }
      If (spot on) { These results may be a flu
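The pseudocode above can be rendered as a runnable sketch. To be clear, this is a hypothetical illustration of the commenter's joke, not anything from the study: the function name, the relative-error measure, and the tolerance thresholds are all made up for demonstration.

```python
def judge_idea(expected, measured, tolerance=0.05):
    """Compare a measured result to an expectation, per the pseudocode above.

    The relative-error metric and the tolerance bands are arbitrary
    placeholders, not a real statistical test.
    """
    error = abs(measured - expected) / abs(expected)
    if error > 10 * tolerance:
        # "Not even close"
        return "Your idea was really crazy, try another one"
    if error > tolerance:
        # "close"
        return "Your idea may have some backing but will need to be tweaked"
    # "spot on" -- suspiciously good results deserve replication, not a press release
    return "These results may be a fluke; replicate before celebrating"

print(judge_idea(expected=1.0, measured=3.0))
print(judge_idea(expected=1.0, measured=1.2))
print(judge_idea(expected=1.0, measured=1.01))
```

A real version would of course replace the ad-hoc thresholds with an actual significance test, which is rather the point of the parent's joke.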

    • by cheesecake23 (1110663) on Thursday September 13, 2012 @01:01PM (#41325413)

      OK, it might take more energy to make a solar panel than we'll ever get back from it, but ...

      Will you JUST FUCKING STOP spreading this lie? The energy payback time for photovoltaic modules according to most studies is between 1-4 years [nrel.gov], depending on the material and manufacturing process used. Their technical lifetime is 25 years or more.

      (I know I'm late to the party and hardly anyone will read this, but this is for the three of you who will.)

  • In other news... (Score:5, Insightful)

    by Comboman (895500) on Thursday September 13, 2012 @08:25AM (#41322253)

    So basically, most reporters just regurgitate press releases rather than doing any of that actual journalism stuff. That's not unique to science/medical reporting. It happens in political reporting, business reporting, hell even sports reporting. The bad science reporting is just more obvious because it's easier to debunk.

    • by Anonymous Coward

      The bad science reporting is just more obvious because it's the local hobby.

      All bad reporting is easy to debunk by anyone who takes a little time to get mildly familiar with the subjects. You notice bad science reporting because it is your hobby. I know that I do not notice bad sports reporting because I pay less attention to sports than I do to a youtube video of a lava lamp.

    • by erichill (583191)
      I've seen many Slashdot posts that are copy/pastes of press releases, so what's new. I follow eurekalert.org, and have been really appalled at times at the low quality of the reporting.
  • by Advocatus Diaboli (1627651) on Thursday September 13, 2012 @08:29AM (#41322273)
    I remember writing a post about this phenomenon about a year ago. The short version of the story is that over the last 30-40 years, universities and research institutes have increasingly recruited "scientists" with strong tendencies towards showmanship, fraud, lying and bullshitting. This change is largely due to the changing nature of incentives as well as methods of evaluation and promotion in these institutions. Peer reviewed research and grants are probably the biggest culprit. Here is the link: http://dissention.wordpress.com/2011/02/06/why-all-publicised-breakthroughs-are-lies/ [wordpress.com]
    • Spoken by a complete imbecile. Fraud will not work in peer reviewed work, let alone grant applications - it's too easy to spot. Universities recruit scientists that have a strong publishing background, i.e., they publish lots of research. Any promotion is done by the University.
      • by gringer (252588)

        Fraud will not work in peer reviewed work, let alone grant applications - it's too easy to spot

        Fraud, at least in the form of bending the truth, is common in both peer reviewed work and in grant applications (and particularly encouraged in the latter). The reality of funding is that scientists need to be increasingly devious in order to make the funding body believe that their work is more important and deserves to be funded over that other group of scientists who are doing similar work (and engaging in similar deception).

        I have had trouble thinking of an appropriate solution to this. Most research t

      • by schrall (1361555)
        You obviously never worked inside an academic research department. Just read RetractionWatch [wordpress.com] to have a daily account of how peer review completely fails to detect fraud and bullshitting most of the time. Plagiarism, image manipulations, data manipulation. Even creation of whole data sets, like in the case of Fujii, a Japanese anesthesiologist who faked data in some 172 papers [wordpress.com]. Universities indeed recruit scientists that publish lots of research. Such incentives push researchers to fake data in order to
    • by fermion (181285)
      Universities need a number of scientists who can build departments and bring in funding. If you went to a major research university, you can thank these researchers for the availability of professors who can expound on a subject in more than a superficial form. They bring in the funds that pay the professors, graduate students, equipment and even buildings. If they are an evil, they are a necessary and often benign evil. They are either the first or last author on a paper. These professors are high profile,
  • by Jane Q. Public (1010737) on Thursday September 13, 2012 @08:38AM (#41322339)
    If even 1/10 of the hype about "breakthroughs" in solar cell efficiency were actually to be combined and made real in the marketplace, we'd all be charging the utility companies now instead of the other way around.
  • Press Releases? (Score:5, Informative)

    by Anonymous Coward on Thursday September 13, 2012 @08:44AM (#41322387)

    To be fair, university press releases are not written by the scientists who did the research, and in my experience the scientist often doesn't even get the chance to proof and correct them. I myself had my 15 minutes of international fame several years ago (the phone literally didn't stop ringing, interview requests from around the world, etc), all on account of a shockingly inaccurate press release from the university about some interesting but not earth-shattering research that I did.

  • I wonder if the abstracts contain spin 0, 1, 1/2, 3/2 etc.? If they don't contain spin, is it a new type of physics?
  • by bluefoxlucid (723572) on Thursday September 13, 2012 @09:05AM (#41322569) Journal

    The question is not "are journalists, as a whole, really that bad" ... the question is...

    IS OUR CHILDREN LEARNING YET?!!

  • by Joe Torres (939784) on Thursday September 13, 2012 @09:13AM (#41322639)
    They define spin as: "“spin” (specific reporting strategies, intentional or unintentional, emphasizing the beneficial effect of the experimental treatment)" They also mention: "We considered “spin” as being a focus on statistically significant results ... an interpretation of statistically nonsignificant results for the primary outcomes as showing treatment equivalence or comparable effectiveness; or any inadequate claim of safety or emphasis of the beneficial effect of the treatment." (emphasis added) I understand the last two, but the first point doesn't make any sense at all. You can't really make conclusions (you can, but scientists will not believe it) about statistically insignificant results. "Spin" can be good in some cases (maybe not at all in clinical research): a research group that studies DNA repair might state, "Our findings on the function of the yeast homolog of SLHDT in dsDNA break recognition may represent a novel target for cancer therapeutics." In this case, the research group doesn't study cancer at all and has no business (from their results) mentioning it, but this might convince a cancer researcher to consider reading the paper and possibly looking into doing a quick/cheap experiment targeting SLHDT and testing this claim.
  • by Hognoxious (631665) on Thursday September 13, 2012 @09:27AM (#41322793) Homepage Journal

    Science news cycle [phdcomics.com]

  • by Anonymous Coward

    I guess scientists are supposed to do all of the work now, are they? They have to do the science, write the papers, market it for their funders, write the articles for the news corporations, AND be the fall-guys when something isn't 100% accurate?

    Whew. I'm glad I didn't remain a researcher.

  • Parable (Score:2, Interesting)

    by Anonymous Coward

    A congressman was touring his district when he came upon a bunch of people in a big field with bows and arrows. They were all firing arrows in all different directions.

    "What are you doing?" asked the Congressman.

    "We are shooting arrows," said the archers.

    "But there is nothing to shoot at," said the Congressman. "Those arrows are provided at taxpayer expense! How dare you waste them in this way?"

    "Well," said the archers, "as you can see, we are very skilled archers. We can shoot arrows so far that they go ov

  • by Burb (620144) on Thursday September 13, 2012 @10:55AM (#41323965)

    Hardly "Scientists themselves", is it?

  • One of the worst "bad abstract tricks" is putting your findings as Odds Ratios. What's an Odds Ratio? You probably know that the "probability" of an event is "Event over Total". The probability of rolling a 6 on a standard die is 1/6. The "odds" of an event is "Event to Not Event". The odds of rolling a 6 are not 1:6, they are 1:5 for (or more often said, 5:1 against). So then the odds ratio (OR) of two groups is the ratio of ratios, or the ratio of the odds for one event compared to the odds of anot

    • by tibit (1762298)

      Wait a minute: how the heck does OR 20:1 imply a probability ratio of 3.5:1? The probability of event A is 1/21 = 0.048, the probability of B is 20/21 = 0.95. The ratio p(A)/p(B) = 0.051 or ~1/20. It seems that the OR approaches the probability ratio as the OR goes away from 1:1. It seems to me that OR is farthest from the probability ratio when OR = 1:1. Or else I'm not getting what you mean by OR 20:1 implying a 3.5:1 probability ratio. Probability ratio of *what*?
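The probability/odds distinction the grandparent comment describes can be checked numerically. This is a hypothetical worked example (the group probabilities below are invented, not figures from any study): with common events, the odds ratio between two groups can be much larger than the ratio of their probabilities (the relative risk), which is one way an abstract can make an effect sound bigger than it is.

```python
def odds(p):
    """Convert a probability into odds, i.e. p(event) / p(not event)."""
    return p / (1.0 - p)

# Invented two-group example: the event occurs with probability 0.5
# in the treatment group and 0.2 in the control group.
p_treatment, p_control = 0.5, 0.2

odds_ratio = odds(p_treatment) / odds(p_control)  # (0.5/0.5) / (0.2/0.8) = 4.0
risk_ratio = p_treatment / p_control              # 0.5 / 0.2 = 2.5

print("odds ratio:", odds_ratio)
print("risk ratio:", risk_ratio)

# The dice example from the comment: P(rolling a 6) = 1/6,
# so the odds are (1/6)/(5/6) = 1/5, i.e. 1:5 for (5:1 against).
print("odds of a six:", odds(1 / 6))
```

For rare events the two measures nearly coincide (1 - p is close to 1, so odds ≈ p), which is why the odds ratio is most misleading to lay readers when the outcome is common.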

  • If all you look at is the abstract and "conclusions", of course you're going to get an unbalanced view of what the study said. Think of all the other information that is contained in the body of the paper. There's a discussion of the methodological limitations of the study, there's a discussion of all the outcome measures which DIDN'T reach statistical significance, there's a discussion of adverse events, and there's usually also a discussion of where this study fits into our knowledge of the topic as a w

  • This seems like a weak attempt to shift the blame for bad reporters. Their job is to get at the facts and report what is really true. That's what reporters do - at least if they're any good. So scientific press releases contain spin? Shocking! Just like press releases in absolutely every other field. Any reporter who just parrots a press release without understanding it and getting at the truth is a bad reporter.

    Yes, science is complicated. Yes, it takes specialized knowledge to understand. Just lik

  • Check the first sentence assholes.
  • than in scientific articles regarding abortion.

    From the Guttmacher Institute study regarding mortality rates between abortion and childbirth;
    http://www.ncbi.nlm.nih.gov/pubmed/22270271 [nih.gov]
    "METHODS:
    We estimated mortality rates associated with live births and legal induced abortions in the United States in 1998-2005. We used data from the Centers for Disease Control and Prevention's Pregnancy Mortality Surveillance System, birth certificates, and Guttmacher Institute surveys. In addition, we searched for populati
