New Scientific Journal To Publish "Discrete Observations Rather Than Complete Stories" (sciencemag.org)

sciencehabit writes: Is the pressure to publish tempting scientists to improperly tweak their findings in order to create more cohesive stories? If researchers could report just the one finding they felt comfortable with, perhaps there would be no need to be dishonest. That thinking has spurred the creation of a new scientific journal, Matters. The open-access publication aims to boost integrity and speed the communication of science by allowing researchers to publish discrete observations rather than complete stories. "Observations, not stories, are the pillars of good science," the journal's editors write on Matters' website. "Today's journals, however, favor story-telling over observations, and congruency over complexity. Moreover, incentives associated with publishing in high-impact journals lead to loss of scientifically and ethically sound observations that do not fit the storyline, and in some unfortunate cases also to fraudulence."
  • by Transist ( 997529 ) on Thursday December 03, 2015 @06:46AM (#51048027)
    I think a concerning matter is that journalists (not necessarily science journals) also destroy the credibility of science by taking these observations ("according to a recent study...") and running with the "results" as news. A recent one that comes to mind: researchers noticed that the diabetes medication Metformin seemed to have effects on life expectancy. Of course, news outlets are currently running with the story that we might have found the miracle anti-aging pill; you can turn up a bunch of articles by googling the drug. It's usually later found that the claims were hugely inflated by the media, and further research really goes nowhere. I suspect that the fatigue of constantly hearing this kind of false-hope, misleading reporting hurts the image of legitimate scientific research, and I wonder whether this journal will have an effect on that. Researchers may be complicit in providing journalists with the stories they love to run with; keeping that kind of speculation to a minimum might help.
    • I think a concerning matter is that journalists (not science journals necessarily) also destroy the credibility of science by taking these observations ("according to a recent study...") and running with the "results" as news.

      That's what's currently happening. Researchers are generally allowed a certain degree of speculation in the conclusions of their papers, speculation that goes far beyond what the data actually shows. That's what journalists often "run with" and publish as "peer-reviewed fact".

      http://www [phdcomics.com]

    • The journals and authors cannot be absolved of blame in this. When I was at university, I was taught that if I wanted to speculate, I should include a section saying what I would do if I had more time, what questions my results/conclusions raised, and what hypotheses I would look to prove. But when I look at many published papers, that best practice from my undergrad days is nowhere to be seen. Speculations are presented as conclusions. They are presented without clear discussion of how the hypothesis could
  • by Anonymous Coward

    This would be great if it actually reduced the pressure on scientists somehow. When hiring decisions are based on publications in the pressure-inducing top journals, this isn't going to help anyone who wants to be hired.

  • by Anonymous Coward

    As they say on their website: "Once a group of authors has accumulated a sufficient body of linked, peer reviewed publications at Matters (reaching a minimal network size), we encourage them to submit a narrative integration of their observations to Mattersconsilience, the third journal of Sciencematters." [1]

    Plus, if there is no story behind it... how do we, scientists and people, know whether an observation is important or not? Fact: "I added nutrient X; the expression level of gene Y decreased." Good, but...

  • honesty... (Score:4, Interesting)

    by l3v1 ( 787564 ) on Thursday December 03, 2015 @08:51AM (#51048387)
    "If researchers could report just the one finding they felt comfortable with, perhaps there would be no need to be dishonest."

    Scientist speaking here. One finding is no finding. It's luck or a mistake. If there's just one "finding" you're "comfortable with", it's not publication you should be thinking about, it's changing what you do and how you do it.

    "incentives associated with publishing in high-impact journals lead to loss of scientifically and ethically sound observations"

    Bullcrap. And "that's all I have to say about that"

    "Today's journals [...] favor [...] congruency over complexity"

    Uhmm, sorry, what now? Why would one exclude the other? On the other hand, would they want journals that prefer complexity over congruency? Now, that would be a doozy.

    "There are few, if any, places to publish one-off experiments that arenâ(TM)t part of a bigger story but might still be informative. So unless the researcher âoeinvests in a series of additional experiments to package the failed reproduction, that result will languish in laboratory notebooks,â"

    Well, I don't think I could be convinced we should value unreproducible one-off experimental "results". Ever. However, there's nothing stopping you people from publishing such "results", you know; there's the Internet and whatnot.

    "a researcher who is able to show, with proper controls and statistics, that an extract from eucalyptus bark relieves pain under certain conditions. âoeIn todayâ(TM)s world, you canâ(TM)t publish that in a good journal,â Rajendran says. âoeYou would need to know which molecule it is"

    Hell, good that it is so. There are still some people out there who actually like to know what the hell it is they put into their bodies and how it works (and that it actually works).
    • Re: (Score:2, Insightful)

      Scientist speaking here. One finding is no finding. It's luck or a mistake. If there's just one "finding" you're "comfortable with", it's not publication you should be thinking about, it's changing what you do and how you do it.

      Whether it's one finding supporting your theory or one hundred findings really makes no statistical difference when your approach is to keep doing experiments until you get the results you want. And, sadly, that is what academics generally do: they vary experimental conditions, parameters, s

      • by l3v1 ( 787564 )
        "when your approach is to keep doing experiments until you get the results you want. And, sadly, that is what academics generally do"

        Thankfully there are academics and then there are academics, and I try to believe that not all of them "generally do" that - but I'm not denying this can be a field-dependent way (e.g. medicine) of doing things. What I mean is that if you are looking for a specific outcome (let's say curing lung cancer), then I'm not really against trying-until-succeeding :) even if it's not 100% reproducible :)
        • What I mean is that if you are looking for a specific outcome (let's say curing lung cancer), then I'm not really against trying-until-succeeding :) even if it's not 100% reproducible :)

          You haven't thought this through. Assume people think that some method cures some disease. Over time, there are 500 studies testing for efficacy of that method in curing the disease. Even if the method is completely ineffective, at the 5% significance level, they will get 25 studies showing statistically significant effects;
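The arithmetic in the comment above is easy to check with a quick simulation: under the null hypothesis, each study's p-value is uniform on [0, 1], so about 5% of studies clear the 0.05 threshold by pure chance. This sketch is illustrative only (no real study data, and the seed is arbitrary):

```python
import random

random.seed(1)
ALPHA = 0.05        # significance threshold
N_STUDIES = 500     # independent tests of a completely ineffective method

# Under the null hypothesis each study's p-value is uniform on [0, 1],
# so a fraction ALPHA of the studies come out "significant" by chance.
false_positives = sum(random.random() < ALPHA for _ in range(N_STUDIES))

print(f"expected by chance: {N_STUDIES * ALPHA:.0f}")
print(f"simulated:          {false_positives}")
```

With 500 studies you expect about 25 spurious "positive" results, exactly as the comment says; a literature that only reports the positives would look like solid evidence.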

  • by TheRaven64 ( 641858 ) on Thursday December 03, 2015 @09:04AM (#51048451) Journal
    (Speaking from the perspective of a UK academic, may vary between countries) There is no pressure to publish, as an abstraction. There is pressure to demonstrate impact. The easiest way to demonstrate impact is to publish in top-tier publications. Publishing in a new journal or conference is always a big gamble - if the journal does well later then you may retroactively benefit from a later assessment of its impact, but typically it's in the noise of all of the spammy journals.
    • by chihowa ( 366380 ) on Thursday December 03, 2015 @11:45AM (#51049697)

      Impact factor is determined from the number of citations to a journal's articles, so a journal that hasn't published anything has no definable impact factor. It's not zero or low, it's undefined and referencing it is meaningless.

      I would expect that a journal that let people publish observations without requiring an accompanying narrative could acquire a decent number of citations, even if the overall impact factor is low. Demonstrating impact through the proxy of a journal's impact factor is just lazy accounting by management types. It's easy enough to count actual citations to a publication to determine an author's impact (even if it can be gamed). Most of the papers that I actually read and cite are not in the ultra-high-impact journals. Most of a fantastically high impact researcher's publications are going to be in a field's bread and butter journal.
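As a rough sketch of the arithmetic behind the comment above: a journal's impact factor is, approximately, citations in a given year to articles from the previous two years, divided by the number of those articles. The function and numbers below are made up for illustration; the point is the zero denominator for a new journal:

```python
def impact_factor(citations, articles):
    """Citations this year to articles from the previous two years,
    divided by the number of those articles (roughly the standard
    definition). Returns None when the denominator is zero."""
    if articles == 0:
        return None  # a brand-new journal: undefined, not zero or low
    return citations / articles

print(impact_factor(150, 60))  # established journal
print(impact_factor(0, 0))     # journal that has published nothing yet
```

The second call returning None rather than 0 is the comment's point: for a journal with no published articles the metric is undefined, and quoting it is meaningless.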

  • Ideally a paper has enough information that you can recreate, apply, and/or expand upon the work. Saying "here are some observations, have at it with the curve-fitting tools" would be just idiocy. I don't even know how you would publish for any field that generates large amounts of data.

    Then you have fluke events such as the apparent one time observation of a magnetic monopole. It's meaningless without context.

    • As I've commented further up, I think the real value here is counterexamples: if you have a paper that's built on selective evidence, you don't need to prove an alternative theory, but rather just find sufficient counterexamples to demonstrate that the paper is unreliable.
      • I can appreciate that. It would seem that the appropriate place for it would be the same journal where the paper was published. If you think about the mechanics, it makes a good argument for open/public journals rather than paywalled journals.

        • I think the next step after this would be an open index where people can just catalogue papers and individual observations and how they support/refute one another. And goodbye selective citation.
  • by Midnight Thunder ( 17205 ) on Thursday December 03, 2015 @09:40AM (#51048663) Homepage Journal

    IMHO, the problem starts in school. As an example: you do a chemistry experiment and get some weird results, not the ones you should have been getting. Now you have two options: write up and conclude what you actually observed, or bullshit and write up what was expected, as if it had worked. The first risks getting you low marks; the second, top marks. What do you think most people under pressure to perform would do?

    The way I would like to see things done: you write things up as you observed them, but add to the conclusion an analysis of why you think your results varied from the expected ones. For example, did you put in too much of substance A or substance B, and why would that have impacted things? It may put extra work on the teachers, but if we want students who can think rather than cover their tracks, then it may be worth it. A healthy workplace depends on this.

    • Canadian here. I thought that's how it's always done; at least that's my experience in Physics. Absolutely NO emphasis was on getting the right answer (except for your calculations; even wrong data can be calculated to arrive at some outrageous number). Instead, marks are on proper note keeping, data COLLECTION and calculation, and of course the conclusion and what-not. That's where you talk about what may have gone wrong, and you do not even talk about human error, that can always happen (and the experiment sta
  • And this is what happens when people raised without the ability to concentrate for more than a few minutes at a time come into positions of power and authority.

    I read the article. Kudos to the editors for trying to further speed the process of publication (and for promising to pay editors and peer reviewers), but the basic premise is flawed. The only benefit of a publication like Matters will be to increase the publication count of its authors. Individual observations, without the scholarly research to prov

  • This is crazy. If you can't make sense of your observations and connect them to our understanding, then they are unlikely to be useful. Existing journals will publish observations that are not explained if they are accompanied by a careful explanation of what is and is not understood about the problem. We definitely do not need more publication of observations disconnected from understanding.
  • Great, so it's the headline --> comment section 2ms attention span for science. I'm sure this won't lead to isolated observations being turned into headlines and massively misconstrued as ammunition for people's agendas. I look forward to when this journal replaces Wikipedia as the most laughable thing one could possibly supply as a source to back up their lunatic ravings on /.
  • by AlejoHausner ( 1047558 ) on Thursday December 03, 2015 @01:41PM (#51050847) Homepage

    The main motivation for this proposal lies in prominent failures and retractions in medical and psychological research. As a recent meta-study showed, most psychological studies are not reproducible (probably because their pools of subjects consist of university students, a very weird bunch of people ;-) ). Also, many drug studies are influenced by pharmaceutical-industry funding.

    But the article's proposal won't work. It assumes, at some level, that there are fundamental facts, and that it's possible to discover these facts without a theory. That's why they are proposing publishing discrete observations, without any "story" that the observations fit into. But philosophers have thought about this already. Kant's theory of categories explains that you can't perceive facts "raw", but always see the world through some mental model you carry with you, whether you know it or not. So you always have a model of the world, which colours your perceptions.

    I would argue, further, that thinking itself is impossible without a model. You need a structure to hang your ideas onto. You can't stand fully outside your own biases and mental preconceptions and see things as they "really are". Your model may change over time, or someone else's model may become accepted as better, and observations will then fit into a different "story". That's what a scientific revolution is: a change of model to explain the same phenomena.

    Facts need to be published within the context of a "story". There's no way around this. At most, we can try to be aware of the story we are caught inside of.

  • While this claims to be a new approach, a glance at the early Philosophical Transactions of the Royal Society indicates that the idea is very old. The modern journal's move to "cohesive stories" was in many ways a reaction to the initial idea of listing observations and discoveries. Hence, the table of contents of the first issue (March 6, 1665) includes: [jstor.org]

    - An Account of the Improvement of Optick Glasses at Rome
    - Observations ... of a Spot on one of the Belts of the Planet Jupiter
    - Motions of the late
  • "2015/12/03 20:57:89.523 - Test rat 1591 consumed a pellet from dispenser A."
