New Scientific Journal To Publish "Discrete Observations Rather Than Complete Stories" (sciencemag.org) 45
sciencehabit writes: Is the pressure to publish tempting scientists to improperly tweak their findings in order to create more cohesive stories? If researchers could report just the one finding they felt comfortable with, perhaps there would be no need to be dishonest. That thinking has spurred the creation of a new scientific journal, Matters. The open-access publication aims to boost integrity and speed the communication of science by allowing researchers to publish discrete observations rather than complete stories. "Observations, not stories, are the pillars of good science," the journal's editors write on Matters' website. "Today's journals, however, favor story-telling over observations, and congruency over complexity. Moreover, incentives associated with publishing in high-impact journals lead to loss of scientifically and ethically sound observations that do not fit the storyline, and in some unfortunate cases also to fraudulence."
Will this affect perception of research? (Score:5, Insightful)
Re: (Score:2)
That's what's currently happening. Researchers generally are allowed a certain degree of speculation in the conclusions of their papers, speculation that goes far beyond what the data actually shows. That's what journalists often "run with" and publish as "peer reviewed fact".
http://www [phdcomics.com]
Re: (Score:2)
Pressure (Score:1)
This would be great if it actually reduced the pressure on scientists somehow. When hiring decisions are based on publications in the pressure-inducing top journals, this isn't going to help anyone who wants to be hired.
Not entirely true... (Score:1)
As they say on their website: "Once a group of authors has accumulated a sufficient body of linked, peer reviewed publications at Matters (reaching a minimal network size), we encourage them to submit a narrative integration of their observations to Mattersconsilience, the third journal of Sciencematters." [1].
Plus, if there is no story behind it... how do we, scientists and the public alike, know whether an observation is important or not? Fact: "I added nutrient X and the expression level of gene Y decreased." Good, but...
Re: (Score:2)
I would think having the results published before the narrative would make it much harder to nudge the results to match the narrative. There are dozens of techniques for doing this when you can publish only the data that supports the narrative - p-hacking (results not significant? add more data until they are!), subgroup slicing ("elderly Hispanic women": drug not showing an effect? chop your demographics up into smaller and smaller pieces until by chance one slice does), and so on.
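To make that concrete, here is a minimal Python sketch (purely illustrative; the group labels and sample sizes are made up) of the subgroup-slicing trick: the data is pure noise with zero real treatment effect, yet testing enough arbitrary demographic slices will usually turn up a few "significant" p-values.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 2000
treated = rng.integers(0, 2, n).astype(bool)   # random treatment assignment
outcome = rng.normal(0.0, 1.0, n)              # outcome independent of treatment: zero real effect
age_band = rng.integers(0, 5, n)               # hypothetical demographic labels, used only for slicing
ethnicity = rng.integers(0, 4, n)
sex = rng.integers(0, 2, n)

hits = 0
for a in range(5):
    for e in range(4):
        for s in range(2):
            mask = (age_band == a) & (ethnicity == e) & (sex == s)
            t, c = outcome[mask & treated], outcome[mask & ~treated]
            if len(t) > 5 and len(c) > 5 and stats.ttest_ind(t, c).pvalue < 0.05:
                hits += 1
print(f"{hits} of 40 subgroups look 'significant' at p < 0.05 despite zero real effect")

Run it with a few different seeds and some slice almost always comes out "significant"; report only that slice and you have a story.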
Exactly. I think this data will be particularly valuable in helping people disprove conclusions drawn from "massaged" datasets -- it'll just be a matter of picking out enough counterexamples to undermine the original study's credibility. I'm inordinately excited about this.
Re: (Score:1)
honesty... (Score:4, Interesting)
Scientist speaking here. One finding is no finding. It's luck or a mistake. If there's just one "finding" you're "comfortable with", it's not publication you should be thinking about, it's changing what you do and how you do it.
"incentives associated with publishing in high-impact journals lead to loss of scientifically and ethically sound observations"
Bullcrap. And "that's all I have to say about that"
"Today's journals [...] favor [...] congruency over complexity"
Uhmm, sorry, what now? Why would one exclude the other? On the other hand, would they want journals that prefer complexity over congruency? Now, that would be a doozy.
"There are few, if any, places to publish one-off experiments that arenâ(TM)t part of a bigger story but might still be informative. So unless the researcher âoeinvests in a series of additional experiments to package the failed reproduction, that result will languish in laboratory notebooks,â"
Well, I don't think I could be convinced that we should value un-reproducible one-off experimental "results". Ever. However, there's nothing stopping you people from publishing such "results", you know; there's the Internet and whatnot.
"a researcher who is able to show, with proper controls and statistics, that an extract from eucalyptus bark relieves pain under certain conditions. âoeIn todayâ(TM)s world, you canâ(TM)t publish that in a good journal,â Rajendran says. âoeYou would need to know which molecule it is"
Hell, good that it is so. There are still some people out there who actually like to know what the hell it is they put into their bodies and how it works (and that it actually works).
Re: (Score:2, Insightful)
Whether it's one finding supporting your theory or one hundred findings really makes no statistical difference when your approach is to keep doing experiments until you get the results you want. And, sadly, that is what academics generally do: they vary experimental conditions, parameters, s
Re: (Score:2)
Thankfully there are academics and then there are academics, and I'd like to believe not all of them "generally do" that - but I'm not denying this can be a field-dependent (e.g. medicine) way of doing things. What I mean is that if you are looking for a specific outcome (let's say curing lung cancer), then I'm not really against trying-until-succeeding
Re: (Score:2)
You haven't thought this through. Assume people think that some method cures some disease. Over time, there are 500 studies testing the efficacy of that method in curing the disease. Even if the method is completely ineffective, then at the 5% significance level about 25 of those studies will, just by chance, show statistically significant effects;
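A quick, purely illustrative Python sketch of that arithmetic: 500 simulated studies of a treatment with no effect at all, each analysed with a plain t-test at the 5% level.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
false_positives = 0
for _ in range(500):
    control = rng.normal(0.0, 1.0, 50)
    treated = rng.normal(0.0, 1.0, 50)  # same distribution as control: the "method" does nothing
    if stats.ttest_ind(treated, control).pvalue < 0.05:
        false_positives += 1
print(f"'Significant' results among 500 null studies: {false_positives} (expected around 500 * 0.05 = 25)")

And those two dozen or so "positive" studies are exactly the ones most likely to get written up.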
Re: (Score:2)
I have no idea how "the pressures of capitalism" entered your arguments. Capitalists are probably the most interested in accurate scientific results because inaccurate results cos
And what's the impact factor? (Score:5, Insightful)
Re:And what's the impact factor? (Score:4, Insightful)
Impact factor is determined from the number of citations to a journal's articles, so a journal that hasn't published anything has no definable impact factor. It's not zero or low, it's undefined and referencing it is meaningless.
I would expect that a journal that let people publish observations without requiring an accompanying narrative could acquire a decent number of citations, even if the overall impact factor is low. Demonstrating impact through the proxy of a journal's impact factor is just lazy accounting by management types. It's easy enough to count actual citations to a publication to determine an author's impact (even if it can be gamed). Most of the papers that I actually read and cite are not in the ultra-high-impact journals. Most of a fantastically high impact researcher's publications are going to be in a field's bread and butter journal.
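For reference, the standard two-year impact factor is just a ratio: citations received in year Y to a journal's articles from years Y-1 and Y-2, divided by the number of citable items it published in those two years. A trivial Python sketch (made-up numbers) of why it is undefined, rather than zero, for a journal with no back catalogue:

from typing import Optional

def impact_factor(citations_to_prev_two_years: int, citable_items_prev_two_years: int) -> Optional[float]:
    # A brand-new journal has published nothing in the previous two years,
    # so the ratio is undefined (None here), not zero.
    if citable_items_prev_two_years == 0:
        return None
    return citations_to_prev_two_years / citable_items_prev_two_years

print(impact_factor(300, 120))  # established journal, made-up numbers: 2.5
print(impact_factor(0, 0))      # journal with no prior articles: None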
Since when is a scientific paper journalism? (Score:2)
Ideally a paper has enough information that you can recreate, apply, and/or expand upon the work. Saying "here are some observations, have at it with the curve-fitting tools" would be just idiocy. I don't even know how you would publish this way for any field that generates large amounts of data.
Then you have fluke events, such as the apparent one-time observation of a magnetic monopole. It's meaningless without context.
Re: (Score:2)
Re: (Score:2)
I can appreciate that. It would seem that the appropriate place for it would be the same journal the paper was published in. If you think about the mechanics, it makes a good argument for open/public journals rather than paywalled ones.
Re: (Score:2)
The problem starts in school (Score:3)
IMHO, the problem starts in school. As an example: you do a chemistry experiment and get some weird results that aren't the ones you should have been getting. Now you have two options: either write up and conclude from what you actually observed, or bullshit and write up what was expected, as if it had worked. The first risks getting you low marks, the second gets you top marks. What do you think most people under pressure to perform would do?
The way I would like to see things done: you write things up as you observed them, but add, in the conclusion, an analysis of why you think your results varied from the expected ones. For example, did you put in too much of substance A or substance B, and why would that have impacted things? It may put extra work on the teachers, but if we want students who can think rather than cover up their tracks, then this may be worth it. A healthy workplace depends on this.
Re: (Score:3)
Short Attention Span (Score:2)
And this is what happens when people raised without the ability to concentrate for more than a few minutes at a time come into positions of power and authority.
I read the article. Kudos to the editors for trying to further speed up the process of publication (and for promising to pay editors and peer reviewers), but the basic premise is flawed. The only benefit to a publication like Matters will be to increase the publication count of its authors. Individual observations, without the scholarly research to prov
Re: (Score:2)
Thanks for posting this, Anon. Did not know about this little thought experiment. Googled it and was not disappointed.
Re: (Score:1)
For those who avoid Googling, this thought experiment [wikipedia.org] is by Galileo Galilei [wikipedia.org].
Something not entirely different: in Otto Frisch [wikipedia.org]'s delightful memoir "What Little I Remember" [cambridge.org] he relates a story about Niels Bohr [wikipedia.org] and him, which can also be read here [aip.org] (search on the page for "thought experiments" or - even better - just read the whole transcript); in "The Making of the Atomic Bomb" [wikipedia.org] by Richard Rhodes this is told as follows:
He [Bohr] was traveling through Germany to determine who needed help. [This was in the 1
Incoherent observations are not science (Score:2)
Science Twitter (Score:1)
There is no "view from nowhere" (Score:4, Interesting)
The main motivation for this proposal lies in the prominent failures and retractions in medical and psychological research. As a recent meta-study showed, most psychological studies are not reproducible (probably because their pool of subjects consisted of university students, a very weird bunch of people ;-). Also, many drug studies are influenced by pharmaceutical-industry funding.
But the article's proposal won't work. It assumes, at some level, that there are fundamental facts, and that it's possible to discover these facts without a theory. That's why they are proposing publishing discrete observations, without any "story" that the observations fit into. But philosophers have thought about this already. Kant's theory of categories explains that you can't perceive facts "raw"; you always see the world through some mental model you carry with you, whether you know it or not. So you always have a model of the world, which colours your perceptions.
I would argue, further, that thinking itself is impossible without a model. You need a structure to hang your ideas on. You can't stand fully outside your own biases and mental preconceptions and see things as they "really are". Your model may change over time, or someone else's model may become accepted as better, and the observations will then fit into a different "story". That's what a scientific revolution is: a change of model to explain the same phenomena.
Facts need to be published within the context of a "story". There's no way around this. At most, we can try to be aware of the story we are caught inside of.
Not so new (Score:1)
- An Account of the Improvement of Optick Glasses at Rome
- Observations
- Motions of the late
"If only we could make science more boring..." (Score:2)