Major Scientific Journal Publisher Requires Public Access To Data
An anonymous reader writes "PLOS — the Public Library of Science — is one of the most prolific publishers of research papers in the world. 'Open access' is one of their mantras, and they've been working to push the academic publishing system into a state where research isn't locked behind paywalls and subscription services. To that end, they've announced a new policy for all of their journals: 'authors must make all data publicly available, without restriction, immediately upon publication of the article.' The data must be available within the article itself, in the supplementary information, or within a stable, public repository. This is good news for replicating experiments, building on past results, and science in general."
Does it really say ALL data? (Score:5, Insightful)
And not just the data that was cherry-picked to support the hypothesis?
Bad news for ecologists--new license needed (Score:5, Insightful)
This is bad news for ecologists and others with long-term data sets. Some of these data sets require decades of time and millions of dollars to produce, and the primary investigators want to use the data they've generated for multiple projects. Current data licensing for PLOS ONE (and--as far as I know--all other journals that insist on complete data archiving) means that when you publish your data set, it is out there for anyone to use, for free, for any purpose they wish, not just for verification of the paper in question. There are plenty of scientists out there who poach free online data sets and mine them for additional findings.
Requiring full accessibility of data makes many people reluctant to publish in such a journal, because it means giving away the data they were planning to use for future publications. A scientist's publication list is linked not only to their job opportunities and pay grade, but also to the funding they can get for future grants. And of course those grants are linked to continuing the funding of the long-term project that produced the data in the first place.
What is needed is a new licensing model for published data that says "anyone is free to use these data to replicate the results of the current study; however, they CANNOT be used as a basis for new analyses without written consent of the primary investigator of this paper, or until [XX] years after publication." Journals would also need to agree not to accept any publication based on data that was used without consent.
It seems to me that this arrangement would satisfy the need to get data out into the public domain while respecting the scientists who produced it in the first place.
Re:Bad news for ecologists--new license needed (Score:5, Insightful)
On the other hand, if I don't have your data I can't check your results. If you want to keep your data secret for a decade, you really should plan to not publish anything relying on it for that time either. Release all the papers when you release the data.
Also, who gets to decide when a study is a replication and when it is a new result? Few replication attempts do exactly the same thing as the original paper, for good reason: if you want to see whether a result holds up, you want to vary the analysis anyway. And what counts as "using" the data? If another group produces their own data and compares it with yours, is that "using" the data? What if they only compare against your published results? Is that using it?
A partial solution, I think, is for a group such as yours to plan the data use while still collecting it. Decide from the start to publish one subset of the data early, along with the papers based on it; then publish another subset with further results, and so on.
But what we really need is for data to be fully citable: a way to publish the data as a research result in itself, perhaps as the data together with a paper describing it (but not analyzing it). Anyone would be free to use the data for their own research, but would of course cite you when they do. A good, serious data set can probably rack up more citations than just about any paper out there, and that would give its producers the scientific credit they deserve.
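Some repositories already point in this direction: Dryad, figshare, and Zenodo mint a DOI for a deposited data set so that it can be cited like a paper. As a rough illustration only (the field names loosely follow the DataCite metadata schema, and every value here is invented), the record behind such a citation might look like this in Python:

# A hypothetical DataCite-style metadata record for a citable data set.
# All values below are invented for illustration.
dataset_record = {
    "identifier": {"identifierType": "DOI", "identifier": "10.5281/zenodo.0000000"},
    "creators": [{"creatorName": "Doe, Jane"}],
    "titles": [{"title": "Twenty-year stream-invertebrate survey, raw counts"}],
    "publisher": "Example Data Repository",
    "publicationYear": "2014",
    "resourceType": {"resourceTypeGeneral": "Dataset"},
}
print(dataset_record["identifier"]["identifier"])

A record like this is what makes the data set a first-class, countable entry in someone's citation record rather than an anonymous supplementary file.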
Re:Practicalities (Score:3, Insightful)
Uploading and hosting such data in the first place to meet this requirement would be an extremely difficult and costly endeavor.
Perhaps the compromise is to include a clause requiring the author to permit others to obtain a copy of and/or access the data, but only if the receiver pays the cost of the transfer or access. This is similar to state open-records laws, where you must pay for things like the cost of copying documents. In the case above, satisfying the "must permit access" clause might be as simple as permitting the researcher to come to the facility, access the data from a terminal, and browse or do whatever it is they do to explore and analyze the data that results from these experiments, so no costly copying of data is required.
If that isn't agreeable or feasible for the author or institution, then perhaps such research would simply be more appropriately published in a different journal that isn't as focused on openness and verifiability.
Re:Practicalities (Score:2, Insightful)
I think there's an arguable line to draw between making the entire body of data available and releasing, say, the statistical sample that a typical paper is based on, or the specific data about a newly discovered phenomenon.
Exactly where that line is, I don't claim to know. But it behooves us to be reasonable, and not draw UNreasonable fixed lines in the sand.
My personal opinion is: petabytes or not, if the research is publicly funded then the data belongs to the public, and must be made available in some fashion. That's a somewhat different subject than publishing a paper, but it's a related idea.
Re:Practicalities (Score:4, Insightful)
The point seems to be missed by a lot of people. RAW DATA IS USELESS. You can make available a thousand traces of voltage vs. time on your detector pins, but that is of no value whatsoever to anyone. The interpretation of these depends on the exact parameters describing the experimental equipment and procedure. How much information would someone require to replicate CERN from scratch?
Some (maybe most, but not all) published research results can be thought of as a layering of interpretations. Detector output is converted to light intensity, the intensity is converted to frequency spectra, the integrated amplitudes of the peaks are calculated and fit to a model, and the fitted parameters give you the result that the amplitude of a certain emission scales with temperature squared. Which of these layers is of any value to anyone? Should the sequence of 2-byte values that comes out of the digitizer be made public?
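To make the layering concrete, here is a minimal Python sketch of such a pipeline. It is an illustration only: the ADC range, the peak location, and the temperature and amplitude values are all invented, not taken from any real experiment.

import numpy as np

# Layer 0: raw digitizer output -- a sequence of 2-byte samples (simulated here).
rng = np.random.default_rng(0)
raw_counts = rng.integers(0, 2**16, size=4096, dtype=np.uint16)

# Layer 1: counts -> voltage. This step already bakes in instrument-specific
# calibration (an assumed 16-bit ADC spanning 0-5 V).
voltage = raw_counts.astype(float) * 5.0 / 2**16

# Layer 2: voltage trace -> frequency spectrum.
spectrum = np.abs(np.fft.rfft(voltage))

# Layer 3: integrated amplitude of one peak (bins 100-110, chosen arbitrarily).
peak_amplitude = spectrum[100:110].sum()

# Layer 4: fit peak amplitudes from several runs against temperature to test
# an "amplitude scales with T squared" model (all numbers invented).
temperatures = np.array([100.0, 200.0, 300.0, 400.0])
amplitudes = np.array([1.1, 4.4, 9.8, 17.5])
slope, intercept = np.polyfit(temperatures**2, amplitudes, 1)
print("fitted coefficient for the T**2 term:", slope)

Publishing only the fitted coefficient answers almost no questions; publishing only the raw counts answers none without the calibration constants in between.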
It is not possible to make a general statement about which layer of interpretation is the right one to be made public. Higher levels, closer to the final results, are more likely to be reusable by other researchers. However, higher levels of interpretation provide the least information for someone attempting to confirm that the total analysis is valid.