The Real Reason Journal Articles Should Be Free

Bennett Haselton writes "The U.S. government recently announced that academic papers on federally-funded research should become freely available online within one year of publication in a journal. But the real question is why academics don't simply publish most papers freely anyway. If the problem is that traditional journals have a monopoly on the kind of prestige that can only be conferred by having your paper appear in their hallowed pages, that monopoly can easily be broken, because there's no reason why open-access journals can't confer the same imprimatur of quality." Read on for the rest of Bennett's thoughts on the great free-access debate.

Around the time of the tragic suicide of Aaron Swartz, who lobbied tirelessly for free access to academic articles (in his sometimes grey-hat manner, which ultimately got him in trouble), I admitted to some friends that I didn't understand how this became a problem. Why aren't all journal articles free, all the time?

I don't mean that I didn't know why the journal publishers charged exorbitant fees for their subscriptions. If academic researchers have to have access to journal articles in order to do their jobs, then you can expect the journals to gouge academic libraries on the prices. What I didn't understand was: Why do academics even publish in journals that demand exclusive publishing rights for their work, and then charge readers huge fees to read it?

Well actually, we know the answer to that too: academics want the prestige of publishing in big-name journals that have established reputations, and as a result, those well-known journals are in a position to dictate the terms of the contract. A professor might genuinely want to publish their paper in a journal where it can be read for free by all, but they can hardly be blamed for thinking of their own career path first.

Here's the question I really wanted answered: If "prestige" only exists in the minds of other academics within a field, then why don't the academics within a given field just agree to confer "prestige" on papers published in open-access journals, if they can see for themselves that the quality is equivalent to what would be published in the old-guard journals that charge an arm and a leg? And then make hiring, promotion, and tenure decisions accordingly?

I don't mean that the papers published in an open-access journal would bypass the peer-review process, and that everyone in the field would have to judge the papers for themselves without any prior certification of their quality. One of the points that Peter Suber makes repeatedly in his book Open Access is that open access is not about skipping peer review and dumping papers directly onto the web. Rather, the process would work similarly to peer review for a traditional journal (a rough code sketch of this workflow follows the list):

  1. Author submits a paper to journal XYZ.

  2. Journal XYZ selects one or more peer reviewers from among their list of people they consider qualified to review the paper. The peer reviewers send back their usual suggestions and some consensus is reached as to whether or not to publish.

  3. If Journal XYZ publishes the paper, then they have certified that the paper passed the quality controls in step #2, and the author can now legitimately claim that they had a paper published in Journal XYZ.

  4. If people in the field know that Journal XYZ is not skimping on the quality controls in step #2 — that Journal XYZ is sending the papers to the same academics who would do peer review for one of the old-guard journals, and who are holding the papers to the same standard — then they should respect the paper just as much as if it were published in a traditional journal. If a person has never heard of Journal XYZ, then it should only take a minute to explain to them how it works (and crucially, that Journal XYZ is just as strict about quality as the old-guard journals that everybody has heard of).
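
For concreteness, here's a minimal sketch of that workflow in Python. Everything in it (the journal name, the reviewer pool, the majority-vote acceptance rule) is a hypothetical illustration of the four steps above, not a description of any real journal's submission system.

    # Hypothetical sketch of steps #1-#4 above; not any real journal's software.
    import random

    class Journal:
        def __init__(self, name, qualified_reviewers, reviews_per_paper=3):
            self.name = name
            # Step #2: the list of people the journal considers qualified to review.
            self.qualified_reviewers = qualified_reviewers
            self.reviews_per_paper = reviews_per_paper
            self.published = []

        def submit(self, paper, author):
            # Step #1: the author submits; step #2: the journal picks reviewers.
            reviewers = random.sample(self.qualified_reviewers, self.reviews_per_paper)
            verdicts = [review(paper) for review in reviewers]
            # Step #3: publish only if the reviewers reach a consensus to accept.
            if verdicts.count("accept") > len(verdicts) / 2:
                self.published.append((author, paper))
                return f"Published: {author} now has a paper in {self.name}."
            return "Rejected: revise and resubmit, or try another journal."

    # Illustrative use: reviewers here are just functions from paper to verdict.
    lenient_reviewer = lambda paper: "accept" if len(paper) > 1000 else "reject"
    xyz = Journal("Journal XYZ", [lenient_reviewer] * 10)
    print(xyz.submit("lorem ipsum " * 200, "Dr. Example"))

The point of step #4 is that nothing in this loop depends on who owns the journal: if the reviewer pool and the acceptance standard match an old-guard journal's, the certification should be worth just as much.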

Each step in this process should cost the journal virtually nothing. The "hard cost," the part that consumes the time of people with unique skills, is the peer review step, but peer reviewers are usually paid by universities and consider peer review for academic journals to be part of their job description. At a minimum, all the editors really have to do is maintain the list of people they consider qualified to do the peer review, and send the submitted papers off to them.

Moreover, the entire process should be fast. Again, the "hard cost" in time is the peer review, but there's no reason that the delays between submission and publication should be in the range of months or years.

(I'm assuming that the article authors would want their writings to be widely read, or at least would not be opposed to it. That may not be the case if, for example, the authors were commissioned by a pharmaceutical company for a study that cast their drug in a favorable light, but the authors realize that their research methods contained errors and want to minimize the number of eyes on their paper, to reduce the chances of their chicanery being caught. Ben Goldacre's Bad Pharma documents these types of problems very thoroughly, but I'm sidestepping that issue for now.)

So, with that in mind as the ideal, I asked my friends, including many current and former academics, why this essentially wasn't the model being used. Several mentioned the Public Library of Science, which publishes all articles in its journals under a Creative Commons Attribution License (free for anyone to read and reproduce in full, as long as the original author is cited), and finances its operations through publication fees. These fees are in the $2,000-$3,000 range, heavily discounted for low-income countries and authors, and in any case most academic authors pay the fees out of their research grants and not out of their own pockets. That sounded much better than the traditional model, I thought, but I still didn't understand why the costs weren't even closer to zero. Another friend pointed out that PLOS's fees also cover the expenses of many of its other activities — which are all noble goals, to be sure, but at the same time, why isn't anybody operating a more bare-bones model that minimizes all expenses and charges almost nothing for publication or subscription?

This, it turns out, appears to be the approach of the PeerJ project, which aims to let authors pay a one-time fee of $99 at article submission time for the right to publish one article per year — or, if you prefer to pay only if your article is accepted for publication, you can pay $129 "on acceptance" (explained here). And the author of a Techdirt piece about PeerJ mentions that he submitted a paper which was published in the inaugural edition of one of PeerJ's journals, 10 weeks after the submission date. This is cheap and fast enough that I'd call it a validation of the theoretical model which predicts that the whole process can be done for almost no cost in almost no time. In other words, I think PeerJ will succeed, but even if it does fail, it will only be because of some anomalous business snafu, not because the hard costs of the service they're providing are greater than the dirt-cheap price they're charging for it. If for any reason PeerJ doesn't happen to get it right the first time, they or some other company should keep trying until someone makes it work.
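
(As a side note, the choice between those two fee plans is simple expected-value arithmetic. The sketch below assumes only the prices quoted above; the acceptance probabilities are made up purely for illustration.)

    # Back-of-the-envelope comparison of the two fee plans quoted above.
    # The acceptance probabilities are invented purely for illustration.
    def expected_cost(p_accept, fee_at_submission=99, fee_on_acceptance=129):
        """Expected fee per submitted article under each plan."""
        return {"pay at submission": fee_at_submission,
                "pay on acceptance": round(p_accept * fee_on_acceptance, 2)}

    for p in (0.50, 99 / 129, 0.90):
        print(f"P(accept) = {p:.2f}: {expected_cost(p)}")
    # The plans break even at P(accept) = 99/129, i.e. about 77%; below that,
    # paying only on acceptance is cheaper in expectation.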

The basic algorithm at work here — taking a piece of content, submitting it to one or more suitably qualified reviewers, and then certifying the content based on the feedback of the reviewers — is something I've advocated in many contexts over the years, for many different types of problems. In one article I argued that we could make success in the music industry into much more of a meritocracy, with far less arbitrariness in determining who succeeds and fails, if a suitably popular site like Pandora simply took new submissions from artists, had the content "rated" by a random sample of listeners interested in that type of music, and, if enough of them liked it, pushed the content out to all of the fans of that genre. In "Crowdsourcing the Censors" I suggested that Facebook's complaint review process should use the same principle: if a given page received enough complaints, have the page contents reviewed by a random subset of Facebook users who had signed up to be "abusive content" reviewers, and then only flag the page for removal if a high enough percentage of those users voted that the page had indeed violated Facebook's guidelines. This year I argued that "We The People", the White House's online petition-drive-organizing website, should rate ideas based on what a random subset of users think of each idea, rather than allowing users to organize mobs of their friends and followers to vote their own ideas to the top of the pile (which, in case you missed it, is how 4chan gave us this). Or, if you think the general public is not qualified to rate ideas according to how they should be prioritized by the White House (and I'd be inclined to agree), you could have the ideas rated by a random subset of, say, the nation's economics professors.
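
Stated as code, the basic algorithm common to all of these examples is short. In this sketch the rater pool, the sample size, and the approval threshold are hypothetical parameters that each platform (a journal, Pandora, Facebook, or "We The People") would tune for itself.

    # Generic random-sample certification: show an item to a random subset of
    # qualified raters and certify it only if enough of them approve.
    import random

    def certify(item, qualified_raters, sample_size=20, approval_threshold=0.7):
        """Return True if a random sample of raters approves the item."""
        sample = random.sample(qualified_raters, min(sample_size, len(qualified_raters)))
        approvals = sum(1 for rate in sample if rate(item))
        return approvals / len(sample) >= approval_threshold

    # Example: 100 hypothetical raters who approve items shorter than 280 characters.
    raters = [lambda item: len(item) < 280 for _ in range(100)]
    print(certify("a short, inoffensive post", raters))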

Of course, I haven't heard of any plans to implement this algorithm in any of those contexts. Not that I expected the key power players to be reading my articles, but it's a little surprising that none of them ever came up with this idea independently, either. (To this day, the only website I'm aware of that ever implemented random-sample voting correctly was HotOrNot.com, where users could rate members' pictures by attractiveness — and, crucially, each picture's rating was determined by showing it to a random subset of the site's visitors. That system is gone, since the site has made itself over into a date-finding service.)

But academia in general, and science specifically, is different from other arenas in a number of key ways which could help this algorithm succeed:

  • Academia, uniquely, is composed of many professionals whose love of knowledge and intellectual inquiry is greater than their desire for money. That's not to say that I don't think the same algorithm could work just as well in a business like the music industry, where most of the stakeholders are in it for the money. But even if Pandora did successfully implement the algorithm, it would meet a lot of resistance from entrenched interests in the music industry, who make their money by finding and promoting and managing talent and would not be happy about a new system that threatened to make them irrelevant. In academia, by contrast, it's quite plausible that even the "entrenched interests" — the people who had become superstars under the old system — would see the new system's great potential for disseminating free knowledge, and would welcome it even if it gave scrappy new upstart academics a chance to dethrone them. Not everybody in academia loves knowledge more than they love their own prestige, but I know more people like that in academia than anywhere else.

  • In academia, even among people who do care primarily about their own prestige, many of them have tenure and guaranteed job security, a situation that does not exist in most other industries. This gives them the freedom to experiment with new models, such as submitting papers to upstart PeerJ journals. But more importantly for our purposes, it means they can announce that in their department's hiring and promotion decisions, they will count PeerJ-published papers as legitimate professional accomplishments, for the benefit of non-tenured faculty members who do have to worry about their resume.

  • Academics, particularly in maths and sciences, are more prone to the kind of thinking that would lead a person naturally in the direction of the kind of system that PeerJ embodies. First, think of a theoretical model (like the kind I described near the beginning of the article). This model predicts that, ideally, it should be possible to publish papers at very low cost with quick turnaround times, without sacrificing peer-review quality assurance. Now, try to approximate that model as closely as possible in the real world. (In most other industries that I've worked in, there's much more inertia around the existing way of doing things, and far less willingness to entertain any discussion about whether a theoretical model can show how we could accomplish the same thing with vastly less overhead.)

And that, in the end, is the real reason journal articles should be free. Not because the U.S. government is making it a condition for taxpayer-funded research, although that is a welcome development. But because there's no part of the process that should cost very much to begin with, if article authors and peer reviewers are already being paid by their employers. The last piece of the puzzle is that enough academics and faculty departments have to agree to confer "prestige" on articles published in open-access journals, equivalent to the level of prestige that they would accord for an article published in a traditional journal of the same quality. If they won't do that, then the old-guard journals will maintain their monopoly on conferring "prestige", and don't be surprised if journal prices keep growing to the point where even Harvard can't pay for them.

Comments:
  • The National Institutes of Health (NIH) has required this for some time now. Interestingly enough, once NIH made this mandatory, the for-profit journals found ways to comply on a per-article basis so that academics would still publish with them.

    The important thing to consider about all this, though, is that the for-profit journals still get more readers than the open access ones. I am one of many who wish that this were not the case, but it simply is. Hence, if you want your work to be read by the largest possible number of readers, and become incorporated into your field of work, you want to get it into the larger, more prestigious - and more expensive - journals.

    That said, some of the open access journals - PLoS ONE being a great example - are catching up quickly and drawing lots of readers and with them lots of citations.

    The only problem left is that the open access journals cost about as much for authors to publish in as do the for-profit journals. I had a paper in PLoS ONE recently and we paid somewhere around $1,400 to publish. By comparison, the journal a lot of our "higher impact" work goes to costs around $1,500, and even Nature is in the same ballpark (not that we publish in Nature). So if the open access journals, with their lower impact scores, can't attract authors with lower publishing costs, they'll need to do it with promises of good exposure.
  • by Anonymous Coward on Friday March 01, 2013 @01:59PM (#43047063)

    "It ensures data isn't faked or fraudulent."

    NO. That is absolutely not the point of peer review.

    Peer review is designed to ensure that there are no methodological or logical flaws in the project. Essentially, peer review assumes that the data is accurate, but makes sure that the conclusions derived from the data are reasonable. It will also check to make sure that the methodology used to collect the data is sound.

    There are almost zero checks for faked or fraudulent data. Faked data will only be found immediately if the faker did something very stupid.

    Where science will find faking is in follow-up studies. Eventually, people will try to reproduce the same result, or at least a similar result. If they fail to reproduce the claimed data, then there is an issue and the fake might eventually be found. But this takes quite a bit of time, effort, and money, and therefore isn't part of the peer review process.

  • Missing: hierarchy (Score:5, Informative)

    by spasm ( 79260 ) on Friday March 01, 2013 @02:21PM (#43047293) Homepage

    The detail you're missing is that academic journals have a hierarchy. The top-ranked journals in a field get far, far more submissions than mid- or lower-ranked journals. So even though the peer-review process might be identical (to the point where, in small fields such as mine, I regularly review for both top-ranked and mid-ranked journals, as do all my colleagues - i.e., it's even the same pool of reviewers), the higher-ranked journals will end up publishing the more groundbreaking research, because they cream off the best of their submissions (and once your article gets rejected at the top journal you resubmit it to a lesser journal). As a reader, you use the contents pages of the high-ranking journals to work out what's currently considered cutting edge in your field, and the mid-ranking journals to see all the necessary 'filling in the gaps' research. As an author, you want to be published in the top-ranking journal because a) it's more likely to be seen and read by colleagues in your field; b) your colleagues pay more attention to your work generally if you're consistently publishing in top-ranking journals; and c) tenure/promotions committees give more 'weight' to articles published in higher-ranking journals. I've literally seen publication requirements for tenure at some US universities that look like "A minimum of 3 articles published in the following list of top-ranked journals or a minimum of 5 articles published in the following list of lesser-ranked journals".

    So in short, even though I (like everyone I know) would prefer to publish in open access journals, simply on ethical grounds, most of us still submit a lot of our work (and particularly our best work) to journals which have been around for longer than the open access movement, just because they remain at the top of the hierarchy. The good news is this isn't a static situation - journals can and do move up and down the hierarchy, and some of the open access journals (including some of the PLOS journals you mentioned) are rising quite rapidly in the impact rankings. The other major point is that a lot of the key journals are actually the property of various societies or academic organizations which simply contract with for-profit publication companies to handle all the messy bits (e.g. 'Addiction', the leading journal in my field, is the journal of the Society for the Study of Addiction, not simply a journal owned by Wiley, which publishes it). A lot of these contracts are long-term (25 years or more), but as they expire you might see a lot of key journals becoming open access, simply because the sponsoring organization decides to switch to an open access model now that it can, and because it has a philosophic interest in seeing its journal be as accessible as possible.

  • by caesar-auf-nihil ( 513828 ) on Friday March 01, 2013 @02:24PM (#43047329)

    As someone who is on the editorial board for 3 journals, reviews about 3-4 papers a week, and has over 50 peer-reviewed publications to his name while not being a faculty member (I'm a contract researcher), let me tell you about some costs you're missing in your assumptions.
    1) You're correct that the peer review process is provided free by scientists like myself, and it is our duty to provide this review. However, I'll spend 1-3 hours on a paper reviewing for content. What I'm not doing is copyediting. You're assuming that the papers submitted are in good shape when they arrive, and I would say that out of the hundreds of papers I've reviewed over the years, only rarely have I found one polished and ready to go. Almost all of them have formatting errors, typos, and grammatical errors. The worst ones sometimes are those where the author is not a native English speaker. They could have absolutely fantastic results, but the spelling and grammar are so bad you can't exactly figure out if they've discovered something novel or if their results are totally bogus. You need to pay for someone (or multiple someones) to clean up, copyedit, and format each paper.
    2) Electronic review system. I'm not seeing how your model pays for this. Someone has to pay to host, maintain, and power those servers - they don't set up themselves. That is a cost that can be divided per paper submitted to the journal - but then onto #3.
    3) Many of the open-access journals make the author pay a fee once the article is accepted by the journal, thus paying for items #1 and #2, but with budgets being cut you're asking the author to sacrifice even more of his small budget (which in my case pays some of my salary). So who pays in the end is always going to be a sticking point.
    4) Not all peer review is fast. You're assuming all scientists are ideal and get right on the paper as soon as they get it. I've had papers that came back in a week, and others that took 9 months (reviewer #3 sat on it and the editor couldn't get them to follow up after they had accepted the invitation to review). So you need to pay for some infrastructure to either pull the paper back from the offending reviewer or send automatic reminders and follow-ups.

    I personally like the existing system as is - it works well for me and I can usually rest assured that the content which does finally get published is polished and ready to use. But I'll agree that the journal costs are not sustainable. What I'd rather see is that after 5-10 years, any federally funded research automatically becomes public domain. That way the journal publishers make the funds they need to sustain the quality of the journals (and I'm talking print quality here only) and the system continues to run smoothly, plus the public gets to build off of the results that we as taxpayers paid for.

  • agreed ... (Score:5, Informative)

    by oneiros27 ( 46144 ) on Friday March 01, 2013 @02:25PM (#43047339) Homepage

    He has no idea what he's talking about, as he only sees the problems at the surface. [xkcd.com]

    But there are some folks who have given better suggestions and who are actually involved in the publication process. Take, for instance, Jason Priem and Brad Hemminger's article from last year, "Decoupling the Scholarly Journal [doi.org]" (which, note, actually *was* peer reviewed, unlike someone using Slashdot as an editorial soapbox).

    For those not familiar with the authors, Priem is one of the people behind the Altmetrics Manifesto [altmetrics.org], which argues for ways to measure the value of scientific articles other than h-index and impact factor. Unfortunately, a lot of tenure & promotion committees look at those as being their all-important measures.

    There *are* folks working on the issue ... I'm involved with it from the side of data citation [virtualsolar.org]. Some of the societies care ... I know AAS (one of the societies I'm a member of) published a statement that they automatically open access to anything 12 months old, and have for years.

    But we've now got it to where the publishers are paying the societies for the right to publish their journals ... and with societies losing members due to the recession, a few of 'em took the bait. It's going to take some time to figure out what the best models and infrastructure are for each discipline, who's going to pay for it, and for all of the existing contracts to run out.

  • by ceoyoyo ( 59147 ) on Friday March 01, 2013 @02:45PM (#43047589)

    There's also archiving. Someone has to keep those papers available so the scientific record stays intact. Many of the existing journals have also done a good job of scanning old papers and making them available.

  • Re:Why tenure? (Score:4, Informative)

    by alexander_686 ( 957440 ) on Friday March 01, 2013 @04:27PM (#43048737)

    Two factors which are specific to research institutions:

    Associate Professor / Post-Grad / Grad Student = indentured servant / long hours / low pay. Routinely ranked as highly stressful. Full Prof means labs staffed by said indentured servants. Routinely ranked as highly rewarding for the time/money.

    Up or Out: Most research institutions have a 7-year limit – either make full prof by that point or start searching for your next job in a new city.
