Hoax-Proofing the Open Access Journals
Harvard biologist John Bohannon wrote about his experiment in an article published by Science Magazine. He submitted his deliberately bogus paper to 304 open-access publishers, including 183 that were listed in the Directory of Open Access Journals (DOAJ), which Bohannon calls the "Who's Who of credible open-access journals", and whose quality is supposedly vetted by the DOAJ staff.
Of the 304 open-access journals targeted by the sting, 60% published the paper. I think this mainly shows that the average quality of open-access journals may always be low, but that's not surprising, since anyone in the world can set up an "open access journal". That shouldn't be relevant to the reputation of the best open-access journals. If the best open-access journals acquire a reputation for high standards and proper peer review, that will attract high-quality papers, whose publication will reinforce the reputation of the journal, which enables it to confer prestige on the papers it publishes, which in turn will continue to attract high-quality papers. The existence of other open-access journals with crummy standards should be irrelevant.
What's more disturbing is that of the 183 journals listed in the Directory of Open Access Journals, 45% published the paper -- which, according to Bohannon's article, surprised and disappointed the DOAJ founders. But perhaps if you're maintaining a database of thousands of allegedly reputable open-access journals, there's no way to make sure that they're all telling the truth about their standards and practices. At a quick glance, all you can really say is that they would be good-quality journals if they're telling the truth about how they operate, but it's hard to tell from the outside whether they're being honest.
So perhaps a different solution is that we don't really need a huge number of good open-access journals. Rather, in each field, you could get by with a small number of "super-journals" which have a lot of reviewers on file, and which publish a high number of papers but apply uniformly high standards across all of them.
Consider: you have two journals, A and B. Each has its own non-overlapping database of 20 reviewers. When either journal receives a paper, its standard practice is to send the paper to 3 randomly chosen reviewers from its database. Each journal receives 10 submissions per month.
Now combine A and B into one single journal which has 40 reviewers and gets 20 submissions per month, and still sends out each submitted paper to three randomly chosen reviewers. The total amount of work performed by the reviewers doesn't change. But now, if you're auditing the quality of a journal according to its adherence to its own practices, you only have to audit one journal instead of two. By the same logic there's no reason in principle that any number of journals in one field couldn't be subsumed into a few behemoths, which apply uniform standards across all their papers.
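The workload arithmetic above can be checked with a quick sketch (the journal sizes and submission rates are the hypothetical numbers from the comment, not real data):

```python
# Each submitted paper goes to 3 randomly chosen reviewers.
REVIEWERS_PER_PAPER = 3

def monthly_reviews(submissions_per_month):
    """Total review assignments a journal generates each month."""
    return submissions_per_month * REVIEWERS_PER_PAPER

# Two separate journals, A and B: 10 submissions/month each.
separate = monthly_reviews(10) + monthly_reviews(10)

# Merged journal: 20 submissions/month, same per-paper review count.
merged = monthly_reviews(20)

assert separate == merged == 60  # total reviewer effort is unchanged

# Per-reviewer load is also unchanged: 1.5 reviews/month either way.
per_reviewer_before = monthly_reviews(10) / 20
per_reviewer_after = monthly_reviews(20) / 40
assert per_reviewer_before == per_reviewer_after == 1.5
```

The only thing the merge changes is the number of entities whose procedures need auditing.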
You could do this without waiting for the traditional system to be dismantled. Somebody in the field just assembles a list of people to be peer reviewers for the "virtual super-journal". That list is public, so that anybody can audit it and see that it consists of people with a credible reputation in their field. Anyone who pays the (nominal) fee can submit a paper to the VSJ, which sends the paper to a random selection of n reviewers from that list. If the paper "passes" the test, then it gets the stamp of approval of the VSJ, which says, "This paper was judged to be good by a majority of a random sample of reviewers on our list, and you can see from this list that the quality of our reviewers is pretty good."
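The VSJ mechanism described above reduces to a small amount of logic; here is a minimal sketch, where the function name `vsj_review`, the panel size `n`, and the reviewers-as-callables representation are all illustrative assumptions, not anyone's actual system:

```python
import random

def vsj_review(paper, reviewer_pool, n=3):
    """Send the paper to n randomly chosen reviewers from the public
    list; grant the stamp of approval on a strict-majority verdict."""
    panel = random.sample(reviewer_pool, n)
    votes = [reviewer(paper) for reviewer in panel]  # each returns True/False
    return sum(votes) > n / 2

# Hypothetical reviewers: callables that judge a manuscript.
pool = [lambda p: True, lambda p: True, lambda p: False, lambda p: True]
print(vsj_review("some-manuscript", pool))  # → True (only 1 of 4 dissents)
```

Because the reviewer list is public, anyone can audit its membership; the only private step is which n reviewers a given paper was routed to.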
And suppose someone wants to publish their paper in some other journal XYZ, and they also want to publish it in the VSJ just to get a certification of its quality, but journal XYZ doesn't allow them to simultaneously submit it to another journal for publication? In that case, you can still submit your paper to get the stamp of approval from the VSJ -- just pay the normal reviewing fee, and if it passes the VSJ's review process, they can list the paper on their website, saying, "This paper was judged to be good by a majority of our reviewers. We can't actually publish the paper here, because some other journal XYZ has exclusive publication rights, but you can view the paper at this link in this other journal." You still have the self-reinforcing cycle where the VSJ's stamp of approval maintains high standards, which attracts high quality papers, which reinforce the reputation of the VSJ's stamp of approval. There's no part of that cycle that requires the VSJ to actually "publish" the paper itself.
And people could subscribe to the VSJ's "stamp of approval" feed the way they subscribe to any other publication -- the VSJ can send out the papers themselves that they have the right to publish, or links to papers in other journals, saying, "This paper got our stamp of approval, and follow the link to read it here."
You could even use this process to do a "hit job" on someone else's paper that got published in another journal, but which you think is too low-quality to have been published. You can submit it to the VSJ and if the VSJ rejects it, you can ask them to list it as a paper that failed their review process. (Whether or not the VSJ would give you the option of doing this, may depend on their policies. It "sounds mean", yes, but academics are supposed to keep each other honest. I've never heard of a traditional journal doing that -- calling out a paper published somewhere else and saying, "This sucked, we never would have published it.")
There should probably be multiple open-access journals (or Virtual Super-Journals) within each field, so that the competition between them keeps them honest. But there's no reason to have such a huge number of them that the Directory of Open Access Journals can't keep track of what they're doing.
This seems overly complex. (Score:5, Informative)
Surely the solution is to have people who understand the papers actually reading them. And if nobody among you understands them then you don't accept them.
And if you don't do that, you don't really have an academic "journal", just a blog.
Re: (Score:2)
Oh, I don't know, you could perhaps use the hundreds of euros you charge the article authors?
Re: (Score:2)
It's not about how to run a journal, it's about how non-experts distinguish between properly run journals and shitty journals just trying to scam some money.
If I am buying a product, I distinguish the good products from the bad by reading the user reviews on Amazon. Why not just have user reviews for journals? Even better would be to have reviews and ratings for individual articles. A reader could select articles to read based on either all ratings, or only on ratings from reviewers using their real name and affiliation, to ensure the process isn't being gamed. Just like Amazon reviews, the reviews could point out flaws in the article, or provide useful feedback for the author.
Re: (Score:2)
It's not about how to run a journal, it's about how non-experts distinguish between properly run journals and shitty journals just trying to scam some money.
If I am buying a product, I distinguish the good products from the bad by reading the user reviews on Amazon. Why not just have user reviews for journals? Even better would be to have reviews and ratings for individual articles. A reader could select articles to read based on either all ratings, or only on ratings from reviewers using their real name and affiliation, to ensure the process isn't being gamed. Just like Amazon reviews, the reviews could point out flaws in the article, or provide useful feedback for the author.
While I see your point, that's pretty much exactly what the point of peer review is. The difference with peer review is that you're supposed to be an expert in order to review the paper. In your scenario, what you'll end up with is a group of people claiming to be experts doing 90% of the reviewing. The whole point of a peer reviewed journal is that somebody is verifying (to some degree at least) that the reviewer is actually an expert on the topic. This is very different from Amazon reviews, where the
Re: (Score:2)
While I see your point, that's pretty much exactly what the point of peer review is.
No it isn't. When I read a journal, I cannot see what the peer reviewer wrote. The peer reviews are not published with the article. If one of the peer reviewers expressed some doubts, or pointed out some problems with the methodology, that information is lost when the article is published. Furthermore, no one has the ability to add new reviews if an article is later shown to be irreproducible, or is later confirmed. The information in the journal is fixed and static.
Just look at slashdot comments on nuclear power...everybody sounds like an expert in nuclear physics and in politics. Would you want those commenters reviewing a paper on the topic of nuclear physics, or would you want somebody actually verifying that the reviewers are nuclear physicists?
I have learned far more about
Re: (Score:2)
The information in the journal is fixed and static.
That depends on the journal, e.g. this journal Source Code for Biology and Medicine [scfbm.org] offers the option to comment on articles, just like all the other Biomed central journals [biomedcentral.com], or the PLoS [plos.org] journals.
Re: (Score:2)
Why not just have user reviews for journals? Even better would be to have reviews and ratings for individual articles.
In a way that has already been done: impact factor.
http://en.wikipedia.org/wiki/Impact_factor
The number of times a journal article is cited by other articles/authors is a rough guide to how influential it is in the field, the same goes for the journals themselves. It's not foolproof, but not bad.
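The impact factor mentioned above is just a ratio of citations to citable items over a two-year window; a toy version (with made-up citation counts, purely for illustration) looks like:

```python
def impact_factor(citations_to_last_two_years, articles_last_two_years):
    """A journal's impact factor for year Y: citations received in Y
    to items it published in Y-1 and Y-2, divided by the number of
    citable items it published in those two years."""
    return citations_to_last_two_years / articles_last_two_years

# Hypothetical journal: 600 citations this year to the 200 articles
# it published over the previous two years.
print(impact_factor(600, 200))  # → 3.0
```

This is also why the metric is gameable: the ratio rises just as readily from review articles and self-citation as from influential research.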
Re: (Score:2, Informative)
Lest we forget, a large number of submissions from the paid journals had data that was not reproducible [wsj.com]
Re: (Score:2)
That doesn't make it fraudulent.
But it does make it wrong. The journals will usually only retract an article that is fraudulent, but not one that is wrong. So the article is still available to be read, but with no indication that it is wrong.
Re: (Score:2)
No retracting "wrong" research; it simply gets labeled as "replication failed".
Seriously?? What journal does that? Can you show me a journal that has attached a "replication failed" label to an already published article?
Re: (Score:2)
Lest we forget, a large number of submissions from the paid journals had data that was not reproducible [wsj.com]
I admit my eyes started to glaze over and I didn't finish reading TFS because it seemed like a lot of hand waving and busy work to no real effect, but isn't this proposal, well, a lot of hand waving and busy work to no real effect?
The only way I see to "hoax-proof" a journal is to require reproduction of the results during peer review.
But don't all serious fields have that already? Getting through the review process for publication is just the first step--not all published results are inducted into the canon
Re: (Score:2)
The only way I see to "hoax-proof" a journal is to require reproduction of the results during peer review.
But don't all serious fields have that already?
No, they do not. I have never heard of a reviewer trying to reproduce the results. I have reviewed plenty of papers. I will spend about 2-4 hours reviewing something that took the author months of work. All I do is read the paper, make a recommendation, write a few paragraphs of feedback, and email it back to the journal. That's it. This is an unpaid process. There is no way I am going to put my own work on hold for several months to repeat the experiment.
Re: (Score:2)
The only way I see to "hoax-proof" a journal is to require reproduction of the results during peer review.
But don't all serious fields have that already?
No, they do not. I have never heard of a reviewer trying to reproduce the results. I have reviewed plenty of papers. I will spend about 2-4 hours reviewing something that took the author months of work. All I do is read the paper, make a recommendation, write a few paragraphs of feedback, and email it back to the journal. That's it. This is an unpaid process. There is no way I am going to put my own work on hold for several months to repeat the experiment.
Well, I hope you do better with the papers you recommend than you did with my comment. If you continue to the sentence after the one you quote, you'll see I'm talking about what happens _after_ a paper is published--other researchers attempt to reproduce the results, either directly or indirectly.
Re: (Score:2)
Re: (Score:2)
Re: (Score:1)
That's why I'm recommending having a smaller number of journals with a larger output and a larger number of reviewers on call, because then you have a centralized point where the procedures can be enforced.
Why the hell not? (Score:4, Funny)
Re: (Score:2)
Re: (Score:2)
You read the summary? My eyes went on strike a quarter of the way through.
Re: (Score:2)
You read the summary? My eyes went on strike a quarter of the way through.
I wonder if this was one of those hoax submissions the author waves under our noses in his first paragraph. "Let's see if we can hoax those /. readers by telling them a story about hoax submissions."
Re:Harvard biologist commits Fraud better title. (Score:5, Insightful)
He didn't set subtle traps that somehow slipped past vigilant reviewers. He included deliberate, very basic, glaring errors that showed no meaningful peer review had ever taken place. It's more like being fleeced by a con artist with 'This is a Con!' tattooed on his forehead - if you're taken in, you really only have yourself to blame.
Re: (Score:2)
Thank you for setting this straight. I wish I had mod points...
Three blind mice. (Score:3)
The Journals all assume that the author is acting in good faith and believes his result. They are 'peer' reviewers, not Police.
Meaningful peer review demands an intelligent evaluation of the author's arguments and evidence and the clarity with which they are presented. Good faith does not imply good science. Belief does not imply good science. Neither faith nor belief implies good writing and sound editing.
Economics (Score:2)
Re: (Score:3)
Yep, a lot of these "journals" are the academic journal equivalent of diploma mills. They accept any piece of garbage you submit to them, then you get to add said piece of garbage to your resume/c.v., postdoc/grant/tenure application, etc.
Re: (Score:2)
Maybe these journals need to take a clue from your field.
Most colleges and even some high schools run student papers through software packages to detect plagiarism and structural problems. In fact, most careful students do the same thing before submitting papers.
You would think there would be a first layer of software detection that could throw up enough red flags to send the papers back without any wasted effort. Not JUST on plagiarism (which might be harder to check, since quoting other studies in a paper
Maybe, the "greedy" journals have a point (Score:3)
The established publications — often denounced as "greedy" for having the audacity of wanting to get paid — do add value, after all?
Next in the news: a private farm's crop beats the yield of a communal field.
Re:Maybe, the "greedy" journals have a point (Score:4, Informative)
Except those paid journals have also had serious hoaxes foisted on them. You have to go to really really really really big journals like science or nature before there's enough credibility to protect against fraud.
Re: (Score:3)
Well, credibility is probably the wrong word. It's Linus' "enough eyeballs" principle in paper form.
Re: (Score:2)
They seem to select for papers which will have a big impact in the general press rather than necessarily for good ones.
Well, at least Nature admits as much in their Mission Statement:
First, to serve scientists through prompt publication of significant advances in any branch of science, and to provide a forum for the reporting and discussion of news and issues concerning science. Second, to ensure that the results of science are rapidly disseminated to the public throughout the world, in a fashion that conveys their significance for knowledge, culture and daily life.
Even big discoveries in small fields with no significant tie-in to wider fields of science or daily life will not be accepted, which is fine. Their mission statement is fairly clear on that point. They are not focused, field-specific journals.
Re:Maybe, the "greedy" journals have a point (Score:5, Interesting)
Except those paid journals have also had serious hoaxes foisted on them. You have to go to really really really really big journals like science or nature before there's enough credibility to protect against fraud.
Actually, no journal is fully immune, no matter how prestigious. Worse than that, top-notch journals like Science and Nature require sensational stories, which makes them more likely to publish skillfully hyped-up reports than honest ones that acknowledge their own limitations. The best science is often found in mid-range journals that accept longish and seemingly boring manuscripts, with nothing swept under the rug.
Re: (Score:2)
Wouldn't this be the place to list good examples? I'm most curious... Thanks!
Re: (Score:2)
Wouldn't this be the place to list good examples? I'm most curious... Thanks!
Start here, http://en.wikipedia.org/wiki/Nature_(journal)#Controversies [wikipedia.org]
Re: (Score:2)
Re: (Score:2)
Except those paid journals have also had serious hoaxes foisted on them. You have to go to really really really really big journals like science or nature before there's enough credibility to protect against fraud.
Wait, you're saying Nature and Science don't get frauded?
Seems to me both of those Journals were taken in by Jan Hendrik Schön.
And paid journals also have turned down key papers, including Nature's total snub of Watson and Crick's 1953 paper on the structure of DNA.
Re: (Score:3)
AFAIK the protest against Elsevier was not about them wanting to get paid but about monopolies. Monopolies tend to want just a bit more than getting paid.
Re: (Score:2)
My recollection is, more than a single publisher was targeted by the "information wants to be free" folks...
Re: (Score:2)
Well, then, you don't recall very well some of the major issues for which Elsevier was targeted. Such as printing fake shill journals under the Elsevier label just for articles produced by Big Pharma PR, so that pharmaceutical companies could pass requirements for "peer reviewed" studies of their products with a citation to a "peer reviewed" study (in their privately purchased Elsevier journal). The Elsevier upper management are profiteering crooks, plain and simple, which is a pity, because they've bought
Re: (Score:2)
The fraud you are referring to is, indeed, reprehensible, but the publisher (and I remain convinced, other publishers were blamed too) was targeted simply for wanting profit. Indeed, you are doing just that right now:
"Profit" seems to be a dirty word for you...
Re: (Score:2)
Why yes, "profiteering" is a dirty word to me. Saying "we care less about the human lives potentially harmed by willfully publishing potentially fraudulent medical information in order to grab a few bucks" is among the most despicable things someone in their position would be able to do. Maybe you have lower standards for what constitutes crimes against humanity. Are you waiting for Elsevier's management to personally strangle your children before your eyes before passing judgment?
Re: (Score:2)
The boycott of Elsevier was primarily related to their "bundling" of journals---the act of forcing libraries to buy subscriptions to their low-impact, narrowly focused, but very expensive journals in order to have access to their high-impact, high-circulation journals. See http://www.nature.com/news/elsevier-boycott-gathers-pace-1.10010 [nature.com]
Think about this process.
Elsevier prints journals for which they receive their content FOR FREE from academic researchers, most of whom are funded by taxpayer money. They the
Re: (Score:1)
You mean we should all publish in flagship journals such as "Chaos, Solitons and Fractals" or "Australasian Journal of Bone and Joint Medicine". The Elsevier brand guarantees they're good journals!
Re: (Score:2)
The established publications -- often denounced as "greedy" for having the audacity of wanting to get paid -- do add value, after all?
We have no idea if they do or not, because the "study" deliberately omitted traditional journals. Congratulations, you got played for a sucker.
Discredit the scientists (Score:3)
Re: (Score:1)
It doesn't matter if it is an "exclusive" journal or one that is open access. If a scientist submits fake data to a publication, shouldn't the scientific community take the time to verify his results?
Yes, once it's published. That means, BS in a "reputable traditional" journal will cause that, BS in an unknown open access journal will just be ignored.
But that's the case for published papers. Submitted papers are just vetted by a couple reviewers. And they might get paid the same for open access or traditional: zilch. The difference: Traditional has better connections to experts in the field. Who will reject anything that is out of the ordinary (either flawed or just against standing assumptions.)
slashdot science journal (Score:1)
Slashdot engine should be used to maintain author and paper karma points, moderation and meta-moderation! Yeah?
Re: (Score:2)
Oh yes... A hivemind with knee-jerk reactions is great for science!
Re: (Score:2)
Okay. Look. We both said a lot of things that you're going to regret. But I think we can put our differences behind us. For science. You monster.
Re: (Score:2)
99% of those problems can be solved by blocking all IPs originating south of the Mason-Dixon line.
harvard Hack-ologist John Bohannon (Score:5, Informative)
Notice he only submitted his fake papers to open access journals. As a scientist, and especially as a biologist, he's perfectly aware of the importance of control groups. If he were honest, he would have submitted the same papers to closed, for-profit journals as well, even if it cost him money to do so.
Re: (Score:2)
Good point. And it wouldn't even have cost money, because for the journals I know of, you don't pay anything until the paper is accepted.
Re: (Score:3)
Re: (Score:2)
Re: (Score:2)
Bohannon's not addressing a problem with open access, since closed-access journals have printed junk articles in the past as well. He's addressing his confirmation bias by setting them up for a James O'Keefe-style hit piece, minus the fake prostitute.
Re: (Score:2)
Notice he only submitted his fake papers to open access journals. As a scientist, and especially as a biologist, he's perfectly aware of the importance of control groups.
The "control group" isn't necessary if the only question being asked is whether the open access journal would publish a paper that is utterly ridiculous, absolute nonsense.
You can call the experiment unfair, if you like.
But that is cold comfort for supporters of the open access model.
Re: (Score:2)
It is if you want it to be something more than a tabloid hit piece.
Re: (Score:2)
Notice he only submitted his fake papers to open access journals. As a scientist, and especially as a biologist, he's perfectly aware of the importance of control groups. If he were honest, he would have submitted the same papers to closed, for-profit journals as well, even if it cost him money to do so.
Yes, his study was not scientifically valid. Now his only option will be to publish his paper in an open access journal, instead of closed, for-profit journal* that only accepts scientifically valid papers.
*The summary was based on an article in Science that simply described his experiment, not a peer reviewed scientific paper in Science.
Wrong from the start (Score:5, Insightful)
you have fewer journals that have to be audited for procedural honesty
Taking this to its logical conclusion, a monopoly is the most honest organization, right?
Once one of these "fewer" journals has an established reputation, it can obscure its procedures and refuse to be audited, while it turns corrupt for profit. Since it's still a well-known journal (because who really has time to monitor the procedural audits, anyway?) it will still get the submissions and readers, and it will stay relevant for many years after "everybody knows" it's corrupt.
Traffic Light System (Score:3, Interesting)
Modern journals are all online. So, it would be easy to provide a traffic light system that indicates if a reference is found to be fraudulent or incorrect. In a correct paper, all references would have a green background. If one of those papers was found to be incorrect by a 3rd party, it would be flagged and its colour changed to amber. This would propagate to all papers that make a reference to that paper and the entire chain would become amber. This would force all authors to update their papers, or the author of the original flagged paper to correct their work. If a paper is just flat out wrong, discovered to be a fake, or fails to be updated after a period of time in amber, it would become red, which again would propagate to every paper that uses it as a reference.
This will keep the chain of dependencies clean throughout the entire scientific world and minimise the impact of improperly peer-reviewed work.
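The propagation the parent describes is a reachability walk over the citation graph: flag a paper, then flag everything whose reference chain reaches it. A minimal sketch (the paper IDs are invented, and the amber/red distinction is collapsed into a single "tainted" flag for brevity):

```python
from collections import defaultdict, deque

def flag_papers(cites, flagged):
    """Given cites[paper] = set of papers it references, return every
    paper whose reference chain reaches a flagged paper."""
    # Invert the graph: who cites each paper?
    cited_by = defaultdict(set)
    for paper, refs in cites.items():
        for ref in refs:
            cited_by[ref].add(paper)
    # Breadth-first walk upward from the flagged papers.
    tainted = set(flagged)
    queue = deque(flagged)
    while queue:
        bad = queue.popleft()
        for citer in cited_by[bad]:
            if citer not in tainted:
                tainted.add(citer)
                queue.append(citer)
    return tainted

# A cites B, B cites C, D stands alone; flagging C taints B and A.
cites = {"A": {"B"}, "B": {"C"}, "C": set(), "D": set()}
print(sorted(flag_papers(cites, {"C"})))  # → ['A', 'B', 'C']
```

As the reply below points out, a real system would also need to distinguish *why* a paper cites a flagged one, which this naive reachability walk cannot do.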
Re:Traffic Light System (Score:4, Interesting)
A little subtlety would be required. In a document I'm working on, a whole load of references are papers containing flat out wrong, erroneous information (honest scientific mistakes, not deliberate fraud, but disproven material nonetheless). In the text, I'm using these as examples of things that have gone wrong in the past history of the field. Would my paper, with its bibliography littered with "red" entries (that I put there because they were "red," in order to comment on the scientific process --- which involves making and correcting mistakes --- leading to the present-day state of the field) be tagged as "red"? What if I reference a paper in which some material has held up under later scrutiny, and other hasn't (basing my work on the more "correct" material)?
What about subscription journals? (Score:2)
This paper has already been extensively critiqued. To me the biggest problem is that he didn't include any subscription journals.
Many intentionally flawed or nonsense papers have been submitted -- and published! -- to reputable journals in the past.
This latest demonstration by Bohannan just shows that the peer review system needs improvement. It does not show whether Open Access journals are better or worse than subscription journals in terms of quality and reliability of content.
Re: (Score:2)
Yup. You could rephrase the conclusion as: "there are an awful lot of crappy open access journals out there". It doesn't say anything about a difference in that respect between OA and non-OA journals.
Do paid journals never get hoaxed? (Score:4, Insightful)
Is this information really meaningful without a similar test on the paid journals?
Re: (Score:1)
Workload! (Score:2)
Re: (Score:1)
Re: (Score:2)
wrong premise (Score:2)
Anybody who expects peer review to let through only correct papers doesn't understand peer review. Science editors, of all people, should understand that, given the many bogus and fraudulent papers they have published over the years.
In different words, there's nothing to fix here.
Write Only Science (Score:1)
The publication process has gone so far downhill it's basically not recognizable as science any more. This is driven by the university tenure process. Being a tenured professorship is a sweet job. The hours are short and flexible and the work is interesting and varied. Pay is less than industry, but once tenured the pay is guaranteed. Benefits are usually top-notch. That's an appealing package for anyone of reasonable intellect, middling ambition, and a desire for ironclad security. Not surprisingly,
If only StackOverflow made an Open Access Journal (Score:1)
Most open access journals are frauds (Score:2)
Ok, maybe "fraud" is a slightly strong term, but it's pretty close. There are thousands of "open access journals" created only to make money by sounding as if they were legitimate journals, getting people to send them articles, and then charging to publish them. I get spam from them on a daily basis. They aren't legitimate, they have no interest in quality, and they have no reason ever to reject an article.
Please don't confuse legitimate open access journals like PLoS with these scams.
Make peer review better (Score:1)
Disclosure: I'm the co-founder of Publons.com
Really good post. I think you've hit on the key issue, which is that peer review can be done better.
One problem is that peer review probably is done well in a lot of cases; we only hear about it when the system breaks down. From that perspective the most obvious solution is to start focusing on peer review and giving reviewers credit for the times where they do a good job. One way we're doing that is by assigning DOIs to post-publication reviews that the commun