Peer Review Highly Sensitive To Poor Refereeing 233
$RANDOMLUSER writes "A new study described at Physicsworld.com claims that a small percentage of shoddy or self-interested referees can have a drastic effect on published article quality. The research shows that article quality can drop as much as one standard deviation when just 10% of referees do not behave 'correctly.' At high levels of self-serving or random behavior, 'the peer-review system will not perform much better than by accepting papers by throwing (an unbiased) coin.' The model also includes calculations for 'friendship networks' (nepotism) between authors and reviewers. The original paper, by a pair of complex systems researchers, is available at arXiv.org. No word on when we can expect it to be peer reviewed."
This reminds me of something else (Score:3, Interesting)
I can't quite remember what it was, but I seem to remember seeing it everywhere. It was exactly like TFA, though. [wikipedia.org] Damn, what was that place called again?
Re: (Score:3, Interesting)
Wikipedia is completely different. There, you submit your work in whole and anonymous "referees" proceed to secularly mutate your effort into an intellectual monstrosity of its former self. Nothing is sacred. Some ignoramus actually removed all chemical equations from the Smelting article [wikipedia.org]. At least in peer review, referees simply make suggestions which you yourself implement.
Re:This reminds me of something else (Score:4, Funny)
Some ignoramus actually removed all chemical equations from the Smelting article.
At least they didn't replace the whole thing with "Who smelt it dealt it".
The climate skeptics will have a field day (Score:4, Interesting)
This is precisely what the global warming skeptics say is happening with the global warming alarmist community, i.e. scientists review each other's papers, in a 'co-operative' manner as it were.
I think I'll point some skeptics at this paper and then sit back with a bowl of popcorn and watch what happens.
Re: (Score:2)
They probably won't believe it's real, and will accuse you of trying to make them look foolish.
Re: (Score:3, Insightful)
The funny thing is, the skeptics suffer from the same problem.
I hope the moderates don't, otherwise we're borked.
Re:You mean whine when a POS paper is printed (Score:5, Insightful)
I agree with all that you said, but for heaven's sake, use full forms at least once.
For those as confused as I, the paper the parent refers to is Gerhard Gerlich and Ralf D. Tscheuschner, "Falsification of the atmospheric CO2 greenhouse effects within the frame of physics."
Contains such howlers as "There's no such thing as average temperature"... RC Wiki page [realclimate.org]
Re: (Score:2)
Lacking mod points, or even the ability to mod this thread at this point, I'll just say: thank you.
Re:You mean whine when a POS paper is printed (Score:5, Interesting)
Contains such howlers as "There's no such thing as average temperature"...
Why is that a howler? It seems like uncontroversial physics if by "no such thing" you mean it has no physical meaning.
If I take two bodies, one with temperature T1 and one with temperature T2, what is their average temperature? If you say (T1+T2)/2 you are mathematically correct, but thermodynamically incoherent.
There is in general no thermodynamically meaningful way of averaging temperatures, which is why we should be talking about atmospheric heat content, not temperature, and why ocean temperatures (which are rising) are by far the most plausible evidence for increasing heat content in the troposphere.
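To make that concrete, here is a minimal numerical sketch (the bodies, heat capacities, and temperatures are invented for illustration, assuming Python): when two bodies with unequal heat capacities equilibrate, the final temperature is the heat-capacity-weighted mean, not (T1+T2)/2.

# Two bodies with different heat capacities (arbitrary units)
C1, T1 = 1.0, 280.0   # a small, low-heat-capacity body
C2, T2 = 4.0, 300.0   # a larger, high-heat-capacity body

naive_avg = (T1 + T2) / 2.0               # 290.0, mathematically fine
T_eq = (C1 * T1 + C2 * T2) / (C1 + C2)    # 296.0, what thermal equilibration actually yields

print(naive_avg, T_eq)  # the two agree only when C1 == C2

The naive average carries no information about how much heat either body holds, which is the poster's point.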
Re: (Score:3, Informative)
Average temperature between two different (widely separated) points might be meaningless, but the average of a continuous measurement is definitely significant. Even spatially, average temperature has a physical meaning. For example, the average surface temperature of the Sun is about 5,800 K, though if you measure at various points, you may get more or less than that.
In any case, that's only one of the many "interesting" ideas in that paper...
Re: (Score:3, Insightful)
I think I'll point some skeptics at this paper
Let me know when you find some. I mostly meet deniers, with a deep ignorance of climatology or any other science and a deep conviction of a conspiracy.
If you locate some actual skeptics, people capable of analyzing the evidence, who have come to the opposite conclusion of the vast majority of actual climatologists, I'd love to hear from them.
Re: (Score:2)
If you locate some actual skeptics, people capable of analyzing the evidence, who have come to the opposite conclusion of the vast majority of actual climatologists, I'd love to hear from them.
What about people who aren't skeptics, but are damned tired of the whole thing being hijacked as a way to sell people on junk ideas?
All the good intention in the world won't do you any good if the 'fix' isn't practical.
Re:The climate skeptics will have a field day (Score:4, Insightful)
Well, peer-review the junk ideas out (aka, vote on the economic aspects). The science is pretty well settled, but the economics is not so clear (as if it ever really is).
Come up with better solutions, implement them if you can, support good solutions if you can't. The problem isn't going away by denying it because you don't like the currently proposed solutions.
That, we can discuss. "Global cooling in the 1970s" is just noise in the channel.
Re:The climate skeptics will have a field day (Score:4, Insightful)
The thing is that the "fix" is plain and simple, it's just rejected out of hand by the Envirowhackos because it doesn't involve government running our lives, a reduction in the standard of living, and allows for more growth and prosperity: Nuclear power.
Now comes all the posts about peak uranium that ignore technology like breeders and thorium.
Then comes everyone who thinks all reactors are built like Chernobyl.
Next are high level waste folks who don't understand what reprocessing does.
Last, but not least, are all the people who equate a nuclear reactor with a nuclear bomb.
Re:The climate skeptics will have a field day (Score:4, Insightful)
Oh please. Let's face it, it's easy for the government to ignore environmental concerns; they've been doing that for years. The real barrier is the general public that's okay with nuclear power as long as the power plant isn't near their neighborhood, as long as trains carrying fuel or waste don't go anywhere near their house. They'd love them some cheap electricity, sure, but just build it near some other people.
Re:The climate skeptics will have a field day (Score:4, Informative)
Speaking as someone who's at least read up on this stuff (not an expert, but definitely an informed layman), such large-scale adoption of nuclear power comes with its own problems. For one, building such plants is going to be extremely costly, and probably can't be done in time to make a useful difference.
You talk about reprocessing, but even after that, you eventually end up with some radioactive waste products, to say nothing of radiation leakage into the environment.
Finally, I think it's just yet another "all our eggs in one (radioactive) basket" solution. I'd rather have a wide range of options, from renewables like wind, solar or geothermal, to, yes, nuclear power where that's appropriate.
It's difficult to comprehend why a place with ample local generation capability (say, solar power in the Thar desert in India) should go with an expensive nuclear power plant, as you "nuclear only" types keep proposing, when the alternative is cheaper and makes more efficient use of resources that are readily available (as opposed to resources mined from the ground a few thousand kilometers away on another continent).
Re: (Score:2)
There were quite a few, like Friis-Christensen [wikipedia.org], who actually raised real scientific questions, but most of those have since come to the conclusion that global warming is indeed caused (primarily) by human influence.
On the other hand, every Real Scientist is a skeptic; if someone claimed, whether in a scientific paper or otherwise, that global warming would cause the ice-caps to melt by Dec. 21, 2012 or something like that, you can bet that the entire scientific community would pretty much laugh them out of the room.
Re:The climate skeptics will have a field day (Score:4, Informative)
As usual, the strong caveat at the end of the article goes unnoticed:
But Tim Smith, senior publisher for New Journal of Physics at IOP Publishing, which also publishes physicsworld.com, feels that the study overlooks the role of journal editors. "Peer-review is certainly not flawless and alternatives to the current process will continue to be proposed. In relation to this study however, one shouldn't ignore the role played by journal editors and Boards in accounting for potential conflicts of interest, and preserving the integrity of the referee selection and decision-making processes."
IRL the reviewers are not chosen at random, which burns the straw men built by the summary, most of the article, and the skeptics.
Highly political subjects? (Score:4, Insightful)
"The system provides an opportunity for referees to try to avoid embarrassment for themselves, which is not the goal at all," he says.
So, if a reviewer sees a paper that has actual data and a conclusion that goes against the consensus of the scientific community, the reviewer may reject it for fear of appearing foolish? Or reject someone just because of their publicized personal beliefs?
Here's a hypothetical: a climate scientist who's an openly devout Christian finds data that casts doubt on human-caused global warming, and the paper gets rejected because someone's afraid of looking foolish.
That's the way I'm interpreting this study.
Re:Highly political subjects? (Score:5, Insightful)
Can anyone give me a good reason why the reviewers get information about the author in the first place? Granted, there are disciplines that are close-knit to the point that the reviewers would recognize the author based on their past work, but in most cases I would think not knowing who the author is would address at least some of the issues that they highlighted here. It's hard to obscure the rest of the review process, but limiting nepotism should be relatively simple.
Re:Highly political subjects? (Score:5, Informative)
It is done as you've guessed, but it's still often obvious who the author is. Don't forget that sometimes a bad review has nothing to do with knowing who the author is. If you come across a paper that's done almost exactly the same work as you have done, or criticises your work, you could choose to give it a false bad review to try to prevent it from being published. I've seen papers that have received three reviews, two that say it's good, and one that says it's nowhere near worthy of being published. You often question the outliers.
Re: (Score:2)
In Computer Science, top-tier conferences are higher quality than most journals. Admission is determined by a program committee, whose members are carefully selected by the program chair for what they'll bring to the table.
I've only served on one PC, but I can't imagine trying to serve on the program committee they describe in the paper, with 1/3 "rational" (i.e., self-serving) people and 1/3 "random" (i.e., can't tell a good paper from a bad). Of course you'd get essentially random results. But if that happe
Re:Highly political subjects? (Score:5, Interesting)
As a computer scientist, my impression is that the program committees really are pretty random, or at least based on some sort of preference other than a widely agreed "quality" standard. Try it sometime: resubmit a paper rejected from a top CS conference verbatim to another top CS conference. The correlation between the reviews is usually quite low, both in terms of the numerical scores, and especially in terms of what they liked / complained about.
Re: (Score:2)
Agreed. And don't forget that top conferences have a very low acceptance rate, so bad reviewers have a more damaging effect. If you accept only 15-20% of papers, then even the smallest bias is dangerous.
Re:Highly political subjects? (Score:4, Interesting)
The poor review assignment at large conferences contributes to that effect as well, I think. I almost always have at least one of three reviewers, and sometimes even two of three, give a noncommittal review along the lines of, "well this isn't really my area, but it seems pretty good". Those reviews basically are non-reviews, so the acceptance decision is then entirely up to the remaining one or two reviewers. So it often comes down to: did the one person who actually provided an opinion on your paper like it or not like it?
In my experience that's often pretty subjective, especially for conferences with tight length limits (standard in AI is six pages). If the reviewer personally found the paper to be on an interesting subject with an interesting approach that he/she felt should be investigated, almost any shortcomings can be excused, and the reviewer will conclude that "Overall, this paper provides a valuable contribution to an important ongoing discussion in this area." But if the reviewer doesn't like it, finds it boring, dislikes the approach, etc., it's easy to find something that had insufficient detail, didn't sufficiently distinguish from related work, didn't sufficiently motivate the problem or investigate/validate the applications, etc., etc., since you really can't fit that much in six pages.
Re: (Score:3, Informative)
It is done as you've guessed, but it's still often obvious who the author is. Don't forget that sometimes a bad review has nothing to do with knowing who the author is. If you come across a paper that's done almost exactly the same work as you have done, or criticises your work, you could choose to give it a false bad review to try to prevent it from being published. I've seen papers that have received three reviews, two that say it's good, and one that says it's nowhere near worthy of being published. You often question the outliers.
Whether the authors are revealed to the reviewers or not varies from journal to journal. All of the large handful of reviews that I've done had the author information presented to all of the reviewers; I've not reviewed for really big-name journals though (at least not yet). The reviewers' identities are not made known to the authors, though. It is often, however, rather easy to identify the reviewers, because my field is not that large, and personalities can shine right through unedited writing like reviews.
Re: (Score:3, Funny)
Wow, this gave me an idea! Researcher mimicry :)
1. Find a successful researcher R in your field
2. Find a journal/conference J in your field that anonymizes submitters
3. Make a language profile L(R) of researcher R
4. Make a paper P so that the profile L(P) is similar to L(R)
5. Select a subset of citations from R and cite them in P
6. Submit P to J
7. ???
8. Profit
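Joking aside, steps 3 and 4 describe a real authorship-attribution trick. A minimal sketch of one way it could look, using character n-gram profiles and cosine similarity (the texts and the choice of n are made up for illustration, assuming Python):

from collections import Counter
import math

def ngram_profile(text, n=3):
    # Character n-gram frequency profile, a crude "language profile" L(x)
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine_similarity(p, q):
    # Cosine similarity between two frequency profiles
    dot = sum(p[g] * q[g] for g in set(p) & set(q))
    norm = math.sqrt(sum(v * v for v in p.values())) * math.sqrt(sum(v * v for v in q.values()))
    return dot / norm if norm else 0.0

L_R = ngram_profile("text drawn from researcher R's published papers ...")
L_P = ngram_profile("draft of paper P, revised until it sounds like R ...")
print(cosine_similarity(L_R, L_P))  # step 4: keep revising P until this approaches 1.0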
Re: (Score:3, Insightful)
With such a small sample size, there's no such thing as an outlier. There is still selection bias and confirmation bias, though, as you so aptly demonstrate.
Re: (Score:2)
Malcolm Gladwell [wikisummaries.org] would agree with you.
The ironic part of this is that you need not look any further than human behavioral scientists to help solve this problem. It is also possible that the whole idea of anything human-based being "non-biased" is a fantasy made up to represent an ideal that will never happen. Humans are simply biased by their physiology and environment. End of story.
Re:Highly political subjects? (Score:5, Insightful)
Granted, there are disciplines that are close-knit to the point that the reviewers would recognize the author based on their past work
Pardon my naivete, but I don't think such fields should exist. They present an extreme hazard of groupthink and inbred rubber-stamping.
Any speciality should bend over backwards to maintain close ties with the surrounding fields of research, so that others will understand how it relates and will be better able to detect when bad practices are becoming standard. And it is vanishingly unlikely that this super sub-speciality will *never* stumble upon a problem isomorphic to a well-studied one in a distant field.
It's this "oh, this is a hard subfield, stay off my turf" mentality that causes things like ecologists *just now* starting to use adjacency-matrix eigenvectors (the idea behind PageRank) to identify critical species, despite the method having been known to mathematicians for 40 years.
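For readers who haven't met it, here's a minimal sketch of that eigenvector idea on a toy adjacency matrix (the species network is invented and NumPy is assumed; PageRank adds damping and link normalization, but the core is the same dominant eigenvector):

import numpy as np

# Toy 5-species interaction network: A[i, j] = 1 if species i and j interact
A = np.array([
    [0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0],
    [1, 1, 0, 1, 1],
    [0, 1, 1, 0, 1],
    [0, 0, 1, 1, 0],
], dtype=float)

# Power iteration converges to the dominant eigenvector; its entries rank
# how "central" each species is in the network.
v = np.ones(A.shape[0]) / A.shape[0]
for _ in range(100):
    v = A @ v
    v /= np.linalg.norm(v)

print(v)  # larger entries = more structurally important species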
Hey scientists: science is a group process. You're special, but you're not that special. Please build off of the existing work. Don't compartmentalize. Good science connects, and connects deeply. Yours should too.
Re: (Score:3, Funny)
Exactly. Look how hard it has been for that TimeCube guy to get published just because his reviewers were educated stupid.
Re: (Score:3, Interesting)
>>>a climate scientist who's an openly devout Christian finds data that casts doubt on human-caused global warming, and the paper gets rejected because someone's afraid of looking foolish.
Nothing that extreme. More like they would reject papers that claim "global warming caused by natural causes" and accept papers that say "global warming caused by man", in order to protect their own beliefs. A guy named Thomas Kuhn wrote about this very phenomenon (protecting the current paradigm, aka worldview) several decades ago.
Re: (Score:3, Interesting)
I remember hearing about a nutrition paper that was rejected from a medical journal for a reason along the lines of "That can't possibly be true." So the guy updated the paper with an explanation of the basic bodily functions involved and how they work, which shows exactly why it could happen, and it was still rejected. He submitted it to a different journal, where they basically said "This looks sound, we'll publish on the condition that you remove the explanation. Any doctor would know this already." The paper was published there.
Re: (Score:3, Informative)
Of course it is. But that doesn't mean that peer review is worthless.
Remember, a large enough (I'd say a majority, but I haven't actually done the numbers to claim that) number of people who get into science are doing it because they care passionately about their field. Eventually, the best of the breed floats to the top, and is distilled to give us things like PageRank and better safety in automobiles (see, I worked in a car analogy too!)...
Review content matters (Score:4, Informative)
When you're talking about scientific papers, a "bad apple" reviewer may be able to skew the record in terms of 1-10 scales, but reviewers also do a qualitative write-up of the material. That's really the only important part and if one or two people fall outside the line of general consensus, they'll just be ignored.
Re:Review content matters (Score:5, Interesting)
Regardless of all this though, sometimes you'll find out that only two of three reviewers responded, and at least one of those probably got one of their postdocs or even a PhD student to do the review. Some reviews will have empty parts where a reviewer was supposed to write a paragraph but couldn't be bothered, or because they didn't want to reveal the fact that they were totally unfamiliar with the subject matter. Getting a journal paper published is more hit and miss than you'd think. I used to think that a good paper with good ideas was enough, but it's not always the case.
Re: (Score:2)
We're not talking about the reviewers; we're talking about the referees, the guys who let papers into the journal so that they can be reviewed.
If the paper never gets published, how will reviewers ever see it?
Re: (Score:2)
Yeah, I was actually talking about the editors rejecting it without sending it off for review.
My bad.
Re: (Score:2)
There were similar ideas developed in the 1960's in an unrelated field, and the paper should include a comparison
I've been suggesting for a little while that papers submitted to journals should not include a related work section, and that it should be the job of the reviewer to write it. This has several benefits:
Will this change anything? (Score:2, Troll)
Since the scientific community is so very obsessed with peer review, will this study actually modify the standard procedure?
Of course, all I can think of is: gee, I wonder if this has had any impact on all the climate change studies that are constantly contradicting each other...
Re: (Score:3, Insightful)
There is an aphorism: Democracy as a form of government is riddled with problems. However, we have yet to invent anything better.
Same with peer review. It has its problems. However, we have yet to invent anything better.
Re: (Score:3, Funny)
Well, I thought I had developed a better system, but the thesis was shot down in peer review.
Re: (Score:3, Insightful)
Damn, you're an asshole.
Not to mention "autarkistic" research communities (Score:3, Interesting)
I mean scientists who publish among themselves, i.e. inside their narrow specialty, in their own journals, without checking whether the problem at hand has been solved elsewhere. This is more and more common as people get more specialized, and it can lead to very basic errors being propagated throughout the whole community, like rheologists believing in the existence of pure elongational flow (a trivial misunderstanding of tensor algebra). Since the peers reviewing the papers are members of the same community, those errors usually go unnoticed.
It's part of automating the process. (Score:5, Interesting)
Just this week, I was asked to peer review a paper in which I was mentioned in the Acknowledgments. The request was sent out automatically -- the journal has records of all their authors, and the keywords for this paper matched the keywords in my profile, so I was picked to review it.
I recused myself, but really I should never have been asked. If they're going to handle the peer review process automatically, the artificial intelligence that makes the decisions needs to be improved.
Re:It's part of automating the process. (Score:5, Insightful)
Except that I've heard of people deliberately adding people to acknowledgements to try to make sure they don't get those people as referees (and it hasn't worked)!
Re: (Score:3, Insightful)
If they're going to handle the peer review process automatically, the artificial intelligence that makes the decisions needs to be improved.
I don't think it's a massive problem; it just relies on people being ethical about declining to review something if they have an interest (in the legal sense) in the work. People have a lot to lose if they try to cheat the system and get caught.
The Social Text Affair (Score:3, Interesting)
The classic anecdote demeaning peer-reviewed journals is The Social Text Affair [nyu.edu], where a prominent peer-reviewed journal published with enthusiasm the article "Transgressing the Boundaries: Toward a Transformative Hermeneutics of Quantum Gravity", only to be informed that it was, in fact, deliberately written gibberish submitted as a hoax.
Re: (Score:2)
Peer review only works in fields where the "peers" are not gibbering idiots ;-)
Re:The Social Text Affair (Score:5, Informative)
Social Text is emphatically not prominent, nor was it peer-reviewed at the time of the affair.
http://en.wikipedia.org/wiki/Sokal_affair [wikipedia.org]
While the Sokal affair is interesting, it has nothing to do with the matter at hand.
Re: (Score:2)
Much more relevant would be the SciGen papers that have actually gotten through peer review.
Important limitations in the Model... (Score:3, Interesting)
There are a couple of significant and important limitations in the model:
a) It assumes only two reviews per paper, that the reviews are purely boolean, that reviewer types are pure, and that reviewers are randomly selected (when two of the classes of reviewers, 'misanthropes' (always reject) and 'altruists' (always accept), are specifically selected against by editors and PC chairs based on reputation).
b) It does not consider the cases (such as conferences) where there is a program committee meeting and the papers are not just considered on their own, but are put through a relative ranking process.
Bit of an arbitrary model (Score:4, Informative)
Each reviewer produces a binary recommendation within the same timestep: ’accept’ or ’reject’. If a paper gets 2 ’accept’ it is accepted, if it gets 2 ’reject’, it is rejected, if there is a tie (1 ’accept’ and 1 ’reject’) it gets accepted with a probability of 0.5.
If a single 'bad' reviewer (i.e. one that gives the 'wrong' answer, as determined by the 'correct' method of reviewing mentioned as a control in the paper) can give a paper a 50:50 chance of acceptance or rejection, it doesn't seem too surprising to me that a relatively small number of them could make the process '[not] much better than by accepting papers by throwing (an unbiased) coin': in their model, in the case of a reviewer disagreement, that's exactly what is happening!
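For the curious, a minimal simulation of the quoted decision rule (the reviewer behaviors and the 10% fraction below are simplified stand-ins, not the paper's exact classes or parameters, assuming Python):

import random

def review(reviewer, paper_is_good):
    # "correct" reviewers report the paper's true quality,
    # "random" reviewers flip a coin, anything else always rejects.
    if reviewer == "correct":
        return paper_is_good
    if reviewer == "random":
        return random.random() < 0.5
    return False

def decide(paper_is_good, pool):
    votes = sum(review(random.choice(pool), paper_is_good) for _ in range(2))
    if votes == 2:
        return True
    if votes == 0:
        return False
    return random.random() < 0.5  # 1-1 tie: accepted with probability 0.5

pool = ["correct"] * 9 + ["random"]  # 10% misbehaving referees
trials = 10000
print(sum(decide(True, pool) for _ in range(trials)) / trials,   # good papers accepted
      sum(decide(False, pool) for _ in range(trials)) / trials)  # bad papers accepted

As the misbehaving fraction grows, the two acceptance rates converge toward 0.5, which is the coin-flip behavior the summary quotes.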
This is not news to scientists (Score:3, Insightful)
The peer review system is great for regulation, standardization and unification. However, all the scientists that I've worked with, researched with, or spoken with much about this topic admit that the system can be annoyingly flawed by groupthink and conformity. One bad apple ruins the bunch, right?
The good news? While this part of the scientific community is not immune to problems, the slack is picked up elsewhere: As long as methods, data and results are transparent, reproducible and published we can actually have quality science.
I often speak to people about scientific research and they're shocked that it's not full proof. This is kind of like buying software (perhaps even a Microsoft product) and finding that it's not perfect. Science is done by committee and progresses slowly. "If we knew what we were doing, it wouldn't be called research" ~Albert Einstein
Then again, I'm an idiot....
Re: (Score:2)
I often speak to people about scientific research and they're shocked that it's not full proof.
I don't quite understand what you mean by "full proof". You mean they don't receive the full paper, or that the paper isn't complete, or it isn't fully proofread, or what?
Then again, I'm an idiot....
If you're a researcher it's highly unlikely that you're an idiot. Even if your sig is correct.
Re: (Score:3, Informative)
Yeap, old news.
http://www.genomeweb.com/peer-review-broken [genomeweb.com]
http://www.slate.com/id/2116244/ [slate.com]
All it takes is one bad reviewer that doesn't know what he's talking about, or only skimmed over the paper, to get a paper rejected.
Doing something unprecedented (Score:5, Interesting)
I actually read the comments on TFA, and down in the thread there's a particularly interesting one: [physicsworld.com]
This study overlooks not only the role of the editor, but also the process by which the authors are able to answer the referees' objections. When the referees are competent, this leads to better papers through useful suggestions. On the other hand, when they aren't, the authors, overcoming their exasperation, can easily brush the objections away, and the paper eventually gets through. Also, when the case is particularly contentious, there's still the option of calling for an adjudicator. In summary, the peer-review process is far more complex than this simulation might suggest. On the dark side, I've also noticed that referees are sometimes reluctant to object to papers from certain renowned authors. The human factor is hard to remove. I guess many people will agree that there's a need to look for better approval systems, especially today, when there's an explosion of submissions. However, we must also acknowledge that the present system has served its purpose of maintaining a certain quality.
There's actually a reasonably intelligent discussion going on in there...
Re: (Score:2)
There's actually a reasonably intelligent discussion going on in there...
I'm not sure I'd be comfortable on that site then. I prefer the casual trolling, mis-modding and general idiocy right here on /.
Is this news to scientists? (Score:4, Insightful)
Peer review only works if the reviewers can be trusted and don't form a clique to get their work in and keep other people out. Surely anyone with even basic knowledge of human psychology would understand this?
Re: (Score:2)
Many scientists don't spend much time thinking about the peer review process itself. Maybe they question it when a good paper of theirs gets turned down, or when a bad paper they disagree with gets published, but they don't spend a lot of time thinking about what could replace it.
Indeed, what COULD replace it? No review at all? A system where you get to strike one reviewer's comments?
Anyway, for most scientists, it's just something that exists and you just deal with it. I certainly have no intention of trying to change it.
Peer review RIP (Score:2)
"a good paper of theirs gets turned down, or when a bad paper they disagree with gets published"
This basically says it all. No one ever submits a bad paper, in their own opinion, and bad papers are limited to the ones that they disagree with.
In case you missed it, peer review is dead. Get over it. Don't be sad, you can still have your journals and pomp and circumstance. You can drag out the rotted corpse and parade around with your scientist friends. It just carries no weight with the public. We've
Re: (Score:3, Insightful)
In other words, when the prevailing scientific opinion goes against the prejudices of the "public" (read: bloggers with delusions of grandeur), they will loudly claim that "peer review doesn't work", without understanding what it entails, and what it's for.
A reviewer hasn't necessarily checked the paper for accuracy, or tried to reproduce the results. What they are supposed to do is ensure that the paper isn't unmitigated trash, and that the person who wrote it at least understands something about the subject.
Pity the internet is full of cranks (Score:3, Insightful)
That said, peer review provides substantially the same benefit as those "shoplifters will be prosecuted" signs you see in department stores.
Shoplifters are very seldom if ever actually prosecuted, but the threat, even the vaguest menace, of public scrutiny has an impact on behavior. I'm not talking about scientific fraud (which peer review will seldom catch), but about quality of reasoning, doing the needed controls, etc. We may have a system that rewards good research little better than an unbiased coin, but the *perception* that it works, or that it might work for you, motivates people to do the work needed to survive peer review.
The issue applies to more than papers (Score:3, Interesting)
The government uses peer review to evaluate proposals for science and engineering grants. The same issues probably apply to those evaluations.
I have experienced a situation in which one reviewer recommended turning down a grant for reasons that could be considered as biased, although the bias was groupthink rather than individual. The other reviewers were enthusiastic about funding the grant and regarded it as a potential game-changer. It didn't get funded. A few years later the game-changing nature of the technology was recognized, but it was too late for the original applicant.
They are just as human (Score:2)
the problems of nepotism and tribalism are everywhere.
from internet message boards and professional office environments to national government and international politics.
here's a paradox for you,
someone could do a study on how to eliminate nepotism and tribalism.
then they can put it up for peer review.
I call shenanigans on this one (Score:5, Informative)
I have several publications that were significantly improved through the peer review process. When I review papers, my goal is not to shoot down the work; rather, I try to find ways to improve it. Of course there are 'bad' reviewers, who think that reviewing a paper means shredding it to pieces. These are actually easy to spot, because they rarely suggest anything useful, and they are often ignored by the journal editors. Speaking of which, journal editors are yet another part of the peer review process that is missing from their model.
Re:Just like the Slashdot moderation system (Score:4, Insightful)
Because of that alone, no time would be saved for the scientist. A Google-style moderation system would have two problems. First, it wouldn't save any time, because you still have to have some person doing the reviewing. Second, you have to have someone qualified doing the reviewing whom you can trust to some extent to review in confidence; otherwise, if there are certain major problems with the paper but a few good ideas, those ideas can be "stolen" by others, which may become a problem.
Re: (Score:3, Informative)
A few things:
1. Where is this money to hire 10,000 technical writers going to come from when, excluding tenured faculty, all academic scientists are threatened with losing their jobs due to lack of funds at least one
Re: (Score:2)
Someone else might just pick up his idea, fix the problems, and publish it himself.
I don't see the problem. The first publication will always have the earliest time stamp. If people really try to fudge time-stamps, we could check the time-stamps in Google's cache and see who is lying, or in the worst case resort to some formal time-stamping system for publications. If scientist A presents an idea, but it contains a technical flaw, and scientist B fixes it, then everything is working perfectly. Let the scientific community decide which of them deserves acclaim. Scientist A is still free to
Re:just like /.? (Score:5, Insightful)
I also think that the community goes a long way towards making the system work well. Sure there will always be people who abuse the system and moderate posts with which they disagree as flamebait, etc. but the community as a whole does a good job of promoting interesting lines of conversation and for any given topic there are probably a few people in the community who specialize in that area and can provide some excellent commentary.
It's not perfect, but it's probably one of the best systems in actual practice that's currently being used.
Re:just like /.? (Score:5, Insightful)
I think it comes down to two things.
First, the relative rarity of mod points encourages people to take it seriously. It also encourages the most active, most interesting posters to give up posting once in a while to moderate. Most other sites use a mod system that allows so many votes per article but still don't allow you to post and mod the same article. That means that the most frequent posters will seldom mod and that there can be people who only ever mod articles without ever commenting on them.
Second, attaching a reason to the mods encourages people to actually think about why they are modding the way that they are. As many people say, there is no "-1 I Disagree" mod, in order to mod someone down you have to be saying that they are actively trying to derail the conversation. Of course, lots of modders will ignore that and mod however they want, but I think that it does make at least some people stop and think before they accuse someone else of being flamebait or a troll.
Re:just like /.? (Score:5, Interesting)
I've long thought there should be a "-1, Disagree" option in the drop-down box that takes a mod point but has no effect.
Re:just like /.? (Score:4, Interesting)
But the proper way to communicate disagreement should be to respond with a reasoned counterargument, the goal being to show the justification for your disagreement and allow future readers the benefit of seeing your reasoning.
A "-1 Disagree" mod is anonymous censorship at it's worst. It adds nothing to the discourse. If all that you can add to the discussion is "I Disagree", then you can't add anything of merit to the discussion.
Re: (Score:2)
That's the point of "has no effect": the disagree mod wouldn't actually reduce a comment's score, it would just waste a mod point of the person who tried to use it, thereby removing mod points from people who shouldn't have them.
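A minimal sketch of the mechanism being proposed (the function, reason names, and score range are invented for illustration, not Slashdot's actual code, assuming Python):

def moderate(score, mod_points, reason):
    # Spend one mod point; "Disagree" consumes the point without moving the score.
    if mod_points < 1:
        return score, mod_points
    mod_points -= 1
    if reason == "Disagree":
        return score, mod_points  # point burned, comment untouched
    delta = 1 if reason in ("Insightful", "Interesting", "Informative", "Funny") else -1
    return max(-1, min(5, score + delta)), mod_points

print(moderate(3, 5, "Disagree"))    # (3, 4): score unchanged, one point gone
print(moderate(3, 4, "Insightful"))  # (4, 3)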
Re: (Score:3, Interesting)
Sometimes, after you've already used a couple of mod points in a thread, you come across something that's +5 Insightful, yet you know it is maliciously incorrect. People are replying to it as if the person were correct (it's +5 Insightful, so it must be right).
What do you do? Some people get mod points more than others... if someone only gets five points once every other month or so, at best, are they going to throw them all away by commenting in the thread? What chance does their comment even have of getting read, sin
Re: (Score:3, Funny)
Also, in case you mods haven't noticed, making me invisible doesn't work. I just repost the exact same thing tomorrow. I refuse to be censored.
I think Slashdot's system has the exact same flaw as Web Of Trust (WOT) which blocks foxnews.com on my browser. The system is abused by people pushing an agenda, with the intent of censoring what they don't like. It really needs to be abolished since it serves to SUPPRESS ideas rather than encouraging the sharing of thoughts.
Re: (Score:3, Informative)
Note also that ideas don't tend to be repeatedly suppressed unless they are truly out-there radical. I sometimes see posts promoting things th
Re: (Score:2)
>>>it actually tends to be one of the better systems I've seen on the net
As compared to what? I think it's junk. It has exactly the same flaws as described in the article, and is basically just "+1 I Agree" or "-1 I Disagree". The same purpose could be achieved via posting.
Of course, I grew up in the age of Fidonet and Usenet, where pretty much anything was allowed. I prefer that non-censorship, even if it means you sometimes encounter a conspiracy nut (who can easily be ignored).
Re: (Score:2)
Basically, yeah.
It falls prey to human nature.
Re:Climategate for example (Score:5, Informative)
Broken record time, but yes. Such subversion of the peer review process did show up. The culprits weren't the ones you'd expect. [csicop.org]
In general however, I think that this study is rather pessimistic. And anyway, it hasn't been peer reviewed, so who knows... ;-)
(yes, I did read TFA, but not the paper)
Re: (Score:2)
The short version of your linked article, for those who don't want to read:
1. Not peer-reviewed => generally not credible science
2. Peer-reviewed => not necessarily credible science
Of course, any scientist already knows this. It's the mainstream press that loves to take crackpot claims at face value, because it's sensational.
Re: (Score:3, Insightful)
Small correction if I may: Peer reviewed => not necessarily credible science, but you never ever base conclusions on only one paper.
In most cases, a good rebuttal or three will counter the effects of any completely wacko paper that slipped through. In the case I cited, not only were there rebuttals, many of the other editors involved resigned from the journal because they felt that it was wrong that that paper got published. I think the (larger) system more or less corrected itself there, at least until the next time.
Re: (Score:2)
Good point, though for anything serious, I'd think that you need a way of verifying a given reviewer's credentials. Just imagine if a young-Earth big-C Creationist kept trolling on a paper discussing (say) the selective advantage of different colouring in a desert lizard...
Re: (Score:3, Informative)
And it would also help the readers understand the article; a good referee report is quite illuminating. However, this has already been tried out by Nature in 2006:
http://www.nature.com/nature/peerreview/ [nature.com]
and didn't work so well. Apparently, scientists are somewhat reluctant to openly criticise each other's work. But there's PLoS ONE that is alive and well, giving us some hope.
Michael Nielsen has a fine essay about this in his blog:
http://michaelnielsen.org/blog/the-future-of-science-2/ [michaelnielsen.org]
Re: (Score:2)
Do your own peer review... toss a coin and decide if it passes muster.
Re: (Score:2)
Maybe this is because your particular field is new to peer-review, and hasn't developed a healthy culture around it yet? Not saying that this is the case, but it does usually take some time for a system to become established properly, and for the bad apples to be culled...