Does Journal Peer Review Miss Best and Brightest?

sciencehabit writes: A study published today indicates that the scientific peer review system does a reasonable job of predicting the eventual interest in most papers, but it may fail when it comes to identifying really game-changing research. Papers that were accepted outright by one of the three elite journals tended to garner more citations than papers that were rejected and then published elsewhere (abstract). And papers that editors rejected without sending them out for review went on to receive fewer citations than papers that an editor approved for peer review. But there is a serious chink in the armor: All 14 of the most highly cited papers in the study were rejected by the three elite journals, and 12 of those were bounced before they could reach peer review. The finding suggests that unconventional research that falls outside the established lines of thought may be more prone to rejection from top journals.
  • Go science!
  • by The New Guy 2.0 ( 3497907 ) on Monday December 22, 2014 @08:32PM (#48656937)

    For those of you just joining us, peers given mod points hand them out to review comments posted here. CmdrTaco's site saw a lot of controversies with this system in its early days, and M2 (metamoderation) was invented to review the mod-point decisions. Lots of discussion has been sorted by this system, and the crap found on other servers has been eliminated.

    • by omfgnosis ( 963606 ) on Monday December 22, 2014 @11:44PM (#48657815)

      And some of the most interesting, insightful, informative and funny comments remain forever underrated, and mostly unseen. This is caused in part by an understandable but evidently imperfect bias in Slashdot's design, where incumbent posts (posted earliest, posted by trusted users) are given greater visibility.

      Like peer review in journals, it is possible that mostly-positive solutions can have negative consequences as well.

      • Considering the papers were eventually published anyway, it's hard to see how there's a huge problem here.
        "Impactful papers don't always get into the biggest journals" isn't the same as "peer review has a problem."
        • Survivorship bias! (Score:4, Interesting)

          by mha ( 1305 ) on Tuesday December 23, 2014 @04:32AM (#48658673) Homepage

          > Considering the papers were eventually published anyway

          That seems to be a clear-cut case of biased sampling. What you heard about is all there is?

          http://en.wikipedia.org/wiki/S... [wikipedia.org]

          • Yes. If only someone did a study to consider how many good papers didn't get published. This study didn't do it.

            My hypothesis is the answer is close to zero. There are too many journals that will accept close to anything.
        • by Anonymous Coward

          It is important to get papers through to good journals for several reasons. Many, or dare I say most, senior scientists (and those they train) still use antiquated systems for getting their information: Rather than using an RSS reader, say, they mostly rely on hearing about new stuff through conferences and networking, or by reading the few (top) journals of their field. Eventually the information may come out, but by that time someone else might already have independently come up with the same idea and managed…

          • "Rather than using an RSS reader,"

            You are a fucking moron. Seriously, your comment deserves nothing more than derision if you think for a moment researchers need an RSS reader for research topics.

            Wow. Just, wow.
            • by Anonymous Coward

              What an insightful post with a whole lot to give in terms of alternative suggestions. In the modern age you're supposed to be able to follow roughly 100 different journals, each averaging a dozen or so publications a week. There is absolutely no way you can manage the information load without tools of the information age. Surely you are not actually suggesting that just going to conferences is enough, or that you would even know of a fraction of what is going on without the help of a computer.

              Maybe you were…

          • Reading through a lot of papers quickly? That's what abstracts are for. Problem solved before you were born.
        • And how many great ideas did not get to see further publication? This is selection bias, as well as absence of evidence bias, among other things.
          • And how many great ideas did not get to see further publication?

            Yes, please answer that question. How many?
            My hypothesis is the answer is close to zero. There are too many journals that will accept close to anything.

      • If you don't get your hilarious comment in by the first 75 or so, forget it. No mod points for you. First 20-30 better yet.

      • There used to be better sort options here, like "5s first" combined with "Most Recent first" so you could see what's being popular and respond to that.

        • That would basically exacerbate the bias toward incumbent posts.

          • You start as 0 here. Registering a username with an e-mail address promotes you to a 1. Getting enough positive mod points, minus negative ones (that survive M2), gets you a 2.

            Yep, there is a bias to having been here before, but it can be overcome.
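
            (A toy sketch of the starting-score logic described above; the threshold value is a made-up illustration, not Slashdot's actual rule:)

              def starting_score(registered: bool, net_mod: int, karma_bonus_at: int = 25) -> int:
                  """Toy model of the comment's description, not Slashdot's real code:
                  anonymous posts start at 0, registered users at 1, and users whose
                  net moderation (upmods minus downmods surviving M2) clears a
                  threshold start at 2."""
                  if not registered:
                      return 0
                  return 2 if net_mod >= karma_bonus_at else 1

              # e.g. starting_score(True, 30) -> 2; starting_score(False, 99) -> 0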

            • Okay. Pretty sure I made my point. What's yours?

              • Incumbent posts deserve a bias in favor of them... it's how Slashdot works.

                • Do they, necessarily? I've suggested that it is understandable but evidently imperfect—the evidence being the consequence that good non-incumbent content is often unseen. In fact, Slashdot's entire moderation approach reinforces that consequence.

                  I'm not saying Slashdot should abandon its approach, I'm not even saying that it doesn't work somewhat well. But it has a cost and that cost is much higher for a large community with a large volume of user-generated content. And I think that is worth discussing.

      • Game shows call this a "champion's advantage": trusting the player who has been there before more than the challengers, with perks such as the ability to move first, or even a buzzer system that lets them ring in later yet still be the first recognized to answer.

        Here on Slashdot, it's "You already gave us enough good stuff, you start at 2" or "You paid, so here's the story before it's open for comments so you have time to prepare yours."

    • by s.petry ( 762400 )

      I agree with you, but will point out that TFA is discussing something else. Science is not about having a majority understand, or be "interested". This means that real science tends to be ignored. Sometimes real science goes against the grain, and that science is never peer reviewed either. Meaning that the same status quo is maintained even if it's not scientific.

      Slashdot's biggest topics are not "Science" but politics. This is an area which is extremely complex, and there may not be a definitive answer…

  • by theshowmecanuck ( 703852 ) on Monday December 22, 2014 @08:37PM (#48656975) Journal
    We cannot solve our problems with the same thinking we used when we created them.
    -- Albert Einstein

    Full disclosure: I found this at this web site [brainyquote.com].
    • Yep, science that's kept to one person or a small group doesn't accomplish much. That's why the innovators must meet somewhere or somehow.

    • Unfortunately, your source contains well-known misquotes and totally faked quotes from Albert Einstein. http://skepticaesoterica.com/d... [skepticaesoterica.com] and http://en.wikiquote.org/wiki/A... [wikiquote.org]
  • by toQDuj ( 806112 ) on Monday December 22, 2014 @08:41PM (#48656991) Homepage Journal

    While I don't disagree with the conclusions, this summary equates paper "quality" with number of citations. High numbers of citations do not mean high quality, and citation rates are very field-dependent.
    Quality can only be assessed by people reading the paper.

    • by drooling-dog ( 189103 ) on Monday December 22, 2014 @11:56PM (#48657851)

      Also, a lot of highly-cited papers are methodological in nature, and the "Big 3" tend not to publish many of those.

    • Re: (Score:2, Insightful)

      by Anonymous Coward

      >While I don't disagree with the conclusions, this summary equates paper "quality" with number of citations.

      No, they don't "equate". They provide a metric, and this metric seems reasonable. Black or white is not applicable here. We are talking about a high probability of high quality. It does not work every time: the metric will sometimes miss a good paper, but highly cited papers are important papers.

      >High numbers of citations do not mean high quality, and citation rates are very field-dependent.
      >Quality can only be assessed by people reading the paper.

      • Citations are a terrible way of measuring paper quality. One of the most recent citations of a paper of mine was from some guys I know at MIT, who basically said 'and this is exactly the wrong way of doing it'. A lot of the things we cite with the biggest citation counts are benchmark suites. There's a reason that the REF[1] explicitly didn't include bibliometrics when evaluating impact (at least in computer science, not sure about other fields).

        [1] The 'Research Excellence Framework', which assesses…

    • You're on to something here. This is actually one of several things that seem to be wrong in the system. The metrics seem to have been devised by personnel demons ('scuse me, they're HR now :).
      1. Science is conservative. The 'higher' you get, the more conservative you have to be.
      2. Scientists in many fields do not read enough papers. They don't have the time. They grab abstracts and conclusions and read a section or two. They also might read a paper to contradict it destructively. Look at the evolution/ID…
  • by Anonymous Coward on Monday December 22, 2014 @08:48PM (#48657031)

    People tend to cite papers from higher ranked journals more. In addition, said journals are higher ranked by search engines so the virtuous cycle continues. Thus this result is bogus.

    The right way to do this study is to do a controlled study. Fortunately, this has been done in the recent NIPS conference (note that in CS conferences are more important than journals). See http://mrtz.org/blog/the-nips-experiment/ for details. Essentially, the noise is *huge*.

    An excerpt:
    "Relative to what people expected, 57% is actually closer to a purely random committee, which would only disagree on 77.5% of the accepted papers on average:"

    • People tend to cite papers from higher ranked journals more. In addition, said journals are higher ranked by search engines so the virtuous cycle continues. Thus this result is bogus.

      It's not totally bogus: have you seen the drek that gets published in the *really* low impact journals?

      The right way to do this study is to do a controlled study. Fortunately, this has been done in the recent NIPS conference (note that in CS conferences are more important than journals). See http://mrtz.org/blog/the-nips-... [mrtz.org] for details.

  • by Anonymous Coward on Monday December 22, 2014 @08:52PM (#48657047)

    University level:
    I just finished my PhD at Imperial College, a world-leading institute, and all they care about is incremental research that industry will fund. New ideas just get thrown away until another university does the groundwork and the IC jumps in with bigger wallets and then takes it on/steals it.

    UK Government funding:
    If you have a new or interesting approach forget about getting grant funding, you only get money in the UK if the work has already been proven to be successful. Quite literally, your grant can be rejected in funding peer review because you don't already have, with a high degree of certainty, the answer the research would give.

    The original ideology of peer review is great; what is being practiced today (at least in the UK) is broken.

    • Re: (Score:2, Insightful)

      by Anonymous Coward

      If you have a new or interesting approach forget about getting grant funding, you only get money in the UK if the work has already been proven to be successful.

      There's a lot of scientific work that needs doing that requires high levels of scientific training, creativity and motivation but that isn't groundbreaking science. This scientific work can be anything from preparing scientific animations for educational use to working as a genome sequencing technician.

      Maybe I'm just too cynical but, from what I've seen, it's essentially impossible to get grant funding for truly innovative and groundbreaking science. But there's a lot of outside-the-box innovative research…

    • by Mr.CRC ( 2330444 )

      It's broken in the USA too. Just look at the fact that they told us to eat low fat and high carbs for decades, until we're all obese and dying of diabetes.

      So what funding incremental research seems to do is create a high risk that, if initial research results are flawed in subtle ways, the error might compound for a very long time before eventually getting corrected. The probability of this is of course amplified if entrenched interests develop as a consequence of the research, such as when governments…

      • by Anonymous Coward

        Of course, the reason why you KNOW science is broken is because the science doesn't show AGW being wrong...

    • by gweihir ( 88907 )

      Matches my experience. The bean-counters manage to squash all original research, unless you can somehow finesse a bit into grants that are for nothing more than industrial development work. No surprise many fields, like CS for example, have completely stalled on the research front.

    • If you have a new or interesting approach forget about getting grant funding, you only get money in the UK if the work has already been proven to be successful

      While that's more or less true, it's worth noting that EPSRC doesn't require any accountability on the spent funds; they just use previous research output when judging the next application. That means that if you want to do something groundbreaking then you can apply for a grant to do something a bit incremental (ideally something that you've mostly done but not published), then use the money to do the groundbreaking thing. Then, when you apply for the next grant, the groundbreaking things that you want to…

      • While that's more or less true, it's worth noting that EPSRC

        Yeah but the EPSRC are literally clowns. I went to that shithole known as Swindon (seriously, the level of shitholiness is almost impossible to believe until you go) and everyone in the EPSRC building had red noses, floppy shoes and funny wigs. That annoying oot-toot-doodle-ooodle music was playing continuously over the PA system. And I saw some people arrive by car. There must have been about 12 grant administrators crammed into that teeny little car.

    • by excelsior_gr ( 969383 ) on Tuesday December 23, 2014 @07:45AM (#48659209)

      This happens in Germany as well. When we applied for a government grant we had to present a detailed project plan and describe the "deliverables" in ridiculous detail. The people in the review committee weren't idiots, they knew that the plan was bollocks, but you had to include it anyway. Back then I attributed the whole thing to the German obsession with planning.

    • You are complaining about the funding models, not peer reviewed publishing.
      I agree on the funding models having problems. The issue there is that most funding is influenced by politics, which in turn is influenced by established industry powers.

  • by jklovanc ( 1603149 )

    This is yet another statistical study that itself has flaws.
    1. What was the selection process for the studies? The phrase "All 14 of the most highly cited papers in the study" implies that there were papers not in the study. Possible selection bias?
    2. They do not go into why the 14 papers were cited so much, or whether any further research or refinement of the papers was done before they were accepted by other journals. Surface analysis of numbers can be manipulated to say anything.
    3. They also say that it might be better to not have a review and just publish everything. This just means that everyone who reads the papers has to do the review. That is not practical. There are many papers that should not be published due to shoddy practices or malfeasance. Instead of throwing out the whole system, how about looking at why the 14 papers were rejected and modifying the system accordingly?

    • by Anonymous Coward

      1. What was the selection process for the studies? The phrase "All 14 of the most highly cited papers in the study" implies that there were papers not in the study. Possible selection bias?

      Are you joking? The study required getting extensive background on papers (i.e., not just stuff that could be gleaned automagically from databases). Of course the study doesn't include every paper ever written. Do you also object to a clinical trial of a new cancer treatment by saying "But wait! Some cancer patients weren't in the trial!"?

      2. They do not go into why the 14 papers were cited so much, or whether any further research or refinement of the papers was done before they were accepted by other journals. Surface analysis of numbers can be manipulated to say anything.

      I would like to know if the authors explored this, but as you say, it is paywalled.

      3. They also say that it might be better to not have a review and just publish everything. This just means that everyone who reads the papers has to do the review. That is not practical. There are many papers that should not be published due to shoddy practices or malfeasance. Instead of throwing out the whole system, how about looking at why the 14 papers were rejected and modifying the system accordingly?

      I see no reason to believe that they say that. The end of their abstract states:

      "Despite t

      • If you're going to complain about it being paywalled (and it is a valid complaint!)

        How hard is it to reject any story that links to a paywalled site? It's literally one click away.

        Hey, maybe we can suggest it to the editors by submitting a story titled "The One Click Trick to Better Stories"

      1. What was the selection process for the studies? The phrase "All 14 of the most highly cited papers in the study" implies that there were papers not in the study. Possible selection bias?

      Of course there were papers not in the study; they didn't look at every single paper ever submitted to a peer-reviewed journal in all of human history. The paywall means I can't see if they explained how the 1,008 papers got selected - well, not can't, won't, since my interest isn't so high as to fork over cash for it.

      2. They do not go into why the 14 papers were cited so much, or whether any further research or refinement of the papers was done before they were accepted by other journals. Surface analysis of numbers can be manipulated to say anything.

      • my interest isn't so high as to fork over cash for it.

        If Slashdot users can't discuss the featured article meaningfully without having to buy something first, then the story should be considered a Slashvertisement, the product being copies of the featured article.

        • by Anonymous Coward

          This might be more compelling if anyone actually bothered reading the article in cases where it is available.

    • So build a wiki / forum -ish system where "accepted" papers are published. Anyone can submit a paper, or criticism, or a review, or a meta-review... Have a collection of editors & "reviewers" try to keep the whole thing honest.

      In other words, a wikipedia that *only* accepts original research.
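
      (A minimal sketch of what the data model for such a system might look like; all names and the endorsement-quorum rule are hypothetical:)

        from dataclasses import dataclass, field
        from typing import Optional

        @dataclass
        class Submission:
            """A paper, criticism, review, or meta-review; all are first-class posts."""
            id: int
            author: str
            kind: str                         # "paper" | "criticism" | "review" | "meta-review"
            body: str
            replies_to: Optional[int] = None  # id of the submission being reviewed, if any
            endorsements: set[str] = field(default_factory=set)  # editor/reviewer sign-offs

        def is_accepted(sub: Submission, quorum: int = 3) -> bool:
            # "Accepted" here just means enough editors/reviewers vouched for it.
            return sub.kind == "paper" and len(sub.endorsements) >= quorum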

    • by gl4ss ( 559668 )

      3. is kind of moot.

      why? anyone can publish anything they want on the internet. if it's interesting and the results actually work, it WILL make an impact.

      I mean, take a look at youtube and the amounts of people trying to replicate shit that has been proven not to work! yet they try to make the free energy shit work again and again.

  • by fustakrakich ( 1673220 ) on Monday December 22, 2014 @08:55PM (#48657063) Journal

    I would say the finding confirms that unconventional research that falls outside the established lines of thought may be more prone to rejection from top journals.

    It's typical human politics and ideology at work. What would you expect from a large group of people, all with vested interests?

    • by Anonymous Coward

      I would say the finding confirms that unscientific bullshit may be more prone to rejection from top journals.

      It's the scientific method at work. What would you expect from a large group of people using rational thought?

      Fixed that for you.

      • Eh, you could be right, but past history shows that politics and economics disrupt all human endeavors, including those in science. The sharks are everywhere, and they smell money.

      • You are correct, except that you're wrong. Of what value is a system that rejects scientific BS, if 9 times out of 10 it also rejects true, important innovation?

        I don't think this is anything new, though. Einstein was known to occasionally send a radical new paper to someone with a note from himself attached saying, "idiot, look here."

    • "Unconventional methodology" is not.

      Papers that don't use sufficiently rigorous methods should be rejected, regardless of their conclusions - even if those conclusions eventually turn out to be right. It's the only way to have any confidence about the research. If the authors are so sure of their results, they should do them more carefully, and submit again.

      Far too often, rejections are taken as evidence of cronyism or groupthink (usually by those whose beliefs are contradicted by established science), when…

      • Did I mention methodology? My criticism is of the sloppy stuff that gets through because of the clout behind it. Everything needs to be reevaluated, by a more diverse group of people with less interest and more clear headed logic. It wouldn't bother me if the whole lot was rejected and everybody started over.

      • by CBravo ( 35450 )
        I disagree.

        Your opinion here, because you did not provide proof, should be taken with a few grains of salt. People do all sorts of things that are perfectly valid without proof. Science is not only the stuff that can be proven without a doubt (philosophy as an extreme). How would science ever have evolved without mediocre proofs that were later confirmed by strong ones?

        Now don't get me wrong. I like proof because it often gives insight and might reject other plausible explanations, etc. And there's…
        • Philosophy != Science, but both have their place.

          There are plenty of avenues for creativity, discussion, unproved hypotheses etc, but peer-reviewed magazines are not one of them. That way, everyone can distinguish solid, confirmed results that can be relied upon, from unproved assertions or tentative conclusions that might be right - or might not.

          Scientists are free to follow hunches or interesting leads; nobody is stopping that. But there has to be a clear indicator of the reliability of information…

          • by CBravo ( 35450 )
            Scientists are not free to follow hunches if they are, in effect, not paid for. Hunches are a hobby in the NL. The effect is that mind-numbed people do science here. I was good at hunches.

            Your argument about reliability has a place. One should know how reliable it is. But your conclusion that non-proven stuff has no place in the scientific process is invalid imo, because it assumes the scientific process is limited to journals.

            Suppose our science is that 'we want to find a place to shop'. Some scientist went out…
            • by CBravo ( 35450 )
              s/teach to/learn to/
            • your conclusion that non-proven stuff has no place in the scientific process is invalid imo

              Actually, I don't think that at all. Ideas, hypotheses, speculation, tentative results etc are all a crucial part of the process, and need to be shared and discussed between scientists. All I'm saying is that these should not be confused with solid, proven, reliable results, and that peer-reviewed journals are the best way we have of separating the two. Unproven hypotheses should be discussed in separate channels - conferences, forums, water coolers etc, or even journals too so long as they're clearly marked.

      • Nice strawman. He doesn't mention or even allude to methodology.
  • by gweihir ( 88907 ) on Monday December 22, 2014 @09:25PM (#48657203)

    I frequently had papers rejected as "not new" without citation and then accepted elsewhere where they told me on request that they checked carefully and found the content was indeed new and interesting. My take is that this may also quite often be "reviewers" that are envious or trying to compete by putting others down, not only ones that are incompetent. Fortunately, I had nothing stolen by reviewers (after they rejected it) but I know people that had this happen to them.

    As the peer-review system is set up at this time, the best and the brightest need longer to get their things published, need much longer to get PhDs and have significantly lower chances at an academic career as a result. Instead those that publish a lot of shallow and/or meaningless incremental research get all the professorships, while barely qualifying as scientists. The most destructive result of this is that many fields have little or no real advancement going on as new ideas are actively squashed. After all, new ideas would show how abysmally bad at science the current position-holders are.

    But this is not new. Cranks like Newton had research by others squashed or suppressed, because it was better than his stuff. This is also the reason most scientific discoveries take about one research-career-length to go into actual use. The thing is that the perpetrators of the status-quo have to retire before their misconceptions and mediocre approaches can be replaced. Exceptionally stupid, but all too human.

    Personally, I am now in an interesting industrial job, and I still do the occasional paper as a hobby. But I would strongly advise anybody bright against trying for an academic career. Wanting to do research right reliably ensures that you will not be able to do an academic career at all. The core ingredients for a scientific career are mediocrity, absence of brilliance, hard work and a lot of political maneuvering. Oh, and you must not care about doing good science!

    • [Rejection due to reviewer envy] is also the reason most scientific discoveries take about one research-career-length to go into actual use.

      Are you sure it isn't because a patent takes "about one research-career-length" to expire?

    • I frequently had papers rejected as "not new" without citation and then accepted elsewhere where they told me on request that they checked carefully and found the content was indeed new and interesting

      When I've had this kind of rejection, it typically means that the paper is not well presented. Part of the reason that papers that are rejected one or more times before publication tend to be more widely cited is that the rejection and editing phase forces you to make your arguments in a clearer and more structured manner.

      • When I've had this kind of rejection, it typically means that the paper is not well presented.

        Not my experience. This stupid claim of "not new" has got so pernicious that some of the major computer vision conferences (e.g. ECCV, ICCV, CVPR, BMVC) will, I believe, now summarily discard a *review* that claims "not new" without also providing a citation as to where it's been done before.

        • by gweihir ( 88907 )

          Same happens in some networking fields. It is truly pathetic. You get the impression that there is a large core of really bad "researchers" who try to keep everybody with half a clue and intact creativity out, in order to prevent people from seeing how incompetent they are.

      • by gweihir ( 88907 )

        Not my experience. And with up to 6 reviewers, you would think that at least one would comment on the presentation, if it was the problem. In two instances I even got a best-paper award with exactly the same paper when it was finally accepted somewhere else.

  • Papers that were accepted outright by one of the three elite journals tended to garner more citations than papers that were rejected and then published elsewhere

    Maybe they got cause and effect reversed even for those papers...

  • by Livius ( 318358 ) on Monday December 22, 2014 @09:54PM (#48657341)

    "unconventional... may be more prone to rejection"

    Isn't that the definition of 'unconventional'?

  • For a minute, I thought the headline said that someone who is a journal peer reviews someone nicknamed "Miss Best and Brightest".

  • If I understand correctly, the most highly cited papers were not published in the elite peer-reviewed journals. Where were they published, then? How did they become famous?
  • It is just journals doing their work when they bounce unconventional research, asking for further proof or clarifications. There is a lot of unconventional research out there, and while some may be the beginning of a breakthrough, most of it is not. Just look at atomic fusion.

    Of course it makes it harder for really new and exciting things to get into the journals, but it also keeps a lot of the crap out.

  • by Anonymous Coward

    I went to grad school with a guy who was researching how to stream video over TCP. This was... 15 years ago maybe? Everywhere he went, his papers got rejected because "you don't stream over TCP. You are supposed to use UDP. Everyone knows that." Everyone knows that. No criticism of his work besides "everyone knows that." That's not the scientific method, yet it was nearly impossible for him to publish.

    Fast-forward to 5 years ago and have a look at YouTube. Streaming video over TCP. The new hotness.
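
    (What YouTube-style "streaming over TCP" boils down to is progressive download: ship the file in chunks over a reliable stream and let the player buffer ahead. A bare-bones sketch; the port and chunk size are arbitrary, and this is nothing like YouTube's actual stack:)

        import socket

        CHUNK = 64 * 1024  # 64 KiB per write; the client buffers ahead of playback

        def serve_video(path: str, port: int = 9000) -> None:
            """Push a video file over a plain TCP connection, chunk by chunk.
            TCP's retransmission handles loss; the player's buffer hides the jitter."""
            with socket.create_server(("", port)) as srv:
                conn, _addr = srv.accept()
                with conn, open(path, "rb") as f:
                    while chunk := f.read(CHUNK):
                        conn.sendall(chunk)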

  • by bouldin ( 828821 ) on Monday December 22, 2014 @10:20PM (#48657477)
    Like many things in life, it's a popularity contest first, and a meritocracy second (at best).
  • Paywalls? (Score:4, Interesting)

    by Irate Engineer ( 2814313 ) on Monday December 22, 2014 @10:43PM (#48657597)

    Maybe those 14 articles were cited more because they weren't buried under the paywalls imposed by the three "elite" journals? Scientists could actually get their eyes on these articles without paying a steep subscription or per unit cost?

    Maybe the elite journals actually hinder the exchange of information and ideas that science needs to move forward? Nah! Can't be!

    • Most scientists work for scientific institutions (universities, research companies) where their employer has a license for all to read those journals. Just like the old university libraries where you could find all these journals in print. To these people there is not much of a hindrance and the digital availability may make exchange of ideas actually easier than it was before.

      They do however keep curious bystanders out - people who have an interest in science but are not working in the field. These are the…

      • by Uecker ( 1842596 )

        Somehow these journals need to be paid for their work. Peer review is not free, publishing is not free. Just putting it all out on the Internet for free is not a viable business model, as is proven by the many pay-to-publish crap journals discussed here many times recently.

        While I agree with most other things you said, I think you got this completely wrong. Peer review is done by volunteers and publishing is relatively cheap (and the traditional scientific publishers make a lot of profit). You can easily operate a journal with very minor resources. And this is exactly the reason there are many pay-to-publish journals which are crap. It is just very cheap to set them up. But not all of these journals are crap (PLOS ONE is the most prominent example of a highly-ranked journal)…

  • The process of getting rejected from a journal may lead to improvements in a paper, or lead the paper to be submitted to a journal that's more tier-appropriate than one's first choice. Both can be very healthy.

    • Sometimes yes, more often no.

      I've certainly had papers improved by reviewers, but that has much more often happened after a "major revisions" response, rather than an outright reject. Much as I hated to admit it at the time, the work improved the paper.

      I've also had perfectly bad papers rejected for good reasons.

      I have had a *lot* of papers rejected for insane reasons. I had one person who simply refused to believe that M-estimators were robust statistics and insisted that they weren't robust because they…

  • This is what you would expect from sub-genius judging genius.
  • The finding suggests that unconventional research that falls outside the established lines of thought may be more prone to rejection from top journals

    That is the way it should be. It is not a bug, it is a feature. Extraordinary claims demand extraordinary evidence. The signal to noise ratio is very poor when it comes to unconventional research and findings. For every deserving paper made to jump through the hoops, there are 100 papers sent to the dust heap of history very deservedly.

    Think about it, Einstein was a patent office clerk. Srinivasa Ramanujan [wikipedia.org] was a clerk at the Madras Port Trust. Eddington destroyed Chandrasekhar at the first international…

    • by gtall ( 79522 )

      I think your conclusion is essentially correct. The problem is that spotting good science in realtime is hard work. The scientists reviewing the paper can only put it in the perspective of their experience. If it is too far outside and they are too far inside, then the paper gets rejected. It frequently requires a fair amount of time to pass before the results of a paper can be properly analyzed, and that's if the paper hasn't been so buried that no one recalls it ever being written.

      To make matters worse, there…

  • That people take this article as gospel and don't question its premise. When you are comparing the peer review process to the NBA, you have a problem.
  • And why would Journal Peer... review her? ;)

  • What they need is for reviewers that screw up and reject good stuff or accept bad stuff, to have their statistical weight reduced.
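
    (A toy sketch of what that could look like; the update rule, bounds and names are invented for illustration, not any journal's actual system:)

      def updated_weight(weight: float, vindicated: bool, lr: float = 0.1) -> float:
          """Nudge a reviewer's influence toward 1 when hindsight vindicates their
          call, toward 0 when it doesn't (e.g. a paper they rejected becomes one
          of the field's most-cited). Exponential moving average, clamped so no
          reviewer is ever silenced entirely."""
          target = 1.0 if vindicated else 0.0
          w = weight + lr * (target - weight)
          return min(max(w, 0.05), 1.0)

      def panel_decision(votes: list[tuple[float, int]]) -> bool:
          # votes: (reviewer_weight, +1 accept / -1 reject); weighted mean decides.
          total = sum(w for w, _ in votes)
          return total > 0 and sum(w * v for w, v in votes) / total > 0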

  • Research published in lesser-read journals gets fewer citations than research in elite ones. If you are writing your own paper, you might want to drop a few big names rather than some obscure ones to gain credibility. The initial reason for the elite journal editors' rejection may have little to do with an in-depth analysis of the research. So it goes on to some lesser publication.

    From TFA:

    It's a sign that these editors making snap decisions really quickly still have a nose for what quality is and isn't

    That could be bad logic. The initial rejection by the editor of an elite journal may have little bearing on the quality of the research…

  • by Anonymous Coward

    Yes peer review is indeed flawed.

    So what?

    The article even shows, by the number of cites of articles rejected by top-tier journals, that it is self-correcting.

  • Well, yes.

    When we build distributed systems, the need to set up a distributed consensus algorithm appears in front of us time and again. Leslie Lamport (of LaTeX & Time-Clocks fame) came up with a novel algorithm in the early '90s to solve this in a very competitive way (Paxos is its name). Sadly, the algorithm remained shunned for a number of years, due to rejection via the very same channel in which it was eventually published many years later. If you realise the immediate practical impact of…
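
    (For the curious, the heart of single-decree Paxos is small enough to sketch; the message names and shapes here are my own simplification of the published algorithm:)

        class Acceptor:
            """Single-decree Paxos acceptor (simplified, in-memory sketch)."""

            def __init__(self) -> None:
                self.promised = -1        # highest proposal number promised
                self.accepted_n = -1      # number of the proposal accepted so far
                self.accepted_v = None    # value of the proposal accepted so far

            def prepare(self, n: int):
                """Phase 1b: promise to ignore proposals numbered below n, and
                report any value already accepted."""
                if n > self.promised:
                    self.promised = n
                    return ("promise", self.accepted_n, self.accepted_v)
                return ("nack",)

            def accept(self, n: int, v):
                """Phase 2b: accept unless a higher-numbered promise was made."""
                if n >= self.promised:
                    self.promised = n
                    self.accepted_n, self.accepted_v = n, v
                    return ("accepted", n, v)
                return ("nack",)

        # The proposer side: after collecting promises from a majority, it must
        # propose the value of the highest-numbered already-accepted proposal it
        # saw (or its own value if none). That rule is what keeps a chosen value
        # stable even when proposers compete.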

  • ...is published in a journal (in)famous for its lack of a peer review process. Makes complete sense.
