
Peer Review Ring Broken - 60 Articles Retracted

blackbeak (1227080) writes: The Washington Post reports that the Journal of Vibration and Control's review system was hijacked by a ring of reviewers, and 60 articles have been retracted as a result. "After a 14-month investigation, JVC determined the ring involved 'aliases' and fake e-mail addresses of reviewers — up to 130 of them — in an apparently successful effort to get friendly reviews of submissions and as many articles published as possible by Chen and his friends. 'On at least one occasion, the author Peter Chen reviewed his own paper under one of the aliases he created,' according to the SAGE announcement."
  • by KibibyteBrain ( 1455987 ) on Thursday July 10, 2014 @11:26PM (#47429253)
    Peter holds a very high standard for himself, I'm sure.
    • by Cryacin ( 657549 ) on Thursday July 10, 2014 @11:39PM (#47429291)
      It's just the new strategy employed to increase the speed of scientific research and development. It's called the self-peer-review.

      Amazingly, articles can get released on the same day as submission with this method.
      • Besides, who's more of a peer to the author than the author himself? I tell you, it can't get more peery than this.

    • Re:The Good News? (Score:5, Insightful)

      by m00sh ( 2538182 ) on Friday July 11, 2014 @08:26AM (#47430631)

      Peter holds a very high standard for himself, I'm sure.

      The standard practice is to form an unspoken agreement between several reviewers that they will all favorably review each other's papers.

      Peter couldn't find his circle and created a self-circle.

      • Peter holds a very high standard for himself, I'm sure.

        The standard practice is to form an unspoken agreement between several reviewers that they will all favorably review each other's papers.

        Peter couldn't find his circle and created a self-circle.

        Otherwise known as a circle jerk.

  • by Crashmarik ( 635988 ) on Thursday July 10, 2014 @11:29PM (#47429261)

    We live in a day and age where you can make a pretty decent living as a scientist without actually advancing science or doing very much technologically related labor, so it's only natural that people would game the system. While science should be immune to this sort of thing, just how many unimportant, not particularly interesting results do people actually try to reproduce?

    • by Anonymous Coward on Thursday July 10, 2014 @11:40PM (#47429297)

      You have it backwards. The fault is not that not every scientist has a breakthrough.

      The fault is that in academia it's pretty much "publish or die". The incentive to publish over anything else pushes the unscrupulous to do things like this.

      The system itself creates this sort of situation.

      • Re: (Score:2, Insightful)

        by Karmashock ( 2415832 )

        Wrong. The issue is that publishing is considered sufficient.

        It should be publish or die. How do you know they're doing anything if they don't publish? They could be watching TV all day for all you know otherwise.

        But as this case makes clear, simply publishing and getting it through peer review is not good enough. We need to increase what they have to do to avoid this situation.

        For example... maybe one scientist pays another scientist to reproduce his work.

        Maybe you have big collections of graduate students that as part of their process of getting a degree get assigned some random papers submitted by scientists in their field and they have to reproduce the work.

        • It should be publish or die. How do you know they're doing anything if they don't publish?

          Dude, seriously? Look up Hendrik Schön; he published... a LOT.

        • We need to increase what they have to do to avoid this situation.

          Alternatively (or in addition), we could increase the penalties for those caught cheating.

          • Re: (Score:2, Insightful)

            by TapeCutter ( 624760 )

            we could increase the penalties for those caught cheating

            No thanks, keep the lawyers out of it unless a genuine crime has been committed; the last thing we want is politicians regulating peer review. There is no system that is totally incorruptible, and the fact that these frauds were exposed means the system is working in this case. The fact that the scientific and academic communities will ostracize the frauds for the rest of their lives is natural justice; anything more crosses the line between natural justice and revenge.

            • by Karmashock ( 2415832 ) on Friday July 11, 2014 @01:06AM (#47429507)

              If you pay scientists to do science and they are contracted to do it... yet they fraudulently do not do science and continue to cash your checks... that is a crime.

              • I'd argue that a lot of scientists are actually paid to bring in grants, teach, and publish. It's just a lucky coincidence that doing science is helpful to at least two of those goals.
          • Alternatively (or in addition), we could increase the penalties for those caught cheating.

            FYI, cheating like this is already a guaranteed career-ender. People who do things like this aren't rationally weighing the cost of getting caught against the career advancement that comes from publishing; they simply don't expect to get caught.

        • by antifoidulus ( 807088 ) on Friday July 11, 2014 @12:57AM (#47429485) Homepage Journal
          No, "publish or perish" really dis-incentivizes novel research because guess what, often times really novel research fails. All "publish or perish" really does is incentivize either cheating or the lowest risk research imaginable. There are other mechanisms for making sure a researcher is actually doing their work, punishing them for taking risks shouldn't be among them.
          • No, "publish or perish" really dis-incentivizes novel research because guess what, often times really novel research fails. All "publish or perish" really does is incentivize either cheating or the lowest risk research imaginable. There are other mechanisms for making sure a researcher is actually doing their work, punishing them for taking risks shouldn't be among them.

            If novel research is failing peer review, I don't see how not publishing is a good answer to that. A convenient one, no doubt.

            • by Anonymous Coward on Friday July 11, 2014 @02:06AM (#47429623)

              Scientifically useful negative results don't merely fail peer review; they are simply unpublishable in a major journal.

            • by AK Marc ( 707885 )
              If I proved that cats' claws grow proportionally to the zinc in their system (assuming they are otherwise healthy), up to zinc overdoses, it may be valid and presumably interesting, but I doubt that it would be picked up by a "major" paper. But if I proved that human IQ was proportional to the temperature of the blankets one slept under, that would be published many places and gain me much fame, even if the results were faked and the peers paid off.

              The solution would seem to be the university publishing all the unpublished a
            • by alvinrod ( 889928 ) on Friday July 11, 2014 @02:50AM (#47429705)
              It's not a matter of failing peer review, it's a general disinterest in publishing negative results. If you find a cure for cancer it's a big deal, but if you just found one more thing that doesn't work any better than a sugar pill, none of the journals are going to care about publishing it even if it's the most well-run study in the history of the world.

              If someone starts doing some novel research that's going to take five years to possibly produce results and nothing pans out, they aren't going to get anyone to publish the findings.
              • by Rich0 ( 548339 ) on Friday July 11, 2014 @04:42AM (#47429921) Homepage

                And this is part of why all the drug development work ends up happening in private industry.

                A scientist will come up with a molecule that inhibits some enzyme and get some publishable result. At that point they issue the typical "possible cure for cancer" press release and move on to the next thing. Five years and $10M later, a pharma company figures out that it causes heart valve degeneration or that inhibiting the enzyme isn't the magic bullet everybody hoped for. They don't bother publishing it, but none of their scientists get paid by the publication anyway. The company's interest is that if it eventually works out, they make billions.

                So in that sense you have an example of a way in which industrial research is actually less risk-averse than academia, which should be shocking.

                That said, when it comes to the basic research side of things pharma companies do tend to let the academics do the work for them.

              • That's true for large clinical trials, but clinical trials aren't all, or even most, research. For basic research like what this journal seems to publish, no. It's rare that you'd have an experiment that would take months, let alone 5 years, and where you'd only get a yes or a no at the very end.

                For example, a study in that journal is entitled "Predicting blast-induced ground vibration using general regression neural network." The abstract is

                Blasting is still an economical and viable method for rock excavation in mining and civil works projects. Ground vibration generated due to blasting is an undesirable phenomenon which is harmful for the nearby inhabitants and dwellings and should be prevented. In this study, an attempt has been made to predict the blast-induced ground vibration and frequency by incorporating rock properties, blast design and explosive parameters using the general regression neural network (GRNN) technique. To validate this methodology, the predictions obtained were compared with those obtained using the artificial neural network (ANN) model as well as by multivariate regression analysis (MVRA). Among all the methods, GRNN provides excellent predictions with a high degree of correlation.

                Emphasis mine: they're testing if they can pr
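
                (As an aside, for anyone curious what a GRNN actually is: under the hood it is essentially kernel regression, i.e. a Gaussian distance-weighted average of the training targets. A minimal sketch in Python, with made-up toy numbers that have nothing to do with the paper's data:

                import numpy as np

                def grnn_predict(X_train, y_train, x_query, sigma=1.0):
                    # GRNN / Nadaraya-Watson: weight every training target by a Gaussian
                    # kernel of its distance to the query point, then return the
                    # normalized weighted average as the prediction.
                    d2 = np.sum((X_train - x_query) ** 2, axis=1)
                    w = np.exp(-d2 / (2.0 * sigma ** 2))
                    return np.dot(w, y_train) / np.sum(w)

                # Toy example: "predict" peak ground vibration from two blast parameters.
                X = np.array([[10.0, 1.5], [20.0, 2.0], [30.0, 2.5], [40.0, 3.0]])
                y = np.array([12.0, 8.0, 5.0, 3.0])
                print(grnn_predict(X, y, np.array([25.0, 2.2]), sigma=5.0))

                The only "training" is memorizing the data and picking sigma, which fits the point above: this is not the kind of study that takes five years to produce a yes or a no.)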

            • Re: (Score:3, Interesting)

              by Anonymous Coward

              I think what the poster you quoted wanted to say is that often, to make major contributions, you have to do something that has never been done before, not just follow up on previous research. Pushing on current trends is not difficult at all, and is basically guaranteed to get you a publication in a decent journal. A lab head can do several dozen of these papers a year if he has a few handfuls of people in his group and decides to focus on this. Now, doing this more than guarantees a comfortable

              • Re: (Score:3, Informative)

                by Anonymous Coward

                I forgot to add this recent article [scientificamerican.com] to my post. It goes to show that the problems I am talking about are not just my personal anecdotes or limited to my field.

          • The problem there, then, is that research papers which analyse a failure aren't accepted often enough, which probably leads to other people redundantly repeating the same fruitless efforts. Failures aren't as flashy, but they're surely still useful.
          • I think I answered this point in this post:

            http://slashdot.org/comments.p... [slashdot.org]

            In summary, my point is not a defense of any specific method of auditing work and ensuring people aren't just screwing around.

            Rather, my point is a defense of auditing in general.

            If you don't like publish or perish then please suggest an alternative that doesn't just let scientists wake up at the crack of 4pm, drink until they pass out, and then do the same tomorrow.

            I'm not saying they would do that or they are doing it... I'm sayin

          • by mwvdlee ( 775178 )

            Which is why there is a growing movement of scientists who promote publishing failed research results.
            Scientists, of all people, should know that failure is when you learn the most.

          • You make good points. See also: http://www.its.caltech.edu/~dg... [caltech.edu]
            "The public and the scientific community have both been shocked in recent years by an increasing number of cases of fraud committed by scientists. There is little doubt that the perpetrators in these cases felt themselves under intense pressure to compete for scarce resources, even by cheating if necessary. As the pressure increases, this kind of dishonesty is almost sure to become more common.
            Other kinds of dishonesty will also

        • by guises ( 2423402 )
          The problem with "publish or perish" isn't the fact that scientists have to eventually share their results, it's the volume of publishing that's expected which gets in the way of actual work. When a scientist has a data set and the first thought is "How many papers can I get out of this?" it's an indication that something is wrong.
          • In the UK, university research departments are assessed based on the Research Excellence Framework (REF, formerly the Research Assessment Exercise [RAE]). Each faculty member is required to submit 4 things demonstrating impact. These are typically top-tier conference or journal papers, but can also be artefacts or examples of successful technology transfer. The exercise happens every four years, so to get the top ranking you need to write one good paper a year. The only incentive for publishing in second
        • It should be publish or die (...)

          You might want to read this:
          http://www.theguardian.com/sci... [theguardian.com]

          • I read it. Do you care what I thought about it or did you just want me to read it?

            • by narcc ( 412956 )

              I presume he expected you to gain some insight from the article. It's a shame he didn't know that your beliefs are unshakable.

        • For example... maybe one scientist pays another scientist to reproduce his work. Maybe you have big collections of graduate students that as part of their process of getting a degree get assigned some random papers submitted by scientists in their field and they have to reproduce the work.

          You don't work in science, do you? Being paid to reproduce someone else's work means you are not producing anything original of your own; it doesn't advance your career. Then, with respect to your second point, being a graduate student means performing original research. If your PhD is about reproducing someone else's work, you won't be able to publish anything of significance.

          The problem is the system globally: journals, which push for high impact sexy stories; promotion committees, which only look at how

        • But as this case makes clear, simply publishing and getting it through peer review is not good enough.

          I thought they weren't good enough anyway; a paper in a peer-reviewed journal doesn't prove anything by itself. There have been plenty that turned out to be wrong.

          After the publication, people are likely to want to build on that work (if it's interesting), and they'll wind up replicating parts of it. If it isn't solid, for whatever reason, it'll get found out then. If it holds up, there will soon

      • I'll just leave this here... :)

        I <3 Tom Lehrer.

        https://www.youtube.com/watch?... [youtube.com]

    • I've never perceived scientists as all that well-paid, given their education and the amount of work that they appear to put in. If somebody is smart enough to become a scientist, there have got to be more lucrative things they could do.

  • by RevWaldo ( 1186281 ) on Thursday July 10, 2014 @11:36PM (#47429279)
    That was one high-class bondage mag, right up there with Bizarre and Exotique.

    I don't think "peer review" means what WaPo thinks it means...

    .
  • There actually is a Journal of Vibration and Control. Must be some thrilling stuff to read.
  • by Anonymous Coward

    Yay!

  • Chen-Yuan Chen (Score:2, Interesting)

    by Anonymous Coward

    There's a lot of weirdness about this story. Firstly, the guy's name is Chen-Yuan Chen, not "Peter". Secondly, he works at a teachers' college. Thirdly, he's supposed to be a researcher in methods for using electronics to help people learn, so why would he suddenly start writing a bunch of papers about mechanical systems? In addition to spamming 60 fraudulent papers in a few years, he also had each of the 60 papers cite all the other papers!

    And the weirdest thing is that a bunch of right-wing crackpots are comi

    • by AK Marc ( 707885 )
      My first job was for a guy named Marshall Marshall, so it's not only the Chinese who re-use names. That, and it's likely that the two Chens aren't even the same word, just phonetically similar.
    • Re: (Score:2, Insightful)

      by Anonymous Coward

      What's also weird is that none of the "official" artifacts used the name "Peter" anywhere (always Chen, C-Y).
      Where did the "Peter" so prominently paraded by the lynch mob come from?
      If anything, any official release should have used the name that's splashed all over the offending articles listed.

      An unrelated problem is that you can't get anything published in the trashiest tabloids without a figurative full cavity search.
      JVC is supposed to be a proper academic journal, but they seem to have nobody with a bullshit radar on the edit

  • Web of Trust (Score:4, Interesting)

    by Dr_Barnowl ( 709838 ) on Friday July 11, 2014 @04:14AM (#47429863)

    People should cryptographically sign peer reviews (and their papers). And journals should only trust signing keys that themselves have been signed by respected experts. The more respected you get, the more signatures your keys and papers get.
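
    A rough sketch of what that could look like, assuming Ed25519 keys via the Python cryptography package (the names and the single-hop trust check are my own simplification, not an existing journal workflow):

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey, Ed25519PublicKey)

    def raw(pub):
        # Serialize a public key to raw bytes so it can be stored and compared.
        return pub.public_bytes(serialization.Encoding.Raw,
                                serialization.PublicFormat.Raw)

    # A respected expert whose key the journal already trusts.
    expert = Ed25519PrivateKey.generate()
    trusted = {raw(expert.public_key())}

    # The expert endorses a reviewer by signing the reviewer's public key.
    reviewer = Ed25519PrivateKey.generate()
    reviewer_pub = raw(reviewer.public_key())
    endorsement = expert.sign(reviewer_pub)

    # The reviewer signs the review text itself.
    review = b"Sound methodology; accept with minor revisions."
    review_sig = reviewer.sign(review)

    def journal_accepts(review, review_sig, reviewer_pub, endorsement, endorser_pub):
        # Accept only if a trusted key endorsed the reviewer's key AND the
        # review really was signed by that reviewer's key.
        if endorser_pub not in trusted:
            return False
        try:
            Ed25519PublicKey.from_public_bytes(endorser_pub).verify(endorsement, reviewer_pub)
            Ed25519PublicKey.from_public_bytes(reviewer_pub).verify(review_sig, review)
            return True
        except InvalidSignature:
            return False

    print(journal_accepts(review, review_sig, reviewer_pub, endorsement, raw(expert.public_key())))

    A real web of trust would follow longer chains of endorsements and weight them by how respected the endorsers are; the single hop above is just the smallest example that shows the idea.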

    • by Nonac ( 132029 )

      That sounds more like a popularity contest than a peer review system.

    • Actually, that is a good idea. BUT, you have to have vetted signers.
  • I will say that I have long thought that the USPO should start offering vetted keys. Just as they handle passports, they would be well placed to handle state IDs and vetted keys. If more governments offered such things, it would make it possible for publications and other groups to require a truly vetted key.
  • There are whole fields within Computer Science, one being "Method Engineering", that basically are one big ring. For your information, "Method Engineering" is about methods for developing software.
  • No, wait - that's a different peer review ring.

  • Seriously, we read many stories here in which big deals are made of such papers, but as soon as I check and see that the work was led by Chinese academicians (even if they are now working in the USA), I discount it. Why? Because over and over, I see fraud in the publications, and here I notice that many of these stories are being pushed by ACs. In a nutshell, these people are putting together fraudulent publications (generally leaving out the negatives that they came across) and then marketing them to make themselves look goo
  • If only the journals could run some kind of check to determine if "peers" are who they claim to be.... and only them.

  • Not surprising (Score:5, Interesting)

    by teakillsnoopy ( 516514 ) on Friday July 11, 2014 @09:54AM (#47431279)
    I've been proofreading engineering/medical papers for universities in Taiwan for over 7 years, and this is not surprising in the least. There is almost no stigma regarding plagiarism in this region (I've done work for Malaysian, Vietnamese, Indonesian, etc. authors). When I alert an author about copy/pasted text, their reaction is the one you would get if you told someone that their reference format needs to be changed: "Oh, ok. I guess I'll change it." The universities here never seriously investigate plagiarism because all the big fish at the top did it themselves to get to the top.
