Hundreds of Researchers From Harvard, Yale and Stanford Were Published in Fake Academic Journals (vice.com)

In the so-called "post-truth era," science seems like one of the last bastions of objective knowledge, but what if science itself were to succumb to fake news? From a report: Over the past year, German journalist Svea Eckert and a small team of journalists went undercover to investigate a massive underground network of fake science journals and conferences. In the course of the investigation, which was chronicled in the documentary "Inside the Fake Science Factory," the team analyzed over 175,000 articles published in predatory journals and found hundreds of papers from academics at leading institutions, as well as substantial amounts of research pushed by pharmaceutical corporations, tobacco companies, and others. Last year, one fake science institution run by a Turkish family was estimated to have earned over $4 million in revenue through conferences and journals.

Eckert's story begins with the World Academy of Science, Engineering and Technology (WASET), an organization based in Turkey. At first glance, WASET seems to be a legitimate organization. Its website lists thousands of conferences around the world in pretty much every conceivable academic discipline, with dates scheduled all the way out to 2031. It has also published over ten thousand papers in an "open science, peer reviewed, interdisciplinary, monthly and fully referred [sic] international research journal" that covers everything from aerospace engineering to nutrition. To any scientist familiar with the peer review process, however, WASET's site has a number of red flags, such as spelling errors and the sheer scope of the disciplines it publishes.

  • As a scientist I run into this all the time. Everyone would love to get all their work into Nature, Science, and Cell, but they know the reality is that very little gets published in those journals. Then they look into other journals with lower impact factors and they have to weigh a lot of factors - including costs to publish and the expected length of time to get a publishing decision. Some journals aren't forthcoming about either of those.

    Then we see new open-access journals popping up with official-sounding names all the time. They promise quick turnaround, low publication costs (sometimes even free), and their open-access setup is generally already compliant with NIH and NSF requirements. If all we want to do is get the manuscript out and move on, these can look very tempting.

    What's the answer then? I don't know. Nobody does. bioRxiv (and others like it) offer an interesting possibility, but that isn't without pitfalls (not the least of which is that a paper there that gets rejected by a journal is somewhat more difficult to resubmit elsewhere).
    • by sinij ( 911942 ) on Tuesday August 14, 2018 @10:38AM (#57123262)
      A dark secret is that even peer-reviewed journals that take peer review seriously publish bad papers. Bad, as in someone took unjustifiable liberties with methods and/or data to show a correlation. It is nearly impossible to catch this in most fields, because the information needed to verify the claims doesn't fit in the 3,000 or so words the journal allows.

      I think peer review needs to adopt a model closer to open source: if you are publishing, you also have to open your data, so your findings can be independently verified by anyone, not just by three reviewers who are often not randomly selected. (A toy sketch of such a check follows below.)
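
      A toy, purely illustrative sketch of the open-data check proposed above: a hypothetical paper ships its raw numbers and a claimed Pearson correlation, and anyone can recompute the statistic and compare. The dataset, the reported value, and the tolerance are all invented for this example; nothing here comes from the article.

        import math

        # Hypothetical "open data" shipped alongside a paper, plus the statistic it claims.
        dose     = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
        response = [2.1, 3.9, 6.2, 7.8, 10.1, 12.2]
        REPORTED_R = 0.99   # correlation the (made-up) paper reports

        def pearson_r(xs, ys):
            """Plain Pearson correlation, standard library only."""
            n = len(xs)
            mx, my = sum(xs) / n, sum(ys) / n
            cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
            sy = math.sqrt(sum((y - my) ** 2 for y in ys))
            return cov / (sx * sy)

        r = pearson_r(dose, response)
        print(f"recomputed r = {r:.3f}, reported r = {REPORTED_R}")
        print("consistent with the claim" if abs(r - REPORTED_R) < 0.01 else "does NOT match the claim")

      Real verification would of course need the full analysis pipeline, not one statistic, but the point stands: with open data, the check doesn't require a gatekeeper.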
      • by ArhcAngel ( 247594 ) on Tuesday August 14, 2018 @10:47AM (#57123328)
        You mean like when Ancel Keys (a psychologist) took data on diets in regions around the world and hypothesized that heart disease was caused by eating fat? The only problem was that he excluded data from his research that disproved his hypothesis, like France, where an extremely high-fat diet coincided with a very low incidence of heart disease. His hypothesis was never proven, but it has influenced our diets for the last 60 years. I switched to a ketogenic diet last year and am the healthiest I have ever been.
        • by Archtech ( 159117 ) on Tuesday August 14, 2018 @11:07AM (#57123454)

          Ancel Keys was a physiologist, not a psychologist. There is a difference.

          In some ways Keys was an American archetype. His overriding concern seems to have been to build up his reputation, glorify himself, and belittle anyone who dared to disagree with him. The sad thing is that he was extremely clever and capable of conscientious work - until he got carried away by his cholesterol hypothesis. Then, when it was no longer possible to maintain that cholesterol in food was harmful, he switched to attacking saturated fat and red meat.

          My favourite Keys episode concerns his "research" into the diets of Mediterranean peoples. His researchers inquired, rather perfunctorily, what people ate and drank in various countries.

          They came to the conclusion that the people of Crete owed their good health and long lives to a diet low in meat; this was later developed into the "Mediterranean Diet". Unfortunately, one of the weeks during which the survey was carried out in Crete fell within Lent, when the local people fasted - avoiding meat among other foods.

          Similar mistakes were made in countries such as Italy, France and Spain, whose people ate (and still eat) far more meat than Keys admitted.

          See, for an introductory account, https://www.diabetes.co.uk/in-... [diabetes.co.uk]

          • by Joosy ( 787747 )

            In some ways Keys was an American archetype. His overriding concern seems to have been to build up his reputation, glorify himself, and belittle anyone who dared to disagree with him.

            True! Only Americans are like that.

            • In some ways Keys was an American archetype. His overriding concern seems to have been to build up his reputation, glorify himself, and belittle anyone who dared to disagree with him.

              True! Only Americans are like that.

              I did not assert that all Americans are like Keys, or that only Americans have those characteristics. I said that Keys was typical, in some ways, of a certain type of American.

              archetype
              noun
              1 a very typical example.
              2 an original model.
              3 Psychoanalysis (in Jungian theory) a primitive mental image inherited from the earliest human ancestors and supposed to be present in the collective unconscious.
              4 a recurrent motif in literature or art.

              DERIVATIVES
              archetypal adjective

          • Sorry, but Nina Teicholz is a fraudster who has been successful in building an empire pushing dangerous nonsense. This site (thescienceofnutrition.wordpress.com) provides excellent, detailed exposures of her BS [wordpress.com]. What it comes down to is that essentially nothing in her magnum opus "The Big Fat Surprise" is true, and the attacks on Ancel Keys are without merit. Teicholz is not mistaken, she is engaging in deliberate deception.

            Don't believe me? Go to the site and read what Seth Yoder reveals about Teicholz's lies...

        • by Nidi62 ( 1525137 )

          I switched to a ketogenic diet last year and am the healthiest I have ever been.

          Small sample size, but I recently switched to a "mostly" keto diet and am down about 10 lbs in the last 2 weeks, after not losing weight while working out for the past 2-3 months. I've cut out most carbs but am still trying to keep fats in check to a moderate degree. Hoping I can keep up the pace.

    • What's the answer then?

      Peer review of publications themselves, with results published to permit submitters to make intelligent selections. Maybe there ought to be a journal about journals.

    • by jellomizer ( 103300 ) on Tuesday August 14, 2018 @10:58AM (#57123412)

      The Publish or perish system is really at fault. It puts more pressure on scientists to hype up their research than to spend their time actually studying, measuring, modifying... doing the actual science.

      How much good science is going unnoticed because some low-level scientist out there lacks the charisma to make a compelling journal entry?

      • The Publish or perish system is really at fault.

        So then what do we replace it with?

        he lacks the charisma to make a compelling journal entry.

        Writing a good paper has little to do with charisma.

      • by Phat_Tony ( 661117 ) on Tuesday August 14, 2018 @03:21PM (#57125964)
        That's right. My wife is a scientist; she has a PhD in pharmacology.

        Talking to her through her undergrad, PhD, and Postdoc, I identified what I believe are three separate problems in science that each exacerbate the others and collectively are having a devastating effect on the field:

        1. Publish or perish.

        2. Nobody reports negative results.

        3. Most scientists below the level of Principal Investigator of a lab are assigned their projects.

        So it used to be, back when my parents got their doctorates, that a new scientist joined a lab, proposed their own research, conducted the research, wrote a thesis, and then defended it before a committee. If the committee was decent, it didn't matter whether the results were positive or negative. They grilled the candidate on how they generated their hypothesis, why the implications would be important whether positive or negative, how they set up the experiments and conducted them, and how they analyzed and presented the results - basically, the candidate had to prove they knew everything it takes to operate effectively as a scientist.

        Maybe if the research was exceptional, the thesis advisor would also recommend that it be submitted to a journal, but most theses just went to the institution's library. The point was to prove the candidate understood and could perform science as an academic exercise, not to contribute usefully to the field. Today that is completely different. Most PhD candidates are assigned a project by the PI of the lab they join. So right off the bat, you aren't differentiating people by the quality of their ideas, which is probably the most important trait for a scientist. Instead, the quality of the idea assigned to them is likely to have a huge impact on how their career goes. It's like randomly handing out career potential without regard for ability. And there is no point in a committee grilling them about the formation of the hypothesis or what positive or negative results would contribute to understanding, because they never came up with the hypothesis in the first place.

        Hence: "got your bad project" [youtube.com].

        Then they have to have one or more papers accepted by peer-reviewed journals to get their PhD. The papers are their thesis, and as long as they were accepted for publication, defending them is now perfunctory. Their acceptance is the only real test for the PhD.

        Which means nobody is trying to make sure the candidate actually understands and can perform science; supposedly the papers are evidence of that, but we all know that isn't really true now; the peer reviewers do not attempt to replicate the study or dig in deeply enough to see if any of it is actually high-quality work.

        Not being able to publish negative results means that if you are assigned a project that ends up indistinguishable from the null hypothesis, you can waste years of your life on a luck-of-the-draw assignment, with no regard for your ability. Or... you can fudge the data.

        "Fudging the data" here doesn't even necessarily imply anything overtly malicious. Talking to people in the lab who'd discuss how experiment after experiment had failed to show the desired effect, and what experiment they were planning next to try to demonstrate it, my standard comment was "we need to pass a 95% confidence test, so we'd better plan about 20 experiments to prove it."

        A possible 4th thing to list here is that the system now takes so long to get through, from undergrad to PhD to a postdoc or two or three, that by the time anybody gets to be a PI and can actually start pursuing their own research ideas, they're past the age at which past scientists achieved approximately every major breakthrough in the history of science.
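
        A back-of-the-envelope check of the "95% confidence test / 20 experiments" quip above, assuming each experiment is independent and tested at alpha = 0.05; the numbers are illustrative, not from the comment or the article.

          # If each experiment has a 5% false-positive rate and you keep running
          # independent experiments until one "works", the chance of at least one
          # spurious "significant" result grows quickly.
          alpha = 0.05
          for n in (1, 5, 10, 20):
              fwer = 1 - (1 - alpha) ** n   # family-wise error rate
              print(f"{n:2d} experiments -> P(at least one false positive) = {fwer:.0%}")
          # Roughly 5%, 23%, 40%, 64% -- which is why "plan about 20 experiments"
          # reads as a joke about p-hacking, not a recipe.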

    • by lgw ( 121541 )

      ... they have to weigh a lot of factors - including costs to publish ... What's the answer then? I don't know. Nobody does.

      In fiction publishing, there's a clear, bright-line rule: any publishing arrangement in which money flows from the author to the publisher at any point in the process is a scam.* It's astonishing to those outside academia that it's different there. Heck, my one published academic paper paid a tiny honorarium IIRC, and they sent us some free off-prints, but that was ... more than 20 years ago.

      *Obviously, if you're intentionally "vanity" self-publishing, knowing the books will never see the inside of a bookstore...

    • What's the answer then? I don't know. Nobody does. bioRxiv (and others like it) offer an interesting possibility, but that isn't without pitfalls (not the least of which is that a paper there that gets rejected by a journal is somewhat more difficult to resubmit elsewhere).

      Hackaday has started its own journal [hackaday.io]

      It's free, and it wants to become an actual journal with all the rigor and benefits of the mainstream journals.

      It also wants to steer away from some of the problems we see with current journals: negative results are allowed, it aims to avoid citation inflation, and so on.

      It currently has one issue with one paper, and has an open call for more papers.

      It targets citizen science, and we're seeing a lot of that in the hacker community, but would welcome and accept

      • What's the answer then? I don't know. Nobody does. bioRxiv (and others like it) offer an interesting possibility, but that isn't without pitfalls (not the least of which is that a paper there that gets rejected by a journal is somewhat more difficult to resubmit elsewhere).

        Hackaday has started its own journal [hackaday.io]

        ...{snipsnip}

        There's an opportunity here to start something new and avoid all the pitfalls we keep hearing about.

        A noble effort. How do you avoid the challenge of accepting, certifying, or publishing inaccurate works when a majority of the reviewers are bots or paid button-clickers? By "majority" I mean enough "peers" to control what gets accepted as truth. When enough peers agree with a paper that says "2 + 2 = 5", it becomes the accepted standard.

        • by xvan ( 2935999 )
          "peer", is science, insn't every random Joe, but another member of academia versed enough on your field to understand your paper. You could easily have a credit system, where users earn reputation for publishing and reviewing works but loss reputation for passing works with glaring errors. Once a work is published it can still be criticized by the public community with arguments on issues with the paper right next to the article (a'la wikipedia talk).
          • The bots are smart enough to correctly peer-review irrelevant papers and accumulate reputation points over time. When the bot-herder wants to publish a bogus paper, there is a chance that the majority of the selected reviewers will be under their control. If they don't like the odds, they can withdraw the paper and resubmit it later with a modified title until a favorable number of their reviewer-bots are appointed as peer reviewers. (A rough probability sketch follows below.)

            The "public community" is useless as that is even more easi

    • As a scientist I run into this all the time. Everyone would love to get all their work into Nature, Science, and Cell, but they know the reality is that very little gets published in those journals. Then they look into other journals with lower impact factors and they have to weigh a lot of factors - including costs to publish and the expected length of time to get a publishing decision. Some journals aren't forthcoming about either of those. Then we see new open-access journals popping up with official-sounding names all the time. They promise quick turnaround, low publication costs (sometimes even free), and their open-access setup is generally already compliant with NIH and NSF requirements. If all we want to do is get the manuscript out and move on, these can look very tempting. What's the answer then? I don't know. Nobody does. bioRxiv (and others like it) offer an interesting possibility, but that isn't without pitfalls (not the least of which is that a paper there that gets rejected by a journal is somewhat more difficult to resubmit elsewhere).

      Isn't "Publish or Perish" the mantra for tenureship? As noted in the article, the journals contain a mix of actual research papers and total fiction. It's a bit disquieting to know that a doctor recommending some combination of medications or perform surgery has been getting his information from those totally fictitious papers published in an inadequately-reviewed journal.

      So why would a person or group knowingly publish incorrect information with the intent of deceiving others? Someone, some person or

      • the journals contain a mix of actual research papers and total fiction.

        It's really, really, exceptionally hard to come up with a ratio of how many are made up. They come up, and they come up in nearly every journal you can think of, but how often they make it through is really hard to determine.

        In part this is due to how peer review works. Reviewers are there to read the paper and check that it is scientifically sound and that it meets the criteria of the journal. However, the reviewers are not paid to actually reproduce the experiments in said paper; indeed...

    • by Uecker ( 1842596 )

      There is only a problem if you think everything published in science must be true. This isn't the case and never was. There was a time when there was no peer review at all. And even otherwise good journals sometimes publish bad papers. So the solution for scientists is obvious: do not believe everything you read. If a bad journal publishes only bad science, nobody will bother to read it, and people who publish there do not gain reputation. So there is not really a problem for science... It is a problem though for stupid...

  • by sinij ( 911942 ) on Tuesday August 14, 2018 @10:32AM (#57123202)
    The peer review process is insufficient to guarantee that scientific rigor is practiced. For example, whole disciplines, like gender studies, have gone off the deep end into mysticism, unfalsifiable claims, and politically driven demagoguery, and peer review does nothing to curtail even the worst of these excesses.

    So how are these pay-to-play journals categorically different from, for example, a "legitimate" journal of Feminist Studies?
    • by Brett Buck ( 811747 ) on Tuesday August 14, 2018 @10:36AM (#57123242)

      Peer review can easily *enforce* all these ridiculous political biases. Essentially, you submit a paper for approval to the very people who came up with the biases in the first place. Almost anything outside the 3 fundamental hard sciences is subject to this effect.

      • You mean like any publication in the economics field?
      • by sinij ( 911942 )
        More so: even if you set out to disprove a paper, publishing an "unable to duplicate findings" paper is even harder than publishing a methods paper. Almost nobody is interested in doing this, almost nobody would publish your results, and so questionable papers go unchecked. This leads to a "body of knowledge" poisoned by bad data.
      • by lgw ( 121541 )

        Almost anything outside the 3 fundamental hard sciences is subject to this effect.

        String theory says hi.

        The bigger problem seems to be papers that contain no reproducible studies to begin with. Gender studies journals could, hypothetically, be full of good statistical research and reproducible measures and analysis.

        Meanwhile, over in the world of bio-chem, people just make up synthesis steps (well, they try something reasonable, it doesn't work, but they claim it does to make quota). At least those results are hypothetically falsifiable: some auditing has been done to show that maybe ha

        • String theory is a different issue: mathematically sound and objectively consistent with all observations, but intrinsically indistinguishable by experiment. That's a lot different from something like sociology, which is inherently subjective.

          • by lgw ( 121541 )

            I don't see an objective difference. You do know there are papers published in string theory journals that are full of philosophical rambling without an equation to be seen? "Mathematically sound and objectively consistent with all observations" is not sufficient to be science, let alone good science (nor is it, strictly speaking, necessary). Making specific, falsifiable predictions about observations, however, is necessary, and string theory never delivered. What a tremendous waste of genius.

            Sociology is not inherently subjective...

  • by timholman ( 71886 ) on Tuesday August 14, 2018 @10:50AM (#57123358)

    Every week, I get one or more solicitations inviting me to be a keynote speaker at a conference, serve as an editor of a journal, or submit an invited paper, all from conferences and journals that I've never heard of before. In the grand scheme of things, I am far from being an academic superstar. I can only imagine how much worse it must be for some of my colleagues.

    What is surprising to me about this story isn't that some researchers are padding their CVs with publications in bogus journals, but that they aren't being called on it. If I were to list such a publication on my own annual report, my department chair would have me in his office in an instant, demanding to know why I was trying to damage the school's reputation with such idiocy.

    So the question is this: why isn't this oversight also taking place at Harvard, Yale, and Stanford? Is there really so little departmental supervision that researchers at those schools can actually get away with this?

    • Perhaps nowadays reputation is increasingly taking second place to money. Without money, reputation is not considered valuable. With money, who needs reputation?

    • by m00sh ( 2538182 )

      Every week, I get one or more solicitations inviting me to be a keynote speaker at a conference, serve as an editor of a journal, or submit an invited paper, all from conferences and journals that I've never heard of before. In the grand scheme of things, I am far from being an academic superstar. I can only imagine how much worse it must be for some of my colleagues.

      What is surprising to me about this story isn't that some researchers are padding their CVs with publications in bogus journals, but that they aren't being called on it. If I were to list such a publication on my own annual report, my department chair would have me in his office in an instant, demanding to know why I was trying to damage the school's reputation with such idiocy.

      So the question is this: why isn't this oversight also taking place at Harvard, Yale, and Stanford? Is there really so little departmental supervision that researchers at those schools can actually get away with this?

      Who cares anymore? You can just put a PDF up somewhere of your work.

      Peer reviews are a joke. It's a waste of time trying to satisfy some random person's demand that some completely unrelated issue be resolved. With a communication system of one message back and forth every 2-3 months, it takes years to get something done. Unless you know your peers, and then the whole thing is just a formality.

      Respected journals and conferences have as much garbage as the next. 99% of the papers are only good for their

  • the team analyzed over 175,000 articles published in predatory journals and found hundreds of papers from academics at leading institutions, as well as substantial amounts of research pushed by pharmaceutical corporations, tobacco companies, and others. Last year, one fake science institution run by a Turkish family was estimated to have earned over $4 million in revenue through conferences and journals.

    What are the criteria used in determining that a publication is "fake" and/or "predatory"?

    Is it the accuracy and re

    • by bsDaemon ( 87307 )

      If, after they publish you, they ask you to buy copies... like those pay-to-be-published "literary journals" that publish whatever crap poem someone who buys 5 copies submits -- I'd definitely say that's a fake journal.

      • by mi ( 197448 )

        If, after they publish you, they ask you to buy copies...

        The words "ask", "buy", "copies" and "copy" aren't present in TFA...

        I'd definitely say that's a fake journal.

        That may — or may not — qualify as "predatory", but certainly not "fake"...

        like those pay-to-be-published "literary journals" that publish whatever crap poem someone who buys 5 copies submits

        What I guess you are trying to say is that a publication funded by the published authors rather than by readers/subscribers is "fake".

        I would

      • I would argue that just because Beatrix Potter had to self-publish the first printings of Peter Rabbit doesn't make the publisher she paid any less legitimate, or Peter Rabbit any less a classic.

        There is a difference between a crappy, poor-quality publisher and a fake publisher, one being that with a fake publisher the published material never materializes.

        The literary collections that publish poetry for someone who buys a copy are called vanity publishers; they are no less publishers, and some of them publish best sellers too.

        • by mi ( 197448 )

          Maybe we can conclude that such submitter-funded publications, while not necessarily "fake", cannot be treated as evidence that the author is a "real" scientist (or writer, or poet), because being established as such requires paying customers. So Ms. Potter could not claim to be a writer simply on account of having published her book at her own expense, but only on the merits of its contents.

          Which takes us back to TFA: if what its authors recognize — based on contents

          • In this case it's the difference between a disreputable publication and a reputable one. Even a real scientist can be duped into publishing in a disreputable journal.

            • by mi ( 197448 )

              difference between a disreputable publication and reputable publication

              You replaced "fake" with "disreputable" — still without explaining, what that means... What is it, that makes a publication "disreputable"?

  • I wouldn't be at all surprised if the above are what's really behind this sort of shenanigans. The former just want to disrupt the U.S. as much as possible; the latter sincerely believe that all science is evil and of Satan, and would love nothing better than to discredit all of it. We're talking about the anti-vaxxers and the like here, as well as out-and-out Dominionists; real facts, real truth, and encouraging people to think for themselves are all diametrically opposed to their agenda.
  • Let's call it what it is: "post-truth" means having an opinion that runs contrary to the one monied interests want you to have.

    And that's a good thing because it means you aren't being manipulated to make someone richer.

  • by bettodavis ( 1782302 ) on Tuesday August 14, 2018 @11:40AM (#57123672)
    When the number of published papers becomes a performance metric, people will meet the metric, regardless of whether the papers are any good.
  • I get dozens of requests a week to submit papers to obviously fake journals or attend fake conferences. It is not hard to establish that they are, in fact, fake. If people submit to these places without doing any due diligence on where they are submitting, how can I trust their scientific results?

    Scepticism is fundamental to science. This should also apply to where you choose to publish.

  • Most of the fake science was generated!

    https://pdos.csail.mit.edu/arc... [mit.edu]

"An idealist is one who, on noticing that a rose smells better than a cabbage, concludes that it will also make better soup." - H.L. Mencken

Working...