Science

Quality of Scientific Papers Questioned as Academics 'Overwhelmed' By the Millions Published (theguardian.com) 32

A scientific paper featuring an AI-generated image of a rat with an oversized penis was retracted three days after publication, highlighting broader problems plaguing academic publishing as researchers struggle with an explosion of scientific literature. The paper appeared in Frontiers in Cell and Developmental Biology before widespread mockery forced its withdrawal.

Research studies indexed on Clarivate's Web of Science database increased 48% between 2015 and 2024, rising from 1.71 million to 2.53 million papers. Nobel laureate Venki Ramakrishnan called the publishing system "broken and unsustainable," while University of Exeter researcher Mark Hanson described scientists as "increasingly overwhelmed" by the volume of articles. The Royal Society plans to release a major review of scientific publishing disruptions at summer's end, with former government chief scientist Mark Walport citing incentives that favor quantity over quality as a fundamental problem.


Comments Filter:
  • Book cover (Score:5, Funny)

    by Zak3056 ( 69287 ) on Monday July 14, 2025 @01:48PM (#65520018) Journal

    A scientific paper featuring an AI-generated image of a rat with an oversized penis was retracted three days after publication

    If O'Reilly ever publishes a book on e.g. ChatGPT, this needs to be the cover.

  • Create a consortium of the most prestigious brands, whatever they may be.

    As a consortium, they agree to publish no more than one paper from any given author (or partial author) per year, and no more than X papers from any particular country per year (whatever X is appropriate).

    • I would simply make any papers citing said bad paper also be automatically thrown out of the system. If you cited a paper and didn't actually vet it, well you get what you deserve.

      Bad citations are also a huge and growing problem, according to previous Slashdot news.
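The cascade the parent proposes — retract one paper, then automatically throw out everything that cited it, and everything that cited those — is just reachability in the citation graph. A minimal sketch (the paper IDs and the `cited_by` map are hypothetical; a real system would pull this from a citation database):

```python
from collections import deque

def cascade_retractions(cited_by, seed_retractions):
    """Flag every paper that directly or transitively cites a retracted paper.

    cited_by maps a paper ID to the set of paper IDs that cite it;
    seed_retractions is the initially retracted set. Plain BFS.
    """
    flagged = set(seed_retractions)
    queue = deque(seed_retractions)
    while queue:
        paper = queue.popleft()
        for citer in cited_by.get(paper, ()):
            if citer not in flagged:
                flagged.add(citer)
                queue.append(citer)
    return flagged

# Hypothetical toy graph: B and C cite A; D cites C. Retracting A flags all four.
cited_by = {"A": {"B", "C"}, "C": {"D"}}
print(sorted(cascade_retractions(cited_by, {"A"})))
```

Whether such a blunt cascade is *fair* is exactly the objection one would expect — a paper may cite a later-retracted work only in passing — but the mechanism itself is trivial to implement.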
  • add a deposit (Score:5, Interesting)

    by OrangeTide ( 124937 ) on Monday July 14, 2025 @02:06PM (#65520066) Homepage Journal

    Require a nominal fee, say 100 Euros, to submit a paper for review. If the paper is determined to be AI slop, keep the deposit. If the paper is accepted or rejected, then refund the deposit. Sitting on the deposits for months makes it slightly harder to scale the submission of thousands of papers.
    Use the proceeds to fund a reputation system where repeat offenders can finally be discovered and turned away.

    • by HiThere ( 15173 )

      People already pay to have scientific papers published, so that would have at most minor effect.

      • by iisan7 ( 914423 )
        Indeed many journals have publication fees. Very few pass any share of those fees to the reviewers, which could expedite and improve reviews. (granted, it's hard to design a system that is robust to intention to subvert from its participants.... I've seen AI-generated reviews, and payments to reviewers could encourage more of that).
    • Good proposal but the problem with these is the so-called AI detectors right now are at best questionable and at worst a bigger problem than the AI.

      And it doesn't necessarily help with other uses of AI. For example, while I don't write peer reviewed papers, I often write a lot for various purposes in work, including some emails. I find myself writing out all of my thoughts, which can easily fall into the TLDR category, and then I copy the entire thing into AI and ask it to keep all of my core elements
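The deposit scheme proposed upthread is a small settlement rule: hold the fee at submission, refund it on any genuine review outcome (accept or reject), and forfeit it on a slop verdict to fund the reputation system. A toy sketch — the 100-euro figure is the parent's example, and the verdict labels are made up here:

```python
DEPOSIT_EUR = 100  # nominal fee suggested in the parent comment

def settle_deposit(verdict):
    """Return (refund_to_author, kept_for_reputation_fund) in euros.

    'accepted' and 'rejected' are real review outcomes and refund the
    deposit; 'slop' forfeits it to the reputation fund.
    """
    if verdict in ("accepted", "rejected"):
        return DEPOSIT_EUR, 0
    if verdict == "slop":
        return 0, DEPOSIT_EUR
    raise ValueError(f"unknown verdict: {verdict!r}")

print(settle_deposit("rejected"))  # honest rejection still refunds
print(settle_deposit("slop"))      # forfeit goes to the reputation fund
```

The incentive logic is the interesting part: an honest author risks nothing but liquidity, while a paper mill submitting thousands of papers has a large float tied up and loses the deposit on every detected fake.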

  • by alvinrod ( 889928 ) on Monday July 14, 2025 @02:09PM (#65520072)
    Universities seem to be doing exactly nothing to change this or to fix the culture and incentives that led to this problem. The very idea of "publish or perish" is what should perish. Any government funding should require preregistration of the study including the full set of hypotheses and methods. Give funding to making sure results are replicable instead of chasing after the next big thing.

    The system itself needs to be reengineered so that it doesn’t devolve into some kind of cargo cult. It may be painful, but that's the cost of leaving the problem to fester this long.
    • Universities seem to be doing exactly nothing to change this or to fix the culture and incentives that led to this problem. The very idea of "publish or perish" is what should perish.

      Universities rank conferences and journals. While lower tier conferences have exploded, the top tier have not. There are also a lot of arXiv papers. This is not necessarily a bad thing. The top conferences have the top experts in the field reviewing submissions. At least this is the case for the field that I'm familiar with (computer architecture and systems).

  • by Retired Chemist ( 5039029 ) on Monday July 14, 2025 @02:10PM (#65520086)
    although I am sure it is getting worse. The combination of the publish or perish system in academia with the vast expansion of the number of people doing "research" is the real issue. The ability to use LLM systems to spam out writing has only made it worse. Twenty-five years ago, I can remember finding papers (in high quality journals) with titles like "Study of something part 26". Instead of studying a subject and writing a paper, they were publishing every intermediate study as a separate article to increase the number of items on their resumes. As long as research output is judged by quantity rather than quality, this problem will remain. The real issue is how do you find the quality among all the garbage.
    • "minimum publishable unit"

      You're right, it's a long-recognized problem, but a viable alternative has yet to surface.

      Well, computer science has sort of deserted universities for corporations, where most impactful R&D is now conducted, and work is judged by people's willingness to use it (open source) or to pay for it (commercial). Does it really matter whether Huggingface publishes a "scientific paper" or a good posting on a blog? Doesn't seem to...

      • For Computer Science that is probably true, although I suspect a lot of best work is never made truly public in any way. From my own experience, most R&D done for corporations only makes it into the literature (if it does at all) in the form of patents, which are intentionally written to be as confusing as possible. For non-computer academic fields including hard sciences, this has been essentially unmanageable for quite a while and is clearly getting worse.
  • This is just a consequence of the entire LLM based AI hype cycle. We saw it in fiction a few years back. We're seeing it hit scientific papers now. When AI *can* be used to generate massive amounts of absolute bullshit, no one is there to ask if it should be used to generate massive amounts of absolute bullshit. It simply *IS* used to generate massive amounts of absolute bullshit until the genuine efforts at good faith creation are buried under the crapflood avalanche. This is the reality of any area that r

  • Academic Churn in Scientific Publishing: Systemic Incentives and the Prioritization of Quantity Over Quality

    Abstract

    The phenomenon of academic churn in scientific publishing, characterized by the rapid production of research articles driven by pressures to maximize publication counts, often compromises quality in favor of quantity. This trend is propelled by systemic academic incentives, wherein metrics such as publication volume, citation counts, and h-index scores significantly influence career
  • by drnb ( 2434720 ) on Monday July 14, 2025 @02:50PM (#65520218)
    I hate to say this, but make a first pass using AI. Of course AI cannot reason and truly understand a real paper. However, it can do some mechanical things, like look up all the references and see how legit they look, or make sure there is no circular "logic" going on. Maybe evaluate the body for contradictions: it probably can't reason about which side is correct, but it could flag contradictions, concepts with no supporting evidence, etc.

    It's just a tool, like spell check. Its output can help the human decide whether to reject or read.

    Then again, maybe some spot checks would be necessary to make sure the AI review is accurate. An AI might give a false review on a paper describing the dangers of AI. :-)
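The most mechanical of those first-pass checks — spotting circular citation loops among a set of papers — doesn't even need an LLM; it's ordinary cycle detection on the citation graph. A sketch, assuming we already have each paper's reference list (the IDs below are hypothetical):

```python
def find_citation_cycles(references):
    """Detect cycles in a citation graph via iterative depth-first search.

    references maps paper ID -> list of paper IDs it cites. Returns True
    if any citation loop exists (papers citing each other, directly or
    through intermediaries) — something a human reviewer should inspect.
    """
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on current path / done
    color = {p: WHITE for p in references}
    for start in references:
        if color[start] != WHITE:
            continue
        stack = [(start, iter(references.get(start, ())))]
        color[start] = GRAY
        while stack:
            node, edges = stack[-1]
            for nxt in edges:
                if color.get(nxt, WHITE) == GRAY:
                    return True           # back edge: circular citation
                if color.get(nxt, WHITE) == WHITE:
                    color[nxt] = GRAY
                    stack.append((nxt, iter(references.get(nxt, ()))))
                    break
            else:
                color[node] = BLACK       # all outgoing edges explored
                stack.pop()
    return False

# Hypothetical: X cites Y and Y cites X -> a loop worth flagging.
print(find_citation_cycles({"X": ["Y"], "Y": ["X"]}))  # True
print(find_citation_cycles({"X": ["Y"], "Y": []}))     # False
```

Checks like this, reference lookups, and simple consistency flags are exactly the "spell check"-grade tooling the comment describes — useful for triage, never a verdict on their own.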
  • It was just a crappy journal without a real review process.

    People laugh about the rat with the huge penis, but if you look at the paper, it also has "scientific diagrams" created by an image generator that are unreadable and probably meaningless. And I am not sure the text wasn't generated too (I do not know enough about biology to verify if it makes sense).

    There are journals that just do not care and journals that take money for publication. Both just want to publish as much as possible. The answer is, that

    • by HiThere ( 15173 )

      IIUC, all the journals require payment for the article to be published. Some of them are *only* in it for the money, but all of them *are* in it for the money.

  • It's bad enough in the US and other fully developed countries. This problem isn't coming from the top universities or the government labs. R1 universities have armies of top-notch students and tons of money. The government labs simply have tons and tons and oh-my-god-so-much-money. But researchers and profs at the mid-tier US universities are expected to meet similar expectations with a fraction of the students and the resources. Some of them will turn to cheating, especially if their job is on the line unless
  • Ain't nobody questioning it. You don't explode the number of journals and fill them with comparable quality. By increasing the quantity of published material about 100-fold, you're increasing the quantity of quality by about 10%. And you're making that a lot harder to find. There ought to be a few journals of high quality stuff and some place just to get all that other stuff written down. Much of the blame lies with the publishing companies and a lot of the blame comes from "Publish or perish".
    • by TWX ( 665546 )

      I didn't think I'd see Usenet's 'Eternal September' apply to scientific research.

      If the number of scientists and doctoral-students increased then it would follow that the number of publications would need to increase, but it sounds like at some point the wheels came off and the rigor in evaluating research was left behind.

  • by timholman ( 71886 ) on Monday July 14, 2025 @04:52PM (#65520632)

    Like most of the ills of academia, this one is largely self-inflicted.

    For decades, promotion & tenure committees have held junior faculty to standards that they themselves could never have achieved. You'd have P&T committee members passing judgment on assistant professors who had published more journal papers than any three of them put together. It didn't matter - the bar was constantly raised.

    So faculty increasingly turned to the MPU (minimum publishable unit) strategy - chopping up what should have been one really good paper into five or six mediocre ones. Even before AI exacerbated this problem, the crapflood of journal submissions was overwhelming reviewers and journal editors.

    And now? It's become even more nightmarish. Many of my colleagues simply refuse to review papers any longer. They're done wasting their time going through ultra-dense text with perfect grammar and spelling that was clearly written by ChatGPT. There have even been Ph.D. students who attempted to pass their qualifying exams with immaculate presentations on material that they could not answer even the simplest questions about. So now we're moving into the second phase of the rot, where students who earned fraudulent Ph.D.'s become the next generation of faculty.

    Academia will not be a pretty sight twenty years from now.

  • For as long as institutions keep focusing on the number of papers published by researchers in order to promote the latter, researchers have every incentive to publish as much as they can. Also, the fact that publishers have a vested interest in publishing (they get money from those who want to be published) does not help. Anyway, I remember reviewing a paper which was very little more than a rehash of one of the authors' previous papers setting N = 5. Never mind what N was. I rejected it, but I wonder whether
  • Mainstream media should be forbidden from reporting on any paper that has not been replicated, properly.

    This will encourage people to do replication in the real sciences, and we will never hear about psychology ever again.
  • We've got too many scientists. Since the only way to get noticed and/or to justify their existence is to have results, they get pressured into publishing half-arsed papers just to be out there, quantity over quality.

    It's also got to do with generational differences and the fact that real science takes a long time while younger generations expect results today.
