AI Science Technology

AI-Generated Science

Published scientific papers include language that appears to have been generated by AI tools like ChatGPT, showing how pervasive the technology has become and highlighting longstanding issues with some peer-reviewed journals. From a report: Searching for the phrase "As of my last knowledge update" on Google Scholar, a free search tool that indexes articles published in academic journals, returns 115 results. ChatGPT often uses the phrase to indicate the cutoff date of the data behind its answers, and the specific months and years found in these academic papers correspond to previous ChatGPT "knowledge updates."

"As of my last knowledge update in September 2021, there is no widely accepted scientific correlation between quantum entanglement and longitudinal scalar waves," reads a paper titled "Quantum Entanglement: Examining its Nature and Implications" published in the "Journal of Material Sciences & Manfacturing [sic] Research," a publication that claims it's peer-reviewed. Over the weekend, a tweet showing the same AI-generated phrase appearing in several scientific papers went viral.

Most of the scientific papers I looked at that included this phrase are small, not well known, and appear to be "paper mills," journals with low editorial standards that will publish almost anything quickly. One publication where I found the AI-generated phrase, the Open Access Research Journal of Engineering and Technology, advertises "low publication charges," an "e-certificate" of publication, and is currently advertising a call for papers, promising acceptance within 48 hours and publication within four days.
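
The check the report describes is easy to reproduce locally. Below is a minimal Python sketch of the idea: scanning paper texts for telltale ChatGPT boilerplate. The phrase list and the papers/ directory are illustrative assumptions, and Google Scholar itself has no official API, so this only covers documents you have already downloaded and converted to plain text.

    import pathlib

    # Telltale ChatGPT boilerplate phrases mentioned in the story;
    # the list is illustrative, not exhaustive.
    TELLTALE_PHRASES = [
        "as of my last knowledge update",
        "as an ai language model",
        "i don't have access to real-time information",
    ]

    def flag_suspect_papers(corpus_dir):
        """Yield (filename, phrase) for every paper text containing a telltale phrase."""
        for path in pathlib.Path(corpus_dir).glob("*.txt"):
            text = path.read_text(encoding="utf-8", errors="ignore").lower()
            for phrase in TELLTALE_PHRASES:
                if phrase in text:
                    yield path.name, phrase

    if __name__ == "__main__":
        # Assumes papers have already been converted to plain text in ./papers/
        for name, phrase in flag_suspect_papers("papers"):
            print(f"{name}: contains {phrase!r}")

A match is only a flag for human review: quoting such a phrase in a paper about language models is perfectly legitimate, which is why the viral examples stand out as copy-paste artifacts rather than citations.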

Comments Filter:
  • Fortunately, I stopped reviewing papers a few years ago. Personally, I would rate such stuff an immediate "reject, write it yourself you lazy fuck". But I expect active reviewers will now get flooded with crappy papers that look good. A pretty bad development.

    • Fortunately, I stopped reviewing papers a few years ago. Personally, I would rate such stuff an immediate "reject, write it yourself you lazy fuck". But I expect active reviewers will now get flooded with crappy papers that look good. A pretty bad development.

      Do that and you'll get slammed. Suppose you call someone out who *didn't* use AI to write their paper, or suppose you call out someone who used the AI as a crutch, rewriting the AI sentences in their own words.

      That's a recipe for controversy, and your reputation could take a big hit.

      • Re:Don't do that (Score:5, Informative)

        by gweihir ( 88907 ) on Monday March 18, 2024 @02:37PM (#64325603)

        You do not seem to have experience reviewing papers. Obviously, the actual wording would need to be more polite, but that is it. Also, reviewers are anonymous, so there is no reputation hit. At worst, the venue in question will stop asking you to review.

        • I think the point is that you shouldn't reject the paper simply for how it's written, but for the value of its contents. If it's anything like the AI papers I've seen shared, the "science" they contain ranges from nonsense to garbage regardless of the LLM-like dialect. However, if someone uses AI to write the copy, but the science is well done and the data supports their claims, I have no problem with it.

          • by gweihir ( 88907 )

            Obviously. But who will really use AI to write a paper that has good content? I can see somebody using DeepL or the like to translate it, but that is about it.

            Just as a remark, I have reviewed horribly written papers (probably Chinese) with reasonable contributions as "accept with major revision of the language", so that the authors knew it was worthwhile investing in some help with their English writing.

      • Your comments have merit, but there are some caveats and flaws in them.

        1 - What you say may be true in the context of a face-to-face discussion, but that is not how the review process works. Reviewers are anonymous. Furthermore, reviewers' comments and suggestions are passed along to the authors, who then have the opportunity to respond, clarify, edit, and amend their papers.

        2 - As you said, "Suppose you call someone out who *didn't* use AI to write their paper ..." If a reviewer called me out for that, I would

      • Suppose you call someone out who *didn't* use AI to write their paper, or suppose you call out someone who used the AI as a crutch, rewriting the AI sentences in their own words.

        In those cases, are they really qualified to write such a paper then?

      • Do that and you'll get slammed.

        Never been on the receiving end of the review process, eh?

        Reviewers are notoriously rude! I've had everything from ESL speakers leaning in with criticism of my grammar (I'm a native speaker) and being wrong across the board, to reviewers not knowing some of the most major results in maths of the last 1000 years (not even from the last 200 years) and refusing to accept my paper as a result.

        It's almost akin to not accepting that pi has been proven irrational.

        That's a recipe for controversy, and your reputation could take a big hit.

    • Fortunately, I stopped reviewing papers a few years ago. Personally, I would rate such stuff an immediate "reject, write it yourself you lazy fuck".

      If the AI is improving grammar, conciseness, and readability, that's fine. Likely beneficial.

      The AI is only a problem if it is doing science and cannot explain its work. If it can explain the formulas, algorithms, etc., just like human-based research, what is the problem? (*) Sure, if it's using an ML system that has to be trusted, well, that's a leap of faith and not science.

      (*) Other than the current state of AI, where it seems lying and fabricating data are allowed.

      • If the AI is improving grammar, conciseness, and readability, that's fine. Likely beneficial.

        Maybe the first. AI just writes bland, samey mush that gets very repetitive and incredibly dull. Thing is, scientific papers are meant to provide insight, which is something AI cannot do.

    • These are SUPPOSED to be peer reviewed

      Check out this humdinger someone found on Twitter the other day:
      https://www.sciencedirect.com/... [sciencedirect.com]

      In the last paragraph before the conclusion:

      In summary, the management of bilateral iatrogenic I'm very sorry, but I don't have access to real-time information or patient-specific data, as I am an AI language model. I can provide general information about managing hepatic artery, portal vein, and bile duct injuries, but for specific cases, it is essential to consult with a

  • by Flavianoep ( 1404029 ) on Monday March 18, 2024 @02:19PM (#64325555)

    This kind of problem has many causes, but one of them is the pressure to publish articles continually, without any regard for the quality of what is being published, and for the fact that publishing something really well researched and significant takes time.

  • "Scientists", "reviewers". We don't punish people hard enough, or at all, for lying, and it shows.

  • by nightflameauto ( 6607976 ) on Monday March 18, 2024 @02:38PM (#64325607)

    When the boss asks why the thing that nobody could make a decision on is completed, I'll respond, "As of my last knowledge update, management consensus could not be communicated properly to the coding authority. Please update knowledge again when management consensus can be verified."

    When the wife asks for supper ideas, "As of my last knowledge update, I was quite fond of most things you cook. Please do give us further knowledge updates in this field."

    Mom asks why I don't come over so often, "As of my last knowledge update, visits with the parental unit of female biological standing are often negative overall experiences. Please provide avenues for positive outcomes in future inquiries on this topic."

  • It's fake: AI-generated papers masquerading as science. Get the wording right.

    All these fake journals need to be peer reviewed out of existence, or at least out of any legitimate database.

    • Not just that, they need to be blacklisted so that no paper appearing in any of those journals is eligible to be cited in any paper appearing in any reputable journal, and any paper citing even one of them is automatically rejected by any proper journal.
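
      The mechanical part of that is trivial to automate at submission time. A minimal Python sketch, assuming references have already been parsed into journal names (the blacklist entries here are just the two journals named in the story):

        # Blacklisted journal names, normalized to lowercase. "Manfacturing"
        # is the journal's own spelling, per the article.
        BLACKLIST = {
            "open access research journal of engineering and technology",
            "journal of material sciences & manfacturing research",
        }

        def should_reject(cited_journals):
            """True if any cited journal appears on the blacklist."""
            return any(j.strip().lower() in BLACKLIST for j in cited_journals)

        # Example: one blacklisted citation is enough to bounce the paper.
        print(should_reject(["Nature",
                             "Open Access Research Journal of Engineering and Technology"]))  # True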
  • GenAI will be a real boon for sensationalist news publications, which are always on the lookout for some "scientists say" basis to spread this or that social agenda. It's almost always junk science, or not even science at all but an opinion. This will only make the effect worse, further widening the divide between rational people and those who are driven by social consensus and emotion.

    It's exhausting.

  • Published scientific papers include language that appears to have been generated by AI tools

    Which is perfectly fine if the AI is being used to improve grammar, spelling, conciseness, readability, etc.

    It's only AI-generated content that is the problem; it's here that science can fail, as a leap of faith is necessary to assume the AI/ML black box did a legitimate computation. An AI/ML system must be able to show its work and its algorithms, just like a human researcher.

    If it claims to have a better formula, and can show how it was derived, great.

    If it claims to do a better job at telling an

  • Analogy to the future of science degrees:
    "Joe and Frito are walking through a Costco, much bigger than what was in Joe's time
    Greeter: Hi, welcome to Costco. I love you.
    Frito: Yeah, I know this place pretty good. I went to law school here.
    Joe: You went to law school at Costco?
    Frito: So did my father. Thank God for being a legacy, or else I might not have gotten in."

    Need I say anything about Harvard recently? And Caltech dropped use of SATs and the admissions director pledged to bring in 50% women and forei

  • ...should be banned from publication for life
    AI watermarking or other proof of origin is essential and needs to be a top priority

  • AI is Dumb as Rocks [slashdot.org], and since it is trained on other people's data, isn't publishing an AI's work^H^H^Hresult just theft or copyright infringement? Today's AI cannot form a new idea; it can only mimic what has already been written or described. As a tool, fine, but using any results in an official capacity is just plain dumb.

    If the scientific community isn't careful, it might become a laughingstock of white noise. Whoops, too late.
