
Use of AI Is Seeping Into Academic Journals - and It's Proving Difficult To Detect

The rapid rise of generative AI has stoked anxieties across disciplines. High school teachers and college professors are worried about the potential for cheating. News organizations have been caught with shoddy articles penned by AI. And now, peer-reviewed academic journals are grappling with submissions in which the authors may have used generative AI to write outlines, drafts, or even entire papers, but failed to make the AI use clear. Wired: Journals are taking a patchwork approach to the problem. The JAMA Network, which includes titles published by the American Medical Association, prohibits listing artificial intelligence generators as authors and requires disclosure of their use. The family of journals produced by Science does not allow text, figures, images, or data generated by AI to be used without editors' permission. PLOS ONE requires anyone who uses AI to detail what tool they used, how they used it, and ways they evaluated the validity of the generated information. Nature has banned images and videos that are generated by AI, and it requires the use of language models to be disclosed. Many journals' policies make authors responsible for the validity of any information generated by AI.

Experts say there's a balance to strike in the academic world when using generative AI -- it could make the writing process more efficient and help researchers more clearly convey their findings. But the tech -- when used in many kinds of writing -- has also dropped fake references into its responses, made things up, and reiterated sexist and racist content from the internet, all of which would be problematic if included in published scientific writing. If researchers use these generated responses in their work without strict vetting or disclosure, they raise major credibility issues. Not disclosing use of AI would mean authors are passing off generative AI content as their own, which could be considered plagiarism. They could also potentially be spreading AI's hallucinations, or its uncanny ability to make things up and state them as fact.
  • When I was in school, that was considered cheating.

    • by bradley13 ( 1118935 ) on Monday August 21, 2023 @04:30PM (#63786424) Homepage
      This. Using AI to write a paper about novel research is impossible. It can improve wording, correct grammar, even help come up with draft text. That's all fine, what's the problem? If authors submit a paper that contains crap, like nonexistent references? Then it's a crap paper, whether generated by AI or by a human. Reject it, done.
      • This. Using AI to write a paper about novel research is impossible. It can improve wording, correct grammar, even help come up with draft text. That's all fine, what's the problem?

        If authors submit a paper that contains crap, like nonexistent references? Then it's a crap paper, whether generated by AI or by a human. Reject it, done.

        Agreed, I don't actually find generative AI that useful for writing. If I'm writing, I'm trying to say something specific, and if I want to say something specific, I might as well write it myself instead of trying to coax it out of an LLM. The AI makes great-sounding filler, but unless you're a high school student desperately trying to hit a word count, why are you wasting words with filler?

        Editing is another thing entirely, especially since a lot of researchers can't write at the level of a native English speaker.

      • by serviscope_minor ( 664417 ) on Tuesday August 22, 2023 @03:21AM (#63787240) Journal

        Reject it, done.

        Oh FFS this is not how it works.

        It takes time and effort to give a paper a fair shake, especially as many papers are written by non-native speakers. Papers are rarely outright rejected; instead they are given a number of suggestions for improvement and resubmission.

        It is a very time consuming process.

        The peer review system is already near collapse; it could easily be tipped over the edge by a flood of crappy AI-written papers.

    • When I was in school, that was considered cheating.

      I've often heard it said that a calculator is just a tool that will do you little good if you don't understand the underlying concept well enough to input the equation properly in the first place.

      AI on the other hand, is literally asking a machine to do the work for you.

      • by CAIMLAS ( 41445 ) on Monday August 21, 2023 @05:03PM (#63786488)

        Clearly stated - by someone who doesn't understand how these LLM AI models work, and hasn't used them enough to see how silly their statement is.

        There is no technical task short enough that AI will not fuck it up. It can answer "What is X?" and "How is X different from Y?" fairly reliably, but it falls far short on complex ideas.

        Anything beyond that... well, even including that in many cases... you'd better know what you're doing.

        Case in point: I was looking for a quote that I could only paraphrase. I knew the meaning of the quote, but I wanted to offer attribution. Search engines weren't helping. ChatGPT was able to provide the correct quote based on my paraphrase. It could not have done that if I hadn't known how to paraphrase the quote.

        Writing the paper is not the hard part; the hard part is gathering the data and doing the research. ChatGPT is quite good, however, at taking a bunch of jotted notes, synopses, data points, etc. and helping you organize your ideas into a coherent form others can understand. It's still -your work-, in the same way that someone using grammar assistance in Word is still writing the paper.

        • I'm reminded of a line from The Carousel of Progress at Disney World:

          "But we do have television, when it works."

          Lots of new tech is rough around the edges in the beginning. Eventually these chatbots will reliably give correct answers, just as you don't see many TV repair shops around these days.

        • by jimll ( 1642281 )

          There is no technical task short enough that AI will not fuck it up.

          My corollary to this: "There is no technical task short enough that a random human will not fuck it up." We've all met folk like that, after all...

      • by r0nc0 ( 566295 )
        I'm reminded of Isaac Asimov's short story The Feeling of Power.
      • by Roger W Moore ( 538166 ) on Monday August 21, 2023 @06:42PM (#63786668) Journal

        AI on the other hand, is literally asking a machine to do the work for you.

        No, it is exactly like a calculator. If you tell a calculator to add two numbers, it does all the work for you, but in a paper nobody cares whether you did the calculation by hand or used a calculator; they only care that the result is correct. It is exactly the same for AI. I don't care whether an AI, a human assistant, or the authors themselves wrote the words in the paper I am reading; I only care that the paper is an accurate and easy-to-understand description of what was done and what the results were.

        The work that matters in a scientific paper is the experiment, study, calculation, etc. that the paper is reporting on. At least so far, AI is nowhere near being able to formulate and conduct novel and innovative scientific work, but if it can help improve the accurate and clear reporting of work that has been done, then that's great!

        • AI is nowhere near being able to formulate and conduct novel and innovative scientific work, but if it can help improve the accurate and clear reporting of work that has been done, then that's great!

          AI is also capable of generating a massive crap flood of vaguely plausible-looking papers, and it will be pressed into that service by desperate people in the awful mill of academia.

          How do you think the already overstretched peer review system is going to cope with that?

          • You can probably use ChatGPT as a tool to suggest improvements to a paper you've already written. I've never tried this, but it seems like it would be possible (a rough sketch of the idea is below). I'm sure you can tell it to spit out a whole paper too, but that makes it no different from any other tool that can be used improperly and cause harm.
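            For what it's worth, a minimal sketch of that edit-suggestion use might look like the following. It assumes the official openai Python package (v1+) and an OPENAI_API_KEY in the environment; the model name is a placeholder, not a recommendation:

            from openai import OpenAI

            client = OpenAI()  # reads OPENAI_API_KEY from the environment

            draft = "We measured the decay rate accross three trials and seen a 12% drop."

            # Ask only for suggestions on text the author already wrote; forbid new claims.
            response = client.chat.completions.create(
                model="gpt-4o-mini",  # placeholder -- substitute whatever model you have access to
                messages=[
                    {"role": "system",
                     "content": "Suggest grammar and clarity fixes for the user's draft. "
                                "Do not add claims, data, or references that are not in the text."},
                    {"role": "user", "content": draft},
                ],
            )

            print(response.choices[0].message.content)  # suggestions still need human vetting

            The restrictive system prompt is the point: it keeps the tool on the editing side of the line rather than letting it invent content.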
          • AI is also capable of generating a massive crap flood of vaguely plausible-looking papers

            Not without some effort on the submitter's part: left to its own devices, ChatGPT scores under 50% on a first-year undergrad physics exam, so no paper it writes will sound plausible without a lot of work. If someone is putting that much effort into attempting fraud, then you report them to their university and ban further submissions from them.

      • It doesn't do it well enough, alas.

        I tried to get an AI to write the introduction to a paper. I got 12 paragraphs. I had to remove 8 of them from the start since they weren't useful at all, just vapid rambling. Of the four I retained, one turned out to be bogus (the dates were off), and two turned out to be meaningless once contemplated at any depth. The one paragraph left was too casual to be used, so I refactored it into a single line and added two citations to back it up.
        In summation: 12 paragraphs in, one usable line out.

    • by r0nc0 ( 566295 )
      We were allowed to use slide rules but not calculators.
  • After all, doesn't reality have a liberal bias? ChatGPT seems uniquely qualified to deliver this kind of content.

  • By the many gods, using computers as a tool to advance, who knows what this will lead to? Next thing you know, scientists won't know how to use a protractor, or how to properly erase a clay tablet!
  • AI excels at two specific kinds of speech: corporate speak that doesn't mean jackshit, and scientific article language where no figure of speech is allowed.
    I had to write both kinds in my career, and every time I did, I felt I had to put my humanity aside to write "correctly".

    Does it surprise anyone that AI will flourish when writing for both?

  • by VeryFluffyBunny ( 5037285 ) on Monday August 21, 2023 @05:13PM (#63786504)
    ...is not the first language for a great number of researchers. It's really difficult to write an academic paper, let alone one in a foreign language. Also, universities are notoriously bad at teaching their students how to write academically. In the vast majority of cases, students are pretty much left to work it out for themselves; sink or swim, as it were.

    It's really not at all surprising that researchers, after months or even years of stressful, intensive, uncertain work, would use any tools available to them to get the final part of a research project finished & ready for submission & peer-review. And they know they're gonna have to read that feedback from anonymous reviewer #2. What a dickhead!

    I think what LLMs mean, at least for scientific publishing, is that peer-reviewers & editors will have to do their jobs all the more diligently to make sure that what's getting published is suitably high-quality & correct. Remember that those fake/parody papers got submitted & accepted into journals long before LLMs were ever a thing. In other words, I don't think LLMs are the root of the problem.
  • AI writes papers that sound like they are based on fact but are actually just bullshit with no basis in replicated science.

    Psychology PhDs on the other hand...
    • Psychology PhDs on the other hand...

      The worst of all are probably sociologists. They try to make their papers and articles look like hard science by including formulas, but they never show how the formulas were derived, because they aren't derived at all. They just write down something that reflects how they think various factors are related, without ever trying to find out if they're right.
      • ...then they email it to a woke media friend and:
        Headlines, grants, tenure in the All White Males Are Bastards Department, and yet more woke nonsense promoted as fact.
  • by Petersko ( 564140 ) on Monday August 21, 2023 @05:45PM (#63786560)

    Fighting this change is pointless. What we should do is double down on shaming bad science. AI-assisted or not, the person named on the paper is the one who should take the blame if it's wrong, unsupportable, or slipshod.

    Author: "But... but... the AI did that! I didn't mean that!"

    Community: "There, there... we understand. But... you'll need to wear this special cone-shaped hat for the next ten years or so... we're not angry. Just disappointed."

  • by joe_frisch ( 1366229 ) on Monday August 21, 2023 @06:08PM (#63786602)
    Many people who review scientific papers are not really paid to do that work. It's "expected" as part of their scientific jobs, but in many cases not supported by project budgets. Most scientists put a great deal of care into reviews because they understand how important they are to the scientific process, but researchers are human, and when things get busy, they may not put in as much time as is really necessary. AI-generated content is likely to contain more technical errors than human-generated material.

    An AI-generated reference list might contain real references that do not support the statements in the paper, or hallucinated references. An overworked reviewer might not check every reference in a paper.
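    Purely as an illustration (not something journals actually run), even a dumb existence check against the public Crossref REST API would catch the crudest hallucinated references. This sketch assumes the Python requests package; the DOIs are made-up examples:

    import requests

    # DOIs pulled from a submission's reference list (illustrative values)
    dois = ["10.1000/example.doi.1", "10.1000/example.doi.2"]

    for doi in dois:
        r = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
        if r.status_code == 200:
            # Crossref knows the DOI; print its recorded title for a quick eyeball check
            title = (r.json()["message"].get("title") or ["<no title>"])[0]
            print(f"OK      {doi}: {title}")
        else:
            # Unknown to Crossref: a red flag for a fabricated reference
            print(f"MISSING {doi}: not found in Crossref")

    Of course, this only confirms that a reference exists; whether it actually supports the claim it's attached to still takes a human reader.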

    An AI-generated summary might contain statements that are not supported by the rest of the paper, and again, the overworked reviewer might not notice.

    Reviewing AI-generated content would be like reviewing papers written by a dishonest researcher who sometimes just makes up material and tries to slip it into the paper in a way that won't be noticed. It's true that there are dishonest researchers, but they are rare - and sometimes the material they generate stays in the literature for a long time before it is caught.

    Finally, there is the tricky question of what to do if an AI-generated section of a paper contains plagiarized or fraudulent information. Normally, intentionally producing such material would be a career-ender, but if the researchers claim that they "just missed it", what do you do? The precedent set in the court system is very concerning: attorneys who presented a judge with fabricated information and lies created out of whole cloth to support their case ended up with only minor punishments because they claimed ignorance of how AI worked. Will the same be done for AI-generated fraudulent scientific papers?
    • I think this might be a push for actual paid reviewers (i.e., scientists who get paid for the review and can be fired for not doing their job). It may raise the price of subscribing to scientific journals, but honestly I think that outcome would be preferable to what's in place right now. Plus, it would expand the market for many Ph.D.s in an over-saturated academic job market.
      • A few journals have professional reviewers - and maybe that is a good approach. It provides an economic incentive chain for a journal that wants to maintain its reputation. Possibly research institutions would be OK with their scientists doing reviews under contract with journals. The existing system of voluntary reviews is struggling in a world where funding is tight.

        There are still some potential issues with AI that would need to be addressed. One is using AI to generate a large number of versions of the same paper.
    • I'd like to hope that the attorney who made an idiot of himself with AI got only a slap on the wrist this time because it's a totally new area; if there's a repeat by anybody, it should result in a significant period of disbarment.

    • Maybe something like the karma system /. uses could help.
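      A toy sketch of what that might mean for journal submissions, with every name, delta, and threshold invented for illustration:

      # Toy sketch of a Slashdot-style karma score applied to paper submitters.
      class SubmitterKarma:
          def __init__(self):
              self.scores = {}  # author id -> karma

          def record(self, author, delta):
              # e.g. +1 when a paper passes review, -5 for a hallucinated reference
              self.scores[author] = self.scores.get(author, 0) + delta

          def triage(self, author):
              # Low-karma submitters get a desk check before reviewers' time is spent
              return "full review" if self.scores.get(author, 0) >= 0 else "desk check first"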
    • I, for one, rarely check references when reviewing papers. I look to see that the paper is self-contained, expresses the idea well, and that the results presented support the conclusion of the paper. If the data is bullshit, I have no way of knowing that. If the bibliography is bullshit, I have no way of knowing that. Well occasionally I might know, in which case I would say something. I'm sure most scientific reviewers do the same. Every now and then, someone gets busted for faking it. The biggest problem
  • I push the buttons and chat-bing-gpt-ai pumps out a paper.

    I send it off for publication.

    If it is worthwhile, great, we gained something.

    If it is not worthwhile, someone must have disproven it, because that is the default requirement for something to be deemed not worthwhile. Also great: another thing in the "never attempt again" category.

    If your field cannot handle AI-generated text, your field sucks.

    In other words: go try doing something like this in physics.

    c.e.: yes, I oversimplify and handwave things.

  • by Whateverthisis ( 7004192 ) on Monday August 21, 2023 @08:18PM (#63786826)
    1) The lion's share of academic papers are poorly written. Scientists broadly are not good writers. If you have insomnia, go read an academic journal. ChatGPT may be fallible, but at least it uses proper syntax and writes something that is interesting.

    2) Maybe most papers will be actual science, i.e., reproducible [bbc.com].

  • If AI merely writes the paper using the data the scientist has, adding genuine references, it is doing nothing more than the scientist would have done. AI potentially adds massive value by referring to overlooked sources (make sure they're checked!). There is a serious risk that the AI will repeat quotes without crediting them - plagiarism - which needs to be checked for.

    Overall, AI-generated articles seem to add value. However, there needs to be greater accountability for them from the authors, who need to take responsibility for whatever appears under their names.

  • Are the claims in the paper correct? That's what should matter.
  • When enough people cheat their way through school, there won't be anyone left who can develop AI.
