Use of AI Is Seeping Into Academic Journals - and It's Proving Difficult To Detect
The rapid rise of generative AI has stoked anxieties across disciplines. High school teachers and college professors are worried about the potential for cheating. News organizations have been caught with shoddy articles penned by AI. And now, peer-reviewed academic journals are grappling with submissions in which the authors may have used generative AI to write outlines, drafts, or even entire papers, but failed to make the AI use clear. Wired: Journals are taking a patchwork approach to the problem. The JAMA Network, which includes titles published by the American Medical Association, prohibits listing artificial intelligence generators as authors and requires disclosure of their use. The family of journals produced by Science does not allow text, figures, images, or data generated by AI to be used without editors' permission. PLOS ONE requires anyone who uses AI to detail what tool they used, how they used it, and ways they evaluated the validity of the generated information. Nature has banned images and videos that are generated by AI, and it requires the use of language models to be disclosed. Many journals' policies make authors responsible for the validity of any information generated by AI.
Experts say there's a balance to strike in the academic world when using generative AI -- it could make the writing process more efficient and help researchers more clearly convey their findings. But the tech -- when used in many kinds of writing -- has also dropped fake references into its responses, made things up, and reiterated sexist and racist content from the internet, all of which would be problematic if included in published scientific writing. If researchers use these generated responses in their work without strict vetting or disclosure, they raise major credibility issues. Not disclosing use of AI would mean authors are passing off generative AI content as their own, which could be considered plagiarism. They could also potentially be spreading AI's hallucinations, or its uncanny ability to make things up and state them as fact.
Did they use a calculator too? (Score:1)
When I was in school that was considered cheating.
Re: Did they use a calculator too? (Score:4, Interesting)
Re: (Score:3)
This. Using AI to write a paper about novel research is impossible. It can improve wording, correct grammar, even help come up with draft text. That's all fine, what's the problem?
If authors submit a paper that contains crap, like nonexistent references? Then it's a crap paper, whether generated by AI or by a human. Reject it, done.
Agreed, I don't actually find generative AI that useful for writing. If I'm writing, I'm trying to say something specific, and if I want to say something specific I might as well write it myself instead of trying to coax it out of an LLM. The AI makes great-sounding filler, but unless you're a high school student desperately trying to hit a word count, why are you wasting words with filler?
Editing is another thing entirely, especially since a lot of researchers can't write at the level of a native English speaker.
Re: Did they use a calculator too? (Score:4, Insightful)
Reject it, done.
Oh FFS this is not how it works.
It takes time and effort to give a paper a fair shake, especially as many papers are written by non-native speakers. Papers are rarely rejected outright; instead they are given a number of suggestions for improvement and resubmission.
It is a very time consuming process.
The peer review system is already at near collapse, it could easily be tipped over the edge with a flood of crappy AI written papers.
Re: (Score:2)
When I was in school that was considered cheating.
I've often heard it said that a calculator is just a tool that will do you little good if you don't understand the underlying concept well enough to input the equation properly in the first place.
AI on the other hand, is literally asking a machine to do the work for you.
Re:Did they use a calculator too? (Score:4, Insightful)
Clearly stated - by someone who doesn't understand how these LLM AI models work, and hasn't used them enough to see how silly their statement is.
There is no technical task short enough that AI will not fuck it up. It can answer "What is X?" and "How is X different from Y?" fairly reliably, but it falls far short on complex ideas.
Anything beyond that... well, even including that in many cases... you'd better know what you're doing.
Case in point: I was looking for a quote which I could only paraphrase. I knew the meaning of the quote, but I wanted to offer attribution. Search engines weren't helping. ChatGPT was able to provide me the correct quote based on my paraphrase. It could not have done that if I didn't know how to paraphrase the quote.
Writing the paper is not the hard part. It's the gathering of the data and the research. ChatGPT is quite good, however, at taking a bunch of jotted notes, synopses, data points, etc., and helping you organize your ideas into a coherent form others can understand. It's still -your work-, in the same way that someone using grammar assistance in Word is still writing a paper.
Re: (Score:2)
I'm reminded of a line from The Carousel of Progress at Disney World:
"But we do have television, when it works."
Lots of new tech is rough around the edges in the beginning. Eventually these chatbots will reliably give correct answers, just as you don't see many TV repair shops around these days.
Re: (Score:1)
There is no technical task short enough that AI will not fuck it up.
My corollary to this: "There is no technical task short enough that a random human will not fuck it up." We've all met folk like that, after all...
Re: (Score:1)
Not the work that matters (Score:5, Interesting)
AI on the other hand, is literally asking a machine to do the work for you.
No, it is exactly like a calculator. If you tell a calculator to add two numbers, it does all the work for you, but, in a paper, nobody cares whether you did the calculation by hand or used a calculator; they only care that the result is correct. This is exactly the same for AI. I don't care whether an AI, a human assistant, or the author themselves wrote the words in the paper I am reading; I only care that the paper is an accurate and easy-to-understand description of what was done and the results.
The work that matters in a scientific paper is the experiment, study, calculation, etc. that the paper is reporting on. At least so far, AI is nowhere near being able to formulate and conduct novel and innovative scientific work, but if it can help improve the accurate and clear reporting of work that has been done, then that's great!
Re: (Score:2)
AI is nowhere near being able to formulate and conduct novel and innovative scientific work, but if it can help improve the accurate and clear reporting of work that has been done, then that's great!
AI is also capable of generating a massive crap flood of vaguely plausible-looking papers, and it will be pressed into service by desperate people in the awful mill of academia.
How do you think the already overstretched peer review system is going to cope with that?
Re: (Score:2)
Re: (Score:2)
AI is also capable of generating a massive crap flood of vaguely plausible-looking papers
Not without some effort on the submitter's part, because left to its own devices ChatGPT scores under 50% on a first-year undergrad physics exam, so no paper it writes will sound plausible without a lot of effort. If you have someone putting that much effort into attempting fraud, then you report them to their university and ban further submissions from them.
Re: (Score:2)
It doesn't do it well enough, alas.
I tried to get an AI to write my introduction on a paper. Got 12 paragraphs. I had to remove 8 of them from the start since they weren't useful at all, just vapid rambling. Of the four I retained, one turned out to be bogus (the dates were off), and two of them turned out to be meaningless once contemplated at any depth. The one paragraph left was too casual to be used, so I refactored that one into a single line, and added two citations to back it up.
In summation: 12 paragraphs in, one usable line out.
Re: (Score:1)
allowed to use slide rules but not calculators. (Score:2)
Don't forget the books of 4-place log tables, trig functions, etc.
Re: (Score:1)
Perfect tool for the job (Score:1, Funny)
After all, doesn't reality have a liberal bias? ChatGPT seems uniquely qualified to deliver this kind of content.
What ic computer? (Score:2)
Of course... (Score:2)
AI excels at two specific kinds of speech: corporate speech that doesn't mean jackshit, and scientific article language where no figure of speech is allowed.
I had to write both kinds during my career, and every time I did, I felt that I had to put my humanity aside to write "correctly".
Does it surprise anyone that AI will flourish when writing for both?
English... (Score:3)
It's really not at all surprising that researchers, after months or even years of stressful, intensive, uncertain work, would use any tools available to them to get the final part of a research project finished & ready for submission & peer-review. And they know they're gonna have to read that feedback from anonymous reviewer #2. What a dickhead!
I think what LLMs mean, at least for scientific publishing, is that peer-reviewers & editors will have to do their jobs all the more diligently to make sure that what's getting published is suitably high-quality & correct. Remember that those fake/parody papers got submitted & accepted into journals long before LLMs were ever a thing. In other words, I don't think LLMs are the root of the problem.
Psychology AI FTW (Score:2)
Psychology PhDs on the other hand...
Re: (Score:2)
The worst of all are probably sociologists. They try to make their papers and articles look like hard science by including formulas, but they never show how the formulas were derived, because they aren't derived at all. They just write down something that shows how they think various factors are related, without ever trying to find out if they're right.
Re: (Score:2)
Headlines, grants, tenureship in the All White Males Are Bastards Department and yet more woke nonsense promoted as fact.
No objection. (Score:4)
Fighting this change is pointless. What we should do is double down on shaming bad science. AI-assisted or not, the name on the paper is the one who should take the blame if it's wrong, unsupportable, or slipshod.
Author: "But... but... the AI did that! I didn't mean that!"
Community: "There, there... we understand. But... you'll need to wear this special cone-shaped hat for the next ten years or so... we're not angry. Just disappointed."
The problem is extra workload for reviewers (Score:5, Insightful)
An AI-generated reference list might contain real references that do not support the statements in the paper, or hallucinated references that do not exist at all. An overworked reviewer might not check every reference in a paper.
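To make that concrete, here is a minimal sketch of how a reviewer could machine-triage a reference list before reading, assuming the citations carry DOIs. It queries the public Crossref REST API; the function name, the (doi, cited_title) input format, and the crude title comparison are illustrative assumptions, not any journal's actual tooling.

import requests  # third-party HTTP client: pip install requests

CROSSREF = "https://api.crossref.org/works/"

def flag_suspect_references(refs):
    """Triage a bibliography given as (doi, cited_title) pairs.

    Returns (doi, reason) pairs worth a human look. A 404 from Crossref
    strongly suggests a hallucinated reference; a registered DOI whose
    title does not resemble the citation suggests a real paper being
    cited for something it may not say.
    """
    suspect = []
    for doi, cited_title in refs:
        try:
            resp = requests.get(CROSSREF + doi, timeout=10)
        except requests.RequestException:
            suspect.append((doi, "network error, recheck by hand"))
            continue
        if resp.status_code == 404:
            suspect.append((doi, "DOI not registered, likely hallucinated"))
            continue
        if resp.status_code != 200:
            suspect.append((doi, "Crossref error %d, recheck by hand" % resp.status_code))
            continue
        registered = (resp.json()["message"].get("title") or [""])[0]
        # Crude containment check; real tooling would use fuzzy matching.
        if cited_title.lower()[:40] not in registered.lower():
            suspect.append((doi, "title mismatch, registered as: " + registered))
    return suspect

A 404 catches fabricated DOIs cheaply, but a real reference cited in support of a claim it doesn't actually make still takes a human reader, which is exactly the workload problem above.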
An AI-generated summary might contain statements that are not supported by the rest of the paper, and again the overworked reviewer might not notice.
Reviewing AI-generated content would be like reviewing papers written by a dishonest researcher who sometimes just makes up material and tries to slip it into the paper in a way that won't be noticed. It's true that there are dishonest researchers, but they are rare - and sometimes the material they generate stays in the literature for a long time before it is caught.
Finally there is the tricky question of what to do if an AI-generated section of a paper contains plagiarized or fraudulent information. Normally, intentionally producing such material would be a career-ender, but if the researcher claims that they "just missed it", what do you do? The precedent set in the court system is very concerning: attorneys who presented a judge with fabricated information and lies created out of whole cloth to support their case ended up with only minor punishments because they claimed ignorance of how AI worked. Will the same be done for AI-generated fraudulent scientific papers?
Re: (Score:2)
Re: (Score:2)
There are still some potential issues with AI that would need to be addressed. One is using AI to generate a large number of versions of the same paper.
Small punishment THIS time (Score:2)
I'd like to hope that the attorney who made an idiot of himself with AI was treated mercifully, getting only a smack on the wrist this time, because it's a totally new area; if there's a repeat by anybody, it should result in a significant period of disbarment.
Re: (Score:2)
Re: (Score:1)
I am uncertain if this matters much. (Score:2)
I push the buttons and chat-bing-gpt-ai pumps out a paper.
I send it off for publication.
If it is worthwhile, great, we gained something.
If it is not worthwhile, someone must have disproven it, because that is by default the requirement for something to be deemed not worthwhile. Also great: another thing in the "never attempt again" category.
If your field cannot handle AI-generated text, your field sucks.
In other words: go try doing something like this in physics.
c.e.: yes, I oversimplify and handwave things.
The papers will probably be better (Score:3)
2) Maybe most papers will be actual science, i.e. reproducible [bbc.com].
Getting the data / sources right (Score:2)
If AI merely writes the paper using the data the scientist has and adds genuine references, it is doing nothing more than the scientist would have done. AI potentially adds massive value by referring to overlooked sources (make sure they're checked!). There is a serious risk that the AI will repeat quotes without crediting them - plagiarism - which needs to be checked for.
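As a toy illustration of that check, the sketch below flags verbatim reuse between a draft and one candidate source by intersecting word 8-grams, which is roughly the idea behind basic text-similarity checkers. The file names, the 8-word window, and treating any hit as suspect are illustrative assumptions; real plagiarism detection compares against large corpora with fuzzier matching.

def word_ngrams(text, n=8):
    # Lowercased word n-grams; 8+ identical words in a row are rarely coincidence.
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def shared_passages(draft, source, n=8):
    # Verbatim runs appearing in both texts; each is a passage to check
    # for a missing quotation mark and citation.
    return word_ngrams(draft, n) & word_ngrams(source, n)

# Hypothetical usage: compare a manuscript against one suspected source.
with open("draft.txt") as f, open("suspected_source.txt") as g:
    for passage in sorted(shared_passages(f.read(), g.read())):
        print("possible uncredited quote:", passage)

A long lifted quote shows up as a run of overlapping 8-grams rather than a single hit; merging adjacent hits is left out here for brevity.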
Overall, AI-generated articles seem to add value. However, there needs to be greater accountability for them from the authors, who need to stand behind everything the AI produces.
Who cares? (Score:2)
It'll stop eventually (Score:2)
When enough people cheat their way through school, there won't be anyone left who can develop AI.