AI-Generated Science
Published scientific papers include language that appears to have been generated by AI tools like ChatGPT, showing how pervasive the technology has become and highlighting longstanding issues with some peer-reviewed journals. From a report: Searching for the phrase "As of my last knowledge update" on Google Scholar, a free search tool that indexes articles published in academic journals, returns 115 results. ChatGPT often uses the phrase to indicate the cutoff date of the data behind its answers, and the specific months and years found in these academic papers correspond to previous ChatGPT "knowledge updates."
"As of my last knowledge update in September 2021, there is no widely accepted scientific correlation between quantum entanglement and longitudinal scalar waves," reads a paper titled "Quantum Entanglement: Examining its Nature and Implications" published in the "Journal of Material Sciences & Manfacturing [sic] Research," a publication that claims to be peer-reviewed. Over the weekend, a tweet showing the same AI-generated phrase appearing in several scientific papers went viral.
Most of the papers I looked at that included this phrase are small and not well known, and appear in what look like "paper mills": journals with low editorial standards that will publish almost anything quickly. One publication where I found the AI-generated phrase, the Open Access Research Journal of Engineering and Technology, advertises "low publication charges" and an "e-certificate" of publication, and is currently running a call for papers that promises acceptance within 48 hours and publication within four days.
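The kind of search the report describes can also be run locally over any corpus of paper texts. A minimal sketch, where the phrase list, the `find_telltales` helper, and the `.txt` file layout are my own assumptions for illustration, not anything from the report:

```python
# Scan a directory of plain-text papers for boilerplate phrases that
# chatbots commonly emit. Phrases and file layout are illustrative.
from pathlib import Path

TELLTALE_PHRASES = [
    "as of my last knowledge update",
    "as an ai language model",
    "i cannot fulfill that request",
]

def find_telltales(paper_dir):
    """Return (filename, phrase) pairs for every match in *.txt files."""
    hits = []
    for path in sorted(Path(paper_dir).glob("*.txt")):
        text = path.read_text(errors="ignore").lower()
        hits.extend((path.name, p) for p in TELLTALE_PHRASES if p in text)
    return hits
```

Google Scholar itself has no public API, so the report's actual search was done through the web interface; a local scan like this only works on texts you already have.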
Great, even more crappy papers... (Score:2)
Fortunately, I stopped reviewing papers a few years ago. Personally, I would rate such stuff an immediate "reject, write it yourself you lazy fuck". But I expect active reviewers will now get flooded with crappy papers that look good. A pretty bad development.
Don't do that (Score:2)
Fortunately, I stopped reviewing papers a few years ago. Personally, I would rate such stuff an immediate "reject, write it yourself you lazy fuck". But I expect active reviewers will now get flooded with crappy papers that look good. A pretty bad development.
Do that and you'll get slammed. Suppose you call someone out who *didn't* use AI to write their paper, or suppose you call out someone who used the AI as a crutch, rewriting the AI sentences in their own words.
That's a recipe for controversy, and your reputation will/might/could take a big hit.
Re:Don't do that (Score:5, Informative)
You do not seem to have experience reviewing papers. Obviously, the actual wording would need to be more polite, but that is it. Also, reviewers are anonymous, so there is no reputation hit. At worst, that particular venue will stop asking you to review.
Re: (Score:3)
I think the point is that you shouldn't reject the paper simply for how it's written, but for the value of its contents. If it's anything like the AI papers I've seen shared, the "science" they contain ranges from nonsense to garbage regardless of the LLM-like dialect. However, if someone uses AI to write the copy but the science is well done and the data supports their claims, I have no problem with it.
Re: (Score:3)
Obviously. But who will really use AI to write a paper that has good content? I can see somebody using DeepL or the like to translate it, but that is about it.
Just as a remark, I have reviewed horribly written papers (probably Chinese) with reasonable contributions as "accept with major revision of the language", so that the authors knew it was worthwhile investing in some help with the English writing.
Re: (Score:3)
Your comments have merit, but there are some caveats and flaws in them.
1 - What you say may be true in the context of a face-to-face discussion, but that is not how the review process works. Reviewers are anonymous. Furthermore, reviewers' comments and suggestions are passed along to the authors, who then have the opportunity to respond, clarify, edit, and amend their papers.
2 - As you said, "Suppose you call someone out who *didn't* use AI to write their paper ..." If a reviewer called me out for that, I would
Re: (Score:1)
Suppose you call someone out who *didn't* use AI to write their paper, or suppose you call out someone who used the AI as a crutch, rewriting the AI sentences in their own words.
In those cases, are they really qualified to write such a paper then?
Re: (Score:3)
Do that and you'll get slammed.
Never been on the end of the review process, eh?
Reviewers are notoriously rude! I've had everything from ESL speakers leaning in with criticism of my grammar (I'm a native speaker) and being wrong across the board, to reviewers not knowing some of the most major results in maths of the last 1000 years (and not even from the last 200 years) and refusing to accept my paper as a result.
It's almost akin to not accepting that pi has been proven irrational.
That's a recipe for controversy, and your reputation will/might/could take a big hit.
ML a leap of faith, not science (Score:3)
Fortunately, I stopped reviewing papers a few years ago. Personally, I would rate such stuff an immediate "reject, write it yourself you lazy fuck".
If the AI is improving grammar, conciseness, readability that's fine. Likely beneficial.
The AI is only a problem if it is doing science and cannot explain its work. If it can explain the formulas, algorithms, etc. just like human-based research, what is the problem? (*) Sure, if it's using an ML system that has to be trusted, well, that's a leap of faith and not science.
(*) Other than the current state of AI, where it seems lying and fabricating data is allowed.
Wolf or Wilderness? (Score:2)
Lol. Many probably understand the narrow scope of data that ML "comprehends" and is working with (if it was never written, it's not available). Comparing that type of data regurgitation to an idiot savant makes it abundantly clear how narrow that scope actually is.
Not at all. Take an ML model that differentiates between wolves and dogs. How does it do that? What algorithm does it use? We don't know. It works fine on the test images, which were gathered by the same people who gathered the training images. The paper claims great success.
When others try to replicate the results with test data they independently gathered it fails. Repeatedly, by multiple teams.
Post-mortem discovers that the training data primarily used pictures of wolves in the wild and pictures of dogs in domestic settings, so the model had mostly learned to recognize the background, not the animal.
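The failure mode above can be sketched in a few lines. Everything here (feature names, probabilities, the `make_data` helper) is invented purely for illustration, assuming only NumPy: a weak "real" feature plus a spurious "background" feature that correlates with the label only in how one team gathered its photos.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, snow_given_wolf):
    """Two features: a weak 'animal shape' signal and a 'snowy background' flag."""
    y = rng.integers(0, 2, n)                 # 1 = wolf, 0 = dog
    shape = y + rng.normal(0.0, 2.0, n)       # weak, noisy real signal
    p_snow = np.where(y == 1, snow_given_wolf, 1 - snow_given_wolf)
    snow = (rng.random(n) < p_snow).astype(float)
    return np.column_stack([shape, snow]), y

# Photos gathered by one team: wolves almost always photographed on snow.
X_tr, y_tr = make_data(2000, snow_given_wolf=0.95)
X_in, y_in = make_data(2000, snow_given_wolf=0.95)
# Independent replication: background no longer tracks the label.
X_out, y_out = make_data(2000, snow_given_wolf=0.5)

# Plain logistic regression by full-batch gradient descent (no ML library).
w = np.zeros(3)
Xb = np.column_stack([X_tr, np.ones(len(X_tr))])
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-Xb @ w))
    w -= 0.1 * Xb.T @ (p - y_tr) / len(y_tr)

def acc(X, y):
    z = np.column_stack([X, np.ones(len(X))]) @ w
    return ((z > 0) == (y == 1)).mean()

print(f"same-team test accuracy:  {acc(X_in, y_in):.2f}")
print(f"independent replication:  {acc(X_out, y_out):.2f}")
```

The model scores well on test images drawn the same way as the training set, then collapses toward chance on independently gathered data, exactly the replication failure described above.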
Re: (Score:2)
Shock of all shocks, you don't have a fucking clue how science works. You're way out of your depth here. Go waste someone else's time.
Grad school research on an AI topic says otherwise.
If you can't describe the "internal algorithm" that an ML model uses, then you are working on faith: faith that the training set resembles reality closely enough, as suggested by the percentage of test data that is handled properly. Faith and suggestions are not testable hypotheses; they are not science. Engineering can be practical; science needs to be rigorous.
Re: (Score:2)
If the AI is improving grammar, conciseness, readability that's fine. Likely beneficial.
Maybe the first. AI just writes bland, samey mush that gets very repetitive and incredibly dull. Thing is, scientific papers are meant to provide insight, which is something AI cannot do.
Re: (Score:2)
These are SUPPOSED to be peer reviewed
Check out this humdinger someone found on Twitter the other day:
https://www.sciencedirect.com/... [sciencedirect.com]
See the last paragraph before the conclusion:
That is what you get with "publish or perish" (Score:5, Insightful)
This kind of problem has many causes, but one of them is the pressure to publish articles continually, without any regard for the quality of what is being published, or for the fact that publishing something really well researched and significant takes time.
Require citation of AI derived work (Score:2)
Cheaters all around (Score:2)
"Scientists", "reviewers". We don't punish people hard enough, or at all, for lying, and it shows.
I'm gonna start using that. (Score:4, Funny)
When the boss asks why the thing that nobody could make a decision on isn't completed, I'll respond, "As of my last knowledge update, management consensus could not be communicated properly to the coding authority. Please update knowledge again when management consensus can be verified."
When the wife asks for supper ideas, "As of my last knowledge update, I was quite fond of most things you cook. Please do give us further knowledge updates in this field."
Mom asks why I don't come over so often, "As of my last knowledge update, visits with the parental unit of female biological standing are often negative overall experiences. Please provide avenues for positive outcomes in future inquiries on this topic."
It isn't AI generated science (Score:2)
It's fake, AI-generated papers masquerading as science. Get the wording right.
All these fake journals need to be peer reviewed out of existence, or at least out of any legitimate database.
Re: (Score:2)
A boon for trad media (Score:2)
GenAI will be a real boon for sensational news media outlets, which are always on the lookout for some "scientists say" basis to spread this or that social agenda. It's almost always junk science, or not even science at all but an opinion. This will only make that effect worse, further widening the divide between rational people and those driven by social consensus and emotion.
It's exhausting.
Not a problem if grammar/spell check (Score:2)
Published scientific papers include language that appears to have been generated by AI tools
Which is perfectly fine if the AI is being used to improve grammar, spelling, conciseness, readability etc.
It's only AI-generated content that is the problem; it's here that science can fail, because a leap of faith is necessary to assume the AI/ML black box did a legitimate computation. An AI/ML system must be able to show its work and its algorithms, just like a human researcher.
If it claims to have a better formula and can show how it was derived, great.
If it claims to do a better job at telling an
Re: (Score:2)
Your ignorance knows no bounds. I'm not sure how you think science works, but it's obvious to everyone that you don't have a clue how science works.
If you can't describe the "internal algorithm" that an ML model uses, then you are working on faith: faith that the training set resembles reality closely enough, as suggested by the percentage of test data that is handled properly. Faith and suggestions are not testable hypotheses; they are not science. Engineering can be practical; science needs to be rigorous.
how Rome fell (Score:1)
Analogy to the future of science degrees:
"Joe and Frito are walking through a Costco, much bigger than what was in Joe's time
Greeter: Hi, welcome to Costco. I love you.
Frito: Yeah, I know this place pretty good. I went to law school here.
Joe: You went to law school at Costco?
Frito: So did my father. Thank God for being a legacy, or else I might not have gotten in."
Need I say anything about Harvard recently? And Caltech dropped use of SATs and the admissions director pledged to bring in 50% women and forei
Anyone who submits AI generated papers... (Score:2)
...should be banned from publication for life
AI watermarking or other proof of origin is essential and needs to be a top priority
aka Junk Science. (Score:2)
If the scientific community isn't careful, it might become a laughingstock of white noise. Whoops, too late.