
All Science Journals Will Now Do an AI-Powered Check for Image Fraud (arstechnica.com) 16

The research publisher Science announced today that all of its journals will begin using commercial software that automates the process of detecting improperly manipulated images. From a report: The move comes many years into our awareness that the transition to digital data and publishing has made it comically easy to commit research fraud by altering images. While the move is a significant first step, it's important to recognize the software's limitations. While it will catch some of the most egregious cases of image manipulation, enterprising fraudsters can easily avoid being caught if they know how the software operates. Which, unfortunately, we feel compelled to describe (and, to be fair, the company that has developed the software does so on its website).

Much of the image-based fraud we've seen arises from a dilemma faced by many scientists: It's not a problem to run experiments, but the data they generate often isn't the data you want. Maybe only the controls work, or maybe the experiments produce data that is indistinguishable from controls. For the unethical, this doesn't pose a problem since nobody other than you knows what images come from which samples. It's relatively simple to present images of real data as something they're not. To make this concrete, we can look at data from a procedure called a western blot, which uses antibodies to identify specific proteins from a complex mixture that has been separated according to protein size. Typical western blot data looks like the image at right, with the darkness of the bands representing proteins that are present at different levels in different conditions.
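The commercial tool's internals aren't public, but a classic image-forensics check for the kind of reuse described above is looking for near-identical regions, e.g. the same western blot band pasted into two "different" lanes. A minimal sketch of that idea using normalized cross-correlation; the function names and threshold are made up for illustration, not taken from the actual product:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equal-sized grayscale patches.

    Returns 1.0 for identical patches, values near 0 for unrelated ones.
    """
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def flag_duplicates(patches, threshold=0.98):
    """Return index pairs of patches that are near-identical (possible reuse)."""
    hits = []
    for i in range(len(patches)):
        for j in range(i + 1, len(patches)):
            if ncc(patches[i], patches[j]) >= threshold:
                hits.append((i, j))
    return hits
```

A real screening tool would compare patches across an entire figure (and across papers) after compensating for rotation, rescaling, and contrast tweaks, which is also why a fraudster who knows the pipeline can evade it by distorting the copied region enough to drop below the matching threshold.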

This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward

    Given recent scandals, they might wanna check for plagiarism too. Apparently you can end up in charge of a major university with nobody bothering to check that you've plagiarized basically everything you've written. Though I do wonder how it should count if you plagiarize a paper but change the data to just be wrong.

    • Though I do wonder how it should count if you plagiarize a paper but change the data to just be wrong.

      LLM generated, then.

    • Nearly all publishers adhere to Crossref and can therefore run the Similarity Check during submission: https://www.crossref.org/servi... [crossref.org]. I think the scandals you mention were in universities (master's or PhD theses) rather than in journal papers.

    • Didn't Germany have a rash of this a while back? Politicians resigning when plagiarism was found in their PhDs?
    • Apparently you can end up in charge of a major university with nobody bothering to check that you've plagiarized basically everything you've written

      Or plagiarize your dissertation [businessinsider.com] to become a professor at MIT while being married to a socialist who wants the government to once again bail out banks if necessary [marketwatch.com].

    • From the few examples I saw, Claudine Gay did not really plagiarize the way a lot of people think. I didn't see examples of her taking someone else's ideas or discoveries and framing them as her own. She lazily modified some common language to describe certain results. The main body of work was not plagiarized... at least not in what I saw. Has anyone done an analysis of other PhD theses to see if they too borrow similar wordings and phrasings? All the comparisons I saw were ways to describe some results.

  • It would be very interesting if they checked their existing publications for possible past fraud. Ideally someone would run it on multiple or all journal publishers, but I understand it can't be the AAAS (owner of the Science journals) doing it on other journals, to avoid libel accusations and tainting its reputation even if it would be in the right in court.

  • by smooth wombat ( 796938 ) on Thursday January 04, 2024 @04:47PM (#64132637) Journal
    Should be used on the political side to call out the liars who deliberately use someone else's pictures [thehill.com] and lie about it.
  • So instead of manipulating images, they'll just use AI-generated ones, and this tool will be useless to detect that. It might be obvious to a human observer when an agar plate has 6 fingers instead of none, but it won't show as having been manipulated from the original.

    • It's very much like drug testing in pro sports - they have to do what they can do to stop the most obvious abuses. And somebody who cheats has to defeat not only the current tests, but be willing to risk getting caught by new tests that might be developed down the road and applied retroactively.

      Unlike text, which can plausibly be anything that sounds reasonably OK, images of natural processes have structure that is not arbitrary, so it's not as hopeless as detecting AI-generated prose.

  • The title is a bit amusing, because it sounds like it's about all "science" journals rather than all Science journals until one reads it. I'm reminded of how a few years ago there was a spammy vanity-level journal called "Science and Nature," and someone pointed out that one could publish in it and then say "I've been published in Science and Nature."
  • That sounds like it will be a great success ...

  • Problems:
    1. They won't be able to catch AI-generated images, because there is no "hi, I'm an AI-generated image" watermark produced by all image models.
    2. Suspicion isn't proof of anything. Similarly, there are no watermarks in an image to prove it was definitively modified, only that a human should review it and challenge its provenance. Image and video editing software exists that doesn't produce MICs or respond to Omron patterns. Hell, I can add EXIF data to media to say anything I want.

    Perhaps it would b

  • It will only work until the next iteration of AI comes along.

    Basically, the nature of these models is that they learn from adversaries. If you have a "fake image detector," it is actually the teacher for a "fake image generator." (And that generator in turn teaches a better detector, and so the cycle continues.)

    So you're only protected as long as you subscribe to the latest and best models, and only for a while.

    (Though as others suggested, if this is done retroactively, then things change).
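    The adversarial dynamic described above can be sketched with a toy example (not any real model): a trivial "detector" statistic, and a "generator" update that uses the detector's own verdict as feedback until it evades detection. All names, thresholds, and the statistic itself are invented for illustration:

```python
import numpy as np

def detector(img, real_mean, tol=0.05):
    """Toy detector: flags an image whose pixel mean deviates from real data."""
    return abs(img.mean() - real_mean) > tol  # True means "looks fake"

def adapt_fake(img, real_mean, steps=100, lr=0.1):
    """Toy generator update: repeatedly queries the detector and nudges the
    fake image toward passing, i.e. the detector trains its own adversary."""
    img = img.astype(float).copy()
    for _ in range(steps):
        if not detector(img, real_mean):
            break  # detector no longer flags the image
        img += lr * (real_mean - img.mean())  # move the statistic toward "real"
    return img
```

    Real GAN-style training does the same thing with gradients instead of a hand-written nudge, which is the commenter's point: publishing (or selling access to) a detector hands every would-be fraudster a training signal.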
