Researchers Demonstrate Two-Track Algorithm For Detecting Deepfakes (ieee.org)

An anonymous reader quotes a report from IEEE Spectrum: Researchers have demonstrated a new algorithm for detecting so-called deepfake images -- those altered imperceptibly by AI systems, potentially for nefarious purposes. Initial tests of the algorithm picked out phony from undoctored images down to the individual pixel level with between 71 and 95 percent accuracy, depending on the sample data set used. The algorithm has not yet been expanded to include the detection of deepfake videos.

One component of the algorithm is a variety of a so-called "recurrent neural network," which splits the image in question into small patches and looks at those patches pixel by pixel. The neural network has been trained by letting it examine thousands of both deepfake and genuine images, so it has learned some of the qualities that make fakes stand out at the single-pixel level. Another portion of the algorithm, on a parallel track to the part looking at single pixels, passes the whole image through a series of encoding filters -- almost as if it were performing an image compression, as when you click the "compress image" box when saving a TIFF or a JPEG. These filters, in a mathematical sense, enable the algorithm to consider the entire image at larger, more holistic levels. The algorithm then compares the output of the pixel-by-pixel and higher-level encoding filter analyses. When these parallel analyses trigger red flags over the same region of an image, it is then tagged as a possible deepfake.
The deepfake-detecting algorithm is described in a recent paper in IEEE Transactions on Image Processing.
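
The paper's exact architecture isn't reproduced in the article, so the following is only a minimal sketch of the two-track idea: one branch scores the image at pixel level, a parallel encoder branch scores it holistically, and a region is tagged only where both tracks agree. All layer sizes and the 0.5 thresholds are illustrative assumptions, and a plain convolutional stand-in replaces the paper's recurrent patch analysis to keep the sketch short.

import torch
import torch.nn as nn

class TwoTrackDetector(nn.Module):
    def __init__(self):
        super().__init__()
        # Track 1: local evidence from small receptive fields, standing in
        # for the patch-by-patch, pixel-level analysis.
        self.patch_track = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=3, padding=1),
        )
        # Track 2: holistic evidence via downsampling "encoding filters"
        # and a decoder mapping the coarse code back to pixel scores.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, x):
        local_map = torch.sigmoid(self.patch_track(x))      # per-pixel scores
        holistic_map = torch.sigmoid(self.decoder(self.encoder(x)))
        # Tag a pixel only when BOTH tracks raise a red flag there.
        return (local_map > 0.5) & (holistic_map > 0.5)

detector = TwoTrackDetector()
image = torch.rand(1, 3, 64, 64)   # stand-in for a suspect image
mask = detector(image)             # boolean map of flagged pixels
print(mask.shape, int(mask.sum()), "pixels flagged")

The key design point is the final intersection: either track alone will produce false alarms, but a region is only reported when the local and holistic evidence coincide.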
Comments Filter:
  • So? (Score:2, Informative)

    by Anonymous Coward

    So this thing basically helps me train my model to generate indistinguishable images. Good jorb.

  • What if it determines that all of our current universe is a Deep Fake? Except for maybe one person.

    Don't. Wake. Them. Up.
  • That's neat, but in the long term the advantage belongs to the fakers. As with computer security, the attackers only need to fool the algorithm once in a while, and they will inevitably manage to get a copy of it or otherwise evaluate their techniques against it.

    However, it's just not that big of a deal. We managed for thousands of years without photography and just used authority and trust.

  • by craighansen ( 744648 ) on Tuesday July 30, 2019 @10:52PM (#59015684) Journal

    ....this is just the first step. These detected features can be fed into a deep-learning generative adversarial network (GAN) to automatically remove them from the images, making harder-to-detect deepfakes available in 5....4.....3....2....1.....

    • by ceoyoyo ( 59147 ) on Tuesday July 30, 2019 @11:43PM (#59015762)

      Yeah, this is a bit silly. DeepFake is trained against another neural network whose job is to detect fake images. This algorithm just gives it another critic to learn to fool, making the fakes even better.
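
      A minimal sketch of that feedback loop, assuming the published detector were available as a differentiable module. Everything below (shapes, networks, training loop) is a toy stand-in, not DeepFake's actual code: the generator's loss simply adds the extra critic's "looks fake" score, so gradient descent pushes the generator to fool both critics at once.

      import torch
      import torch.nn as nn

      G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 3 * 8 * 8))
      D = nn.Sequential(nn.Linear(3 * 8 * 8, 1))                   # usual discriminator
      published_detector = nn.Sequential(nn.Linear(3 * 8 * 8, 1))  # the new extra critic
      opt = torch.optim.Adam(G.parameters(), lr=1e-3)

      for step in range(100):
          z = torch.randn(32, 16)        # random seed for each batch of fakes
          fake = G(z)
          # The generator wants BOTH critics to emit a high "real" logit.
          loss = -(D(fake).mean() + published_detector(fake).mean())
          opt.zero_grad()
          loss.backward()
          opt.step()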

      • by epine ( 68316 ) on Wednesday July 31, 2019 @12:20AM (#59015816)

        This algorithm just gives it another critic to learn to fool, making the fakes even better.

        Nine times out of ten, the word "just" precedes depowering the cerebral cortex before reaching a second pertinent conclusion.

        This feedback loop escalates the commitment required to stay in the game. Dilettantes will soon be shoved aside. It might even be the case that the computational intensity required to foil all the critics becomes quite high and so the cost of making a quality fake becomes non-trivial. This is how the counterfeit system has long worked. It's not that the forgers have lost the race, but forgers with the necessary skills to produce a convincing fake are few and far between, and even when you have the skills, the process isn't cheap. Hence fake paper stays at a dull roar.

        Any procedure you can formally define automatically spells out its own fatal weakness. This circularity is old hat.

        Gödel, Escher, Bach [wikipedia.org] — 1979

        A phonograph dubbed "Record Player X" destroys itself by playing a record titled I Cannot Be Played on Record Player X (an analogy to Gödel's incompleteness theorems).

        But interestingly, Hofstadter's cleverness in these matters very nearly made his own book translation-proof.

        Because of other troubles translators might have retaining meaning, Hofstadter "painstakingly went through every sentence of Gödel, Escher, Bach, annotating a copy for translators into any language that might be targeted."

        So it wasn't translation-proof after all, if you were prepared to play professional hardball. Hofstadter's text, however, ignores the serious brow sweat that went into fabricating I Cannot Be Played on Record Player X (Paul Erdős was never quite the same afterwards).

        For some reason it has always been the done thing when discussing logic to ignore money.

        • Any procedure you can formally define automatically spells out its own fatal weakness. This circularity is old hat.

          This would apply most to the deepfake detector, which is (hand) optimized to find a number of specific flaws (e.g. excessive smoothing at the fake/real boundaries). Once you have access to the deepfake detector, the generator network is free to explore any method to work around the detector. This is the easier job.

        • This is how the counterfeit system has long worked. It's not that the forgers have lost the race, but forgers with the necessary skills to produce a convincing fake are few and far between, and even when you have the skills, the process isn't cheap.

          Counterfeiting is a bad example in this context. Most people making counterfeits do it to earn money, so nearly all of them want the least detectable counterfeit possible.

          This feedback loop escalates the commitment required to stay in the game. Dilettantes will soon be shoved aside. It might even be the case that the computational intensity required to foil all the critics becomes quite high and so the cost of making a quality fake becomes non-trivial.

          Most of the people who are into deepfakes are doing it for [youtube.com] entertainment [youtube.com]. (Completely SFW, for once. Not everybody does deepfakes for porn.)
          They are not interested in making the most perfectly undetectable fake; they just want one convincing enough to land their joke.

          Only a small fraction of deepfakers are i

  • by Anonymous Coward

    Dammit, it takes Internet Explorer to log into this site! That's what happens when your Let's Encrypt SSL cert EXPIRED TODAY. Fucking hell SMH

  • Am I the only one who is a little skeptical of the detection rate "between 71 and 95 percent accuracy, depending on the sample data set used"?

    For all I know, it's possible the algorithm has simply gotten very good at detecting the batch of photos it was trained on. Use it to test a future set of "new internet hoax of the day" images, depicting things that have never happened before, and the results could be much more doubtful.
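
    To make that worry concrete, here is a deliberately dumb strawman, invented purely for illustration and nothing like the paper's method: a "detector" that merely memorizes its training set scores perfectly on images it has seen and at zero on a new batch.

    train_fakes = {f"train_fake_{i}" for i in range(500)}

    def memorizing_detector(image_id):
        # Flags an image as fake only if it literally appeared in training.
        return image_id in train_fakes

    in_dist = [(x, True) for x in train_fakes]                    # seen fakes
    novel = [(f"hoax_of_the_day_{i}", True) for i in range(500)]  # unseen fakes

    for name, dataset in (("in-distribution", in_dist), ("novel", novel)):
        acc = sum(memorizing_detector(x) == y for x, y in dataset) / len(dataset)
        print(f"{name} accuracy: {acc:.0%}")

    A real network won't fail this completely, but any gap between in-distribution and cross-dataset accuracy is exactly what the 71-to-95-percent spread hints at.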

    • The detection rate would also depend on the quality of the input data. If you get a 4k resolution image to work with, it's easier to detect tampering compared to a low resolution version of the exact same data. To quote the article:

      Another portion of the algorithm, on a parallel track to the part looking at single pixels, passes the whole image through a series of encoding filters—almost as if it were performing an image compression, as when you click the “compress image” box when saving a TIFF or a JPEG.

      Feeding the original image through a low quality JPEG compressor, and then decoding it again, should already make it more challenging to detect.
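
      A quick sketch of that laundering step using Pillow; the generated noise image is just a runnable stand-in for a suspect photo:

      from io import BytesIO
      from PIL import Image

      # Stand-in input; in practice you would open the deepfake itself.
      img = Image.effect_noise((256, 256), 64).convert("RGB")

      buf = BytesIO()
      img.save(buf, format="JPEG", quality=20)  # low quality = heavy, lossy re-encoding
      buf.seek(0)
      Image.open(buf).save("laundered.jpg")     # the recompressed copy to test against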

  • We are screwed ... (Score:2, Interesting)

    by Anonymous Coward

    One component of the algorithm is a variety of a so-called "recurrent neural network," which splits the image in question into small patches and looks at those patches pixel by pixel. The neural network has been trained by letting it examine thousands of both deepfake and genuine images, so it has learned some of the qualities that make fakes stand out at the single-pixel level.

    So, here's the problem with this ... eventually nobody believes anything they don't already believe.

    They're not going to care that

  • We've already got higher-quality deepfakes that fool detectors like this, so their research isn't to be trusted except against amateur deepfake attempts.

  • It appears to me that this deepfake-detecting neural network could itself be used as the adversary in an adversarial setup to improve deepfake generators.
  • It can tell a photoshop because it has seen quite a few of them in its time and it can tell by the pixels... You can't make this shit up.
  • A simple GAN architecture built into your AI DeepFake software will easily get around this, as soon as the analytical engine in this study is publicly known and included. The new, modified GAN will then use this same analytical method to train itself to defeat this very analysis, and continue improving itself for each and every image frame you produce. Any DeepFake detection algorithm suffers from the same problem as anti-virus software. Once you have a sample that slipped by the pr

  • So, the results are basically somewhere between "barely better than a coin flip" and "still not accurate enough to use when things matter."

  • by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Wednesday July 31, 2019 @11:19AM (#59017590) Homepage Journal

    If the alteration is imperceptible then you haven't changed anything from the perspective of the viewer. That's useful for steganography, but otherwise useless — the purpose of altering a video is specifically to change what is perceived when it is viewed.

  • This will only improve the quality of deepfakes.
