Researchers Demonstrate Two-Track Algorithm For Detecting Deepfakes (ieee.org) 41
An anonymous reader quotes a report from IEEE Spectrum: Researchers have demonstrated a new algorithm for detecting so-called deepfake images -- those altered imperceptibly by AI systems, potentially for nefarious purposes. Initial tests of the algorithm picked out phony from undoctored images down to the individual pixel level with between 71 and 95 percent accuracy, depending on the sample data set used. The algorithm has not yet been expanded to include the detection of deepfake videos.
One component of the algorithm is a variant of a so-called "recurrent neural network," which splits the image in question into small patches and looks at those patches pixel by pixel. The neural network has been trained by letting it examine thousands of both deepfake and genuine images, so it has learned some of the qualities that make fakes stand out at the single-pixel level. Another portion of the algorithm, on a parallel track to the part looking at single pixels, passes the whole image through a series of encoding filters -- almost as if it were performing an image compression, as when you click the "compress image" box when saving a TIFF or a JPEG. These filters, in a mathematical sense, enable the algorithm to consider the entire image at larger, more holistic levels. The algorithm then compares the output of the pixel-by-pixel analysis with that of the higher-level encoding filters. When these parallel analyses trigger red flags over the same region of an image, it is tagged as a possible deepfake. The algorithm is described in a recent issue of IEEE Transactions on Image Processing.
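For the curious, the described two-track design maps onto a straightforward two-branch network. Below is a minimal PyTorch sketch of the idea; every module name, layer size, and the row-as-sequence trick for the recurrent pass are illustrative assumptions, not the authors' actual architecture.

import torch
import torch.nn as nn

class TwoTrackDetector(nn.Module):
    """Sketch of the two-track idea from the summary: one branch
    scores the image pixel by pixel with a recurrent pass, the other
    scores the whole image through compression-like encoding filters,
    and only regions flagged by BOTH branches are marked suspicious.
    All names and sizes here are assumptions, not the paper's model."""

    def __init__(self):
        super().__init__()
        # Pixel-level track: a conv stem feeding a recurrent pass.
        self.stem = nn.Conv2d(3, 16, kernel_size=3, padding=1)
        self.rnn = nn.GRU(input_size=16, hidden_size=16, batch_first=True)
        self.pixel_head = nn.Conv2d(16, 1, kernel_size=1)
        # Holistic track: strided "encoding filters," loosely analogous
        # to the compression-style transforms the article describes.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1),
        )

    def forward(self, x):  # x: (B, 3, H, W), values in [0, 1]
        b, _, h, w = x.shape
        f = torch.relu(self.stem(x))
        # Treat each row of pixels as a sequence for the recurrent pass.
        seq = f.permute(0, 2, 3, 1).reshape(b * h, w, 16)
        out, _ = self.rnn(seq)
        f = out.reshape(b, h, w, 16).permute(0, 3, 1, 2)
        pixel_map = torch.sigmoid(self.pixel_head(f))      # (B, 1, H, W)
        coarse = torch.sigmoid(self.encoder(x))            # low-res map
        holistic_map = nn.functional.interpolate(
            coarse, size=(h, w), mode="bilinear", align_corners=False)
        # Flag a region only when both tracks raise it above threshold.
        return (pixel_map > 0.5) & (holistic_map > 0.5)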
So? (Score:2, Informative)
So this thing basically helps me train my model to generate indistinguishable images. Good jorb.
Re:So? (Score:5, Funny)
Obligatory XKCD: https://xkcd.com/810/ [xkcd.com]
What if ... (Score:2)
Don't. Wake. Them. Up.
Neat But Not For Long (Score:3)
That's neat, but in the long term the advantage belongs to the fakers. As with computer security, the attackers only need to fool the algorithm once in a while, and they will inevitably get a copy of it or otherwise evaluate their techniques against it.
However, it's just not that big of a deal. We managed for thousands of years without photography, relying on authority and trust instead.
Re: Neat But Not For Long (Score:2)
We were very few back then. Do you think that would also work with the billions of us alive today?
Works for now but.... (Score:3)
....this is just the first step. These detected features can be passed into a deep-learning generative adversarial network (GAN) to automatically remove them from the images, making harder-to-detect deepfakes available in 5....4.....3....2....1.....
Re:Works for now but.... (Score:4, Insightful)
Yeah, this is a bit silly. DeepFake is trained against another neural network whose job is to detect fake images. This algorithm just gives it another critic to learn to fool, making the fakes even better.
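Concretely, "another critic" drops straight into the GAN training loop. A hedged sketch, where D_forensic stands in for a frozen copy of a published detector; every name here is hypothetical:

import torch

def generator_step(G, D_gan, D_forensic, z, opt):
    """One generator update with two critics: the usual GAN
    discriminator D_gan plus a frozen forensic detector D_forensic.
    Both are assumed to return the probability their input is fake.
    A sketch of the parent comment's point, not the paper's method."""
    opt.zero_grad()
    fake = G(z)
    # The generator is rewarded for driving BOTH fake-scores down.
    loss = D_gan(fake).mean() + D_forensic(fake).mean()
    loss.backward()
    opt.step()
    return float(loss)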
Re:Works for now but.... (Score:5, Interesting)
Nine times out of ten, the word "just" precedes depowering the cerebral cortex before reaching a second pertinent conclusion.
This feedback loop escalates the commitment required to stay in the game. Dilettantes will soon be shoved aside. It might even be the case that the computational intensity required to foil all the critics becomes quite high and so the cost of making a quality fake becomes non-trivial. This is how the counterfeit system has long worked. It's not that the forgers have lost the race, but forgers with the necessary skills to produce a convincing fake are few and far between, and even when you have the skills, the process isn't cheap. Hence fake paper stays at a dull roar.
Any procedure you can formally define automatically spells out its own fatal weakness. This circularity is old hat.
Gödel, Escher, Bach [wikipedia.org] — 1979
But interestingly, Hofstadter's cleverness in these matters very nearly made his own book translation-proof.
So it wasn't translation-proof after all, if you were prepared to play professional hardball. Hofstadter's text, however, ignores the serious brow sweat that went into fabricating I Cannot Be Played on Record Player X (Paul Erdős was never quite the same afterwards).
For some reason it has always been the done thing when discussing logic to ignore money.
Re: (Score:3)
Any procedure you can formally define automatically spells out its own fatal weakness. This circularity is old hat.
This would apply most to the deepfake detector, which is (hand) optimized to find a number of specific flaws (e.g. excessive smoothing at the fake/real boundaries). Once you have access to the deepfake detector, the generator network is free to explore any method to work around the detector. This is the easier job.
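One standard way the generator side can "work around the detector" once it has gradient access is a plain FGSM-style adversarial perturbation. This is textbook adversarial-examples machinery, not anything from the paper; a sketch under that assumption:

import torch

def evade_detector(detector, image, eps=2/255):
    """FGSM-style evasion: nudge every pixel one tiny step in the
    direction that lowers the detector's fake score. `detector` is
    assumed to map an image tensor in [0, 1] to a fake-probability.
    Illustrative sketch only."""
    x = image.clone().requires_grad_(True)
    fake_score = detector(x).mean()
    fake_score.backward()
    # Step against the gradient, then clamp back to a valid image.
    return (x - eps * x.grad.sign()).clamp(0.0, 1.0).detach()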
Use case (Score:3)
This is how the counterfeit system has long worked. It's not that the forgers have lost the race, but forgers with the necessary skills to produce a convincing fake are few and far between, and even when you have the skills, the process isn't cheap.
Counterfeiting is a bad example in this context. Most people making counterfeits are doing it to earn money, so nearly all of them want the least detectable counterfeit possible.
This feedback loop escalates the commitment required to stay in the game. Dilettantes will soon be shoved aside. It might even be the case that the computational intensity required to foil all the critics becomes quite high and so the cost of making a quality fake becomes non-trivial.
Most of the people who are into deepfakes are doing it for entertainment [youtube.com] [youtube.com]. (Completely SFW, for once. Not everybody does deepfakes for porn.)
They are not interested in making the most perfectly undetectable fake; they just want something convincing enough to land their joke.
Only a small fraction of deepfakers are interested in defeating detection at all.
Certificate EXPIRED!! (Score:1)
Dammit, it takes Internet Explorer to log into this site! That's what happens when your Let's Encrypt SSL cert EXPIRED TODAY. Fucking hell, SMH.
"depending on the sample data set used" (Score:2)
Am I the only one who is a little skeptical of a detection rate of "between 71 and 95 percent accuracy, depending on the sample data set used"?
For all I know, it's possible the algorithm has only gotten very good at detecting the batch of photos it was trained on. Use it on a future set of "new internet hoax of the day" images depicting things that have never happened before, and the results could be much more doubtful.
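The usual sanity check for exactly this worry is cross-dataset evaluation: fit on one corpus, then score on corpora the model never saw. A rough sketch, assuming a generic scikit-learn-style classifier and hypothetical dataset objects with .X/.y fields:

from sklearn.metrics import accuracy_score

def cross_dataset_eval(model, train_ds, held_out):
    """Fit on one corpus, then score each unseen corpus separately.
    A large gap between in-distribution and cross-dataset accuracy
    is the overfitting the parent comment worries about. `model` is
    any scikit-learn-style estimator; datasets are hypothetical."""
    model.fit(train_ds.X, train_ds.y)
    scores = {"in-distribution":
              accuracy_score(train_ds.y, model.predict(train_ds.X))}
    for name, ds in held_out.items():
        scores[name] = accuracy_score(ds.y, model.predict(ds.X))
    return scores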
Re: (Score:2)
The detection rate would also depend on the quality of the input data. Given a 4K-resolution image to work with, it's easier to detect tampering than in a low-resolution version of the exact same data. To quote the article:
Another portion of the algorithm, on a parallel track to the part looking at single pixels, passes the whole image through a series of encoding filters—almost as if it were performing an image compression, as when you click the “compress image” box when saving a TIFF or a JPEG.
Feeding the original image through a low quality JPEG compressor, and then decoding it again, should already make it more challenging to detect.
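That laundering step is nearly a one-liner with Pillow; the quality setting below is an arbitrary assumption:

from io import BytesIO
from PIL import Image

def jpeg_launder(path, quality=30):
    """Re-encode an image at low JPEG quality and decode it again,
    smearing the pixel-level statistics a forensic detector keys on.
    A sketch of the parent comment's point, not the paper's method."""
    buf = BytesIO()
    Image.open(path).convert("RGB").save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf)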
We are screwed ... (Score:2, Interesting)
So, here's the problem with this ... eventually nobody believes anything they don't already believe.
They're not going to care that
Researchers use basic logic (Score:1)
We've already got higher-quality deepfakes that fool detectors like this, so their research can't be trusted against anything beyond amateur deepfake attempts.
Using this to improve on deep fakes? (Score:1)
So basically (Score:2)
Generative Adversarial Network (GAN) (Score:2)
A simple GAN architecture built into your AI DeepFake software will easily get around this as soon as the analytical engine in this study is publicly known and included. The new modified GAN will then use this same analytical method to train itself to defeat that very analysis, improving itself for each and every image frame you produce. Any DeepFake detection algorithm suffers from the same problem as anti-virus software: once you have a sample that slipped past the protection, the detector is already obsolete.
Statistical Range (Score:1)
So, the results land somewhere between "barely better than a coin flip" and "still not accurate enough to use when things matter."
Imperceptibly? (Score:3)
If the alteration is imperceptible then you haven't changed anything from the perspective of the viewer. That's useful for steganography, but otherwise useless — the purpose of altering a video is specifically to change what is perceived when it is viewed.
*sigh* (Score:2)
This will only improve the quality of deepfakes