The Subtle Effects of Blood Circulation Can Be Used To Detect Deepfakes (ieee.org)
An anonymous reader quotes a report from IEEE Spectrum: This work, done by two researchers at Binghamton University (Umur Aybars Ciftci and Lijun Yin) and one at Intel (Ilke Demir), was published in IEEE Transactions on Pattern Analysis and Machine Intelligence this past July. In an article titled "FakeCatcher: Detection of Synthetic Portrait Videos using Biological Signals," the authors describe software they created that takes advantage of the fact that real videos of people contain physiological signals that are not visible to the eye. In particular, video of a person's face contains subtle shifts in color that result from pulses in blood circulation. You might imagine that these changes would be too minute to detect merely from a video, but viewing videos that have been enhanced to exaggerate these color shifts will quickly disabuse you of that notion. This phenomenon forms the basis of a technique called photoplethysmography, or PPG for short, which can be used, for example, to monitor newborns without having to attach anything to their very sensitive skin.
Deep fakes don't lack such circulation-induced shifts in color, but they don't recreate them with high fidelity. The researchers at SUNY and Intel found that "biological signals are not coherently preserved in different synthetic facial parts" and that "synthetic content does not contain frames with stable PPG." Translation: Deep fakes can't convincingly mimic how your pulse shows up in your face. The inconsistencies in PPG signals found in deep fakes provided these researchers with the basis for a deep-learning system of their own, dubbed FakeCatcher, which can categorize videos of a person's face as either real or fake with greater than 90 percent accuracy. And these same three researchers followed this study with another demonstrating that this approach can be applied not only to revealing that a video is fake, but also to show what software was used to create it. In a newer paper (PDF), researchers showed that they "can distinguish with greater than 90 percent accuracy whether the video was real, or which of four different deep-fake generators was used to create a bogus video," the report adds.
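The PPG signal the summary describes is simple to sketch in code: average the green channel of a face crop frame by frame, then look for the dominant spectral peak in a plausible heart-rate band. A minimal numpy sketch on synthetic frames (the function name, the 0.7-4 Hz band, and the toy video are my own illustration, not the paper's actual pipeline):

```python
import numpy as np

FPS = 30

def estimate_pulse_hz(frames, fps=FPS):
    """frames: (T, H, W, 3) array, a video of a face crop.
    Returns the dominant pulse frequency (Hz) of the mean green channel."""
    g = frames[..., 1].reshape(frames.shape[0], -1).mean(axis=1)
    g = g - g.mean()                          # remove the DC component
    spec = np.abs(np.fft.rfft(g))
    freqs = np.fft.rfftfreq(len(g), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)    # roughly 42-240 bpm
    return freqs[band][np.argmax(spec[band])]

# Synthetic check: a 1.2 Hz (72 bpm) pulse riding on the green channel.
t = np.arange(300) / FPS
frames = np.full((300, 8, 8, 3), 128.0)
frames[..., 1] += 0.5 * np.sin(2 * np.pi * 1.2 * t)[:, None, None]
print(estimate_pulse_hz(frames))  # ~1.2
```

On real footage you would first need a face detector to produce the crop, and FakeCatcher goes much further than this, comparing how coherently the PPG signal is preserved across different facial regions.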
Comment removed (Score:5, Insightful)
Re: (Score:2)
Re: (Score:2)
Indeed. It is an arms race, and the only side that can win is the fake-makers.
My GAN thanks you (Score:2)
As the discriminator I say thank you, this will finally defeat that pesky Generator.
As the Generator, I say thanks for providing new features to train for.
Re: (Score:1)
Re: (Score:2)
Likely phishing link.
Fuck you, spammer.
Re: (Score:2)
Exactly. Deep fakes are produced by a generator model that is trained to fool a fake detector. Making good fake detectors automatically means you can train better generators.
A false truth. (Score:5, Insightful)
If something can be used to detect fakery, it can be used to create fakery.
Re: (Score:3)
Re: (Score:2)
Deepfakes are produced with GANs [wikipedia.org] and this is exactly how they work. You pit a generator and a detector against each other and they compete, with the generator learning to create better fakes while the detector tries to better tell fake from real.
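The loop is small enough to sketch end to end. A toy 1-D version in plain numpy: an affine "generator" versus a logistic-regression "discriminator", with the detector trained first and then used to improve the generator (all parameters and the Gaussian data are illustrative; real deepfake GANs are deep networks trained jointly):

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# "Real" samples come from N(4, 1); the generator maps noise z ~ N(0, 1)
# through an affine function a*z + b, starting far from the real data.
a, b = 1.0, 0.0          # generator parameters
w, c = 0.0, 0.0          # discriminator (logistic regression) parameters
lr, n = 0.1, 256

# Phase 1: train the detector to tell real from (current) fake.
for _ in range(500):
    real = rng.normal(4.0, 1.0, n)
    fake = a * rng.normal(0.0, 1.0, n) + b
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    # gradient ascent on mean log D(real) + mean log(1 - D(fake))
    w += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

# Phase 2: freeze the detector and use its gradient to train the generator.
glr = 0.05
for _ in range(500):
    z = rng.normal(0.0, 1.0, n)
    d_fake = sigmoid(w * (a * z + b) + c)
    # gradient ascent on mean log D(fake)
    b += glr * np.mean((1 - d_fake) * w)
    a += glr * np.mean((1 - d_fake) * w * z)

print(round(b, 2))  # the generator's mean has shifted toward the real mean of 4
```

The parent's point falls straight out of phase 2: once you have a working detector, its gradient tells the generator exactly how to look more real.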
Re: (Score:1)
Re: (Score:3)
The best weapon against fake images is image search engines. Simply find the original image and present it as evidence that the fake one has been altered. Same with video.
Re: (Score:2)
Re: (Score:3)
Some people will see an obvious fake and still think it's real if someone they trust convinces them.
Re: (Score:2)
Well... some people believe the most outrageous and obvious lies if they come from the right person. The part of the human race that can do fact-checking is rather small. The part that believes fact-checking is unnecessary, because they would obviously just know, is, unfortunately, rather large.
Re: (Score:2)
Some people will see an obvious fake and still think it's real if someone they trust convinces them.
Or if it confirms their strongly-held beliefs. Confirmation bias is very powerful.
Re: (Score:2)
Re: (Score:2)
You think so because it is less publicised, but have a look and a listen at this: https://www.youtube.com/watch?... [youtube.com]
Re: (Score:2)
Re: (Score:2)
About that "for the time being..."
Vocodes: Vocal Playground [vo.codes]
Re: (Score:2)
Re: (Score:2)
There's this Jordan Peterson fake voice generator that was pretty good, but fortunately just used for memes.
Re: (Score:2)
If something can be used to detect fakery, it can be used to create fakery.
The actual false truth here is believing there is a Justice system behind any of this.
The strength (or weakness) of the technology being used against you (that is why this technology exists) is not measured by the technology itself, but by the Legal system behind it.
You shouldn't be asking yourself how good (or bad) the deepfake is. You should be asking yourself how good (or bad) your lawyer is.
Re: (Score:2)
You shouldn't be asking yourself how good (or bad) the deepfake is. You should be asking yourself how good (or bad) your lawyer is.
Nah.
You should be asking yourself whether a non-dystopian society can survive fake video.
Re: (Score:2)
You shouldn't be asking yourself how good (or bad) the deepfake is. You should be asking yourself how good (or bad) your lawyer is.
Nah. You should be asking yourself whether a non-dystopian society can survive fake video.
Society isn't even smart or strong enough to survive fake news, much less fake video.
Lossy Compression (Score:4, Interesting)
"real videos of people contain physiological signals that are not visible to the eye. "
Isn't this exactly the sort of information that is dropped in modern lossy video compression? The whole point of lossy compression is to ignore perceptually irrelevant detail.
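Not necessarily gone, though: a signal smaller than the quantizer's step size can survive spatial averaging, because sensor noise dithers the quantizer. A toy numpy check, using round-to-integer as a crude stand-in for a codec's quantization (not a real video codec):

```python
import numpy as np

rng = np.random.default_rng(1)
fps, n_frames, n_px = 30, 300, 4096
t = np.arange(n_frames) / fps
pulse = 0.5 * np.sin(2 * np.pi * 1.2 * t)   # sub-LSB "blood flow" signal

# Pixels: background + pulse + sensor noise, then quantized to whole levels.
frames = 128.0 + pulse[:, None] + rng.normal(0.0, 2.0, (n_frames, n_px))
quantized = np.round(frames)                # crude stand-in for lossy coding

g = quantized.mean(axis=1)                  # spatial mean per frame
spec = np.abs(np.fft.rfft(g - g.mean()))
freqs = np.fft.rfftfreq(n_frames, d=1.0 / fps)
peak = freqs[np.argmax(spec[1:]) + 1]
print(peak)  # ~1.2 Hz: the pulse peak survives quantization
```

A real codec also quantizes in the DCT domain and predicts between frames, so how much of the PPG signal survives depends on bitrate; the paper's better-than-90-percent accuracy suggests that at typical settings, enough of it does.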
Re: Lossy Compression (Score:1)
Re: (Score:2)
The only frame-by-frame video compression format that I know of (granted, I don't know many) is Motion-JPEG; all other formats have at least inter-frame compression as well. The modern ones that people actually use, anyway.
Re: (Score:2)
all other formats have at least inter-frame compression as well.
In some instances, arguably too much inter-frame compression, because it makes seeking far, far less efficient. This is a real issue that's been around for a while, so it's hard to imagine the person you are replying to knowing anything meaningfully significant about the subject, given their demonstrable lack of knowledge of today's basic low-end video technology.
...and yet that ignorant fuck, who, let's face it, knows damn well that he doesn't know shit about video compression, decided to act like he did.
Re: (Score:2)
Are they really not visible to the eye? Do they actually produce no changes in the brain? The human eye is an incredibly sensitive instrument, albeit not without its failings.
One idea behind good lossy compression is indeed to sacrifice non-perceptual data as needed to achieve better ratios. But what if it's something that we can perceive subconsciously? And maybe that sort of thing is how people can tell that Zuckerberg isn't human, for example?
This can directly be used ... (Score:2)
Re: (Score:2)
... as an adversarial network to train the next deepfake network, making it the instrument of its own demise.
As long as they keep it secret and don't let it get onto sites like Slashdot where someone might hear about it they'll be fine.
Arms race starts in three, two, one... (Score:2)
"The researchers at SUNY and Intel found that 'biological signals are not coherently preserved in different synthetic facial parts'..."
Now that the fakers are aware of this problem, they soon will be.
They will fake also that (Score:2)
And make deep fakes more difficult to detect.
Deepfakes will remain behind the curve. (Score:1)
I think that deep fakes will remain behind the curve on this one for some time to come. Especially given that a fake needs to remain seen as a fake well into the future. Time belies all lies.
And these same three researchers followed this study with another demonstrating that this approach can be applied not only to revealing that a video is fake, but also to show what software was used to create it.
Rather than simply allowing the deep-fake software to "do a better fake," its makers will, in their endeavors to produce a better fake, ultimately end up providing more data by which their work can be fingerprinted and identified. State-based actors beware.
Re: (Score:2)
I think that deep fakes will remain behind the curve on this one for some time to come. Especially given that a fake needs to remain seen as a fake well into the future. Time belies all lies.
This technology exists for one reason. Now consider just how damn good we've gotten with other single-purpose designs, like guns.
The point here is that your survival as an innocent victim on the worst end of a deepfake accusation is completely dependent on the ability of your lawyer and your ability to pay for that defense. In this Reality, Time is measured by what you can afford in a Legal system that often puts Justice at such a premium that it's becoming quite out of reach for the common man. (Every 21st Cen
Re: (Score:2)
Point here is your survival as an innocent victim on the worst end of a deepfake accusation
Reasonable doubt defense. When deep fakes get that good, all anyone will have to do is to put forth the idea that the video evidence in question is a deep fake.
Re: (Score:2)
Point here is your survival as an innocent victim on the worst end of a deepfake accusation
Reasonable doubt defense. When deep fakes get that good, all anyone will have to do is to put forth the idea that the video evidence in question is a deep fake.
Yup. And then it's your lawyer against theirs.
Let's see how smart 12 average people are...good luck.
Makeup ?? (Score:1)
How does it handle thick makeup that some folks and newspeople wear?
How does it handle facemasks that many wear?
Nowhere in the linked summary does it say what device was used to film the videos that were tested, nor the quality of the video.
I'd think that cell phones and distance from subject would greatly impair detection.
They use the phrase "portrait videos" without specifically defining it. It probably requires an unmasked face without makeup, clearly visible and close to the camera, under good static lighting. Not that common.
Finally we have this technique (Score:2)
For men only (Score:2)
With people wearing pancake makeup, it's much harder.
Nothing to write home about (Score:1)