
The Subtle Effects of Blood Circulation Can Be Used To Detect Deepfakes (ieee.org)

An anonymous reader quotes a report from IEEE Spectrum: This work, done by two researchers at Binghamton University (Umur Aybars Ciftci and Lijun Yin) and one at Intel (Ilke Demir), was published in IEEE Transactions on Pattern Analysis and Machine Intelligence this past July. In an article titled "FakeCatcher: Detection of Synthetic Portrait Videos using Biological Signals," the authors describe software they created that takes advantage of the fact that real videos of people contain physiological signals that are not visible to the eye. In particular, video of a person's face contains subtle shifts in color that result from pulses in blood circulation. You might imagine that these changes would be too minute to detect merely from a video, but viewing videos that have been enhanced to exaggerate these color shifts will quickly disabuse you of that notion. This phenomenon forms the basis of a technique called photoplethysmography, or PPG for short, which can be used, for example, to monitor newborns without having to attach anything to their very sensitive skin.
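
To make the PPG idea concrete, here is a minimal sketch of green-channel pulse extraction from video. This is not the authors' implementation; it assumes OpenCV and SciPy are installed, and it uses a fixed, hypothetical region of interest where a real system would detect and track the face:

```python
# Minimal green-channel rPPG sketch (illustrative, not FakeCatcher itself):
# average the green channel over a skin patch in each frame, then band-pass
# filter around plausible human heart rates (0.7-4 Hz, i.e. 42-240 bpm).
import cv2
import numpy as np
from scipy.signal import butter, filtfilt

def extract_ppg(video_path, roi=(100, 100, 200, 200)):
    """roi = (x, y, w, h) of a skin patch; coordinates here are placeholders."""
    x, y, w, h = roi
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if FPS is unreported
    means = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        patch = frame[y:y + h, x:x + w]
        means.append(patch[:, :, 1].mean())  # channel 1 = green in BGR order
    cap.release()
    signal = np.asarray(means) - np.mean(means)
    b, a = butter(4, [0.7, 4.0], btype="band", fs=fps)  # pulse band-pass
    return filtfilt(b, a, signal), fps

def estimate_heart_rate(ppg, fps):
    """Dominant frequency of the filtered signal, in beats per minute."""
    spectrum = np.abs(np.fft.rfft(ppg))
    freqs = np.fft.rfftfreq(len(ppg), d=1.0 / fps)
    return 60.0 * freqs[spectrum.argmax()]
```

On real, unobstructed face video this kind of pipeline typically recovers a plausible resting heart rate; the paper's point is that the same signal behaves inconsistently in synthesized faces.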

Deep fakes don't entirely lack such circulation-induced shifts in color, but they don't recreate them with high fidelity. The researchers at SUNY and Intel found that "biological signals are not coherently preserved in different synthetic facial parts" and that "synthetic content does not contain frames with stable PPG." Translation: Deep fakes can't convincingly mimic how your pulse shows up in your face. The inconsistencies in PPG signals found in deep fakes provided these researchers with the basis for a deep-learning system of their own, dubbed FakeCatcher, which can categorize videos of a person's face as either real or fake with greater than 90 percent accuracy. And these same three researchers followed this study with another demonstrating that this approach can be applied not only to revealing that a video is fake, but also to show what software was used to create it.
In a newer paper (PDF), researchers showed that they "can distinguish with greater than 90 percent accuracy whether the video was real, or which of four different deep-fake generators was used to create a bogus video," the report adds.
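
The coherence cue lends itself to a toy illustration. FakeCatcher itself assembles spatiotemporal PPG maps and trains a CNN on them; the sketch below is only a stand-in heuristic that compares filtered pulse signals from two facial regions, reusing the hypothetical extract_ppg() from the sketch above, on the logic that in genuine video the two regions should agree:

```python
# Toy cross-region coherence check (a stand-in heuristic, not the paper's
# classifier): pulse signals from two skin patches of the same real face
# should be strongly correlated; in synthetic video they often are not.
import numpy as np

def ppg_coherence(video_path, left_roi, right_roi):
    left, _ = extract_ppg(video_path, left_roi)    # e.g. left cheek
    right, _ = extract_ppg(video_path, right_roi)  # e.g. right cheek
    n = min(len(left), len(right))
    return np.corrcoef(left[:n], right[:n])[0, 1]  # Pearson r in [-1, 1]

# A real detector would learn its decision rule; 0.5 is an arbitrary stand-in.
# if ppg_coherence("clip.mp4", (80, 160, 40, 40), (200, 160, 40, 40)) < 0.5:
#     print("weak cross-region pulse coherence: possibly synthetic")
```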

Comments:
  • A false truth. (Score:5, Insightful)

    by backslashdot ( 95548 ) on Friday October 02, 2020 @10:46PM (#60567290)

    If something can be used to detect fakery, it can be used to create fakery.

    • by mark-t ( 151149 )
      This. The same mechanisms that are used to detect fakes can ultimately be fed right back into the algorithms that produce them, until the fakes are indistinguishable.
        • Deepfakes are produced with GANs [wikipedia.org], and this is exactly how they work: you pit a generator and a detector against each other and they compete, with the generator learning to create better fakes while the detector learns to better tell fake from real. (A minimal training-loop sketch follows at the end of this sub-thread.)

          • Deepfakes use an encoder/decoder model, not GANs. The new version of DFL (DeepFaceLab) recently introduced GAN warping, but it is controversial and buggy, and generally introduces unwanted grain. Wikipedia associated GANs with deepfakes in its Deepfakes article years ago, and no one has ever had the courage or authority to correct it.
      • by AmiMoJo ( 196126 )

        The best weapon against fake images is image search engines. Simply find the original image and present it as evidence that the fake one has been altered. Same with video.
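
For readers who want the GAN dynamic described a few comments up in concrete form, here is a bare-bones training step in PyTorch. The generator, discriminator (assumed to end in a sigmoid), optimizers, and data are all left to the reader; this is a sketch of the adversarial loop, not of any deepfake tool in particular:

```python
# One adversarial training step: the discriminator ("detector") learns to
# separate real from generated samples, then the generator learns to fool it.
import torch
import torch.nn.functional as F

def gan_step(generator, discriminator, g_opt, d_opt, real, z_dim=128):
    batch = real.size(0)

    # 1. Update the discriminator on real (label 1) and fake (label 0) data.
    d_opt.zero_grad()
    fake = generator(torch.randn(batch, z_dim)).detach()  # no grad to G here
    d_loss = (F.binary_cross_entropy(discriminator(real), torch.ones(batch, 1))
              + F.binary_cross_entropy(discriminator(fake), torch.zeros(batch, 1)))
    d_loss.backward()
    d_opt.step()

    # 2. Update the generator so the discriminator calls its output real.
    g_opt.zero_grad()
    fake = generator(torch.randn(batch, z_dim))
    g_loss = F.binary_cross_entropy(discriminator(fake), torch.ones(batch, 1))
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()
```

This loop is also why several commenters expect any published detector, FakeCatcher included, to end up as training signal for the next generation of fakes: any differentiable detector can in principle be dropped into the discriminator slot.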

    • Some people will see an obvious fake and still think it's real if someone they trust convinces them.

      • by gweihir ( 88907 )

        Well... some people believe the most outrageous and obvious lies if they come from the right person. The part of the human race that can do fact-checking is rather small. The part that believes fact-checking is unnecessary, because they would obviously already know, is, unfortunately, rather large.

      • Some people will see an obvious fake and still think it's real if someone they trust convinces them.

        Or if it confirms their strongly-held beliefs. Confirmation bias is very powerful.

    • For now we still can't fake voice, so we have that at least for the time being.
    • If something can be used to detect fakery, it can be used to create fakery.

      The actual false truth here is believing there is a Justice system behind any of this.

      The strength (or weakness) of the technology being used against you (that is why this technology exists) is not measured by the technology itself, but by the Legal system behind it.

      You shouldn't be asking yourself how good (or bad) the deepfake is. You should be asking yourself how good (or bad) your lawyer is.

      • You shouldn't be asking yourself how good (or bad) the deepfake is. You should be asking yourself how good (or bad) your lawyer is.

        Nah.

        You should be asking yourself: can a non-dystopian society survive fake video?

        • You shouldn't be asking yourself how good (or bad) the deepfake is. You should be asking yourself how good (or bad) your lawyer is.

          Nah. You should be asking yourself: can a non-dystopian society survive fake video?

          Society isn't even smart or strong enough to survive fake news, much less fake video.

  • Lossy Compression (Score:4, Interesting)

    by lobiusmoop ( 305328 ) on Friday October 02, 2020 @10:52PM (#60567300) Homepage

    "real videos of people contain physiological signals that are not visible to the eye. "

    Isn't this exactly the sort of information that is dropped in modern lossy video compression? The whole point of lossy compression is to ignore perceptually irrelevant detail.

    • Most of the compression in video happens on a frame-by-frame basis. This artefact is in the time dimension.
      • The only frame-by-frame video compression format that I know of (granted, I don't know many) is Motion-JPEG; all other formats have at least inter-frame compression as well. The modern ones that people actually use, anyway.

        • all others formats have at least inter-frame compression as well.

          In some instances arguably too much inter-frame compression, because it makes seeking far, far less efficient. This is a real issue that's been around for a while, so it's hard to imagine the person you are replying to knowing anything meaningfully significant about the subject, given their demonstrable lack of knowledge of today's basic low-end video technology.

          ...and yet that ignorant fuck, who let's face it knows damn well that he doesn't know shit about video compression, decided to act like he did.

    • Are they really not visible to the eye? Do they actually produce no changes in the brain? The human eye is an incredibly sensitive instrument, albeit not without its failings.

      The idea of good lossy compression is indeed to sacrifice non-perceptual data as needed to achieve better ratios. But what if it's something that we can perceive subconsciously? And maybe that sort of thing is how people can tell that Zuckerberg isn't human, for example?
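
The thread's question is empirically testable. A rough sketch, assuming ffmpeg is on the PATH and reusing the hypothetical extract_ppg() from the earlier sketch (input.mp4 is a placeholder clip), re-encodes the video at increasingly lossy x264 settings and measures how much pulse-band energy survives:

```python
# Re-encode at rising CRF (higher = lossier) and compare the ratio of
# spectral energy inside the human pulse band to the energy outside it.
import subprocess
import numpy as np

def pulse_band_ratio(ppg, fps, band=(0.7, 4.0)):
    spectrum = np.abs(np.fft.rfft(ppg)) ** 2
    freqs = np.fft.rfftfreq(len(ppg), d=1.0 / fps)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return spectrum[in_band].sum() / max(spectrum[~in_band].sum(), 1e-12)

for crf in (18, 28, 38):
    out = f"recoded_crf{crf}.mp4"
    subprocess.run(["ffmpeg", "-y", "-i", "input.mp4",
                    "-c:v", "libx264", "-crf", str(crf), out], check=True)
    ppg, fps = extract_ppg(out)
    print(f"CRF {crf}: pulse-band energy ratio = {pulse_band_ratio(ppg, fps):.3f}")
```

If the ratio collapses at high CRF, compression is indeed eating the signal; the accuracy the paper reports suggests that enough of it survives typical encodes.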

  • ... as an adversarial network to train the next deepfake network, making it its own demise.
    • ... as an adversarial network to train the next deepfake network, making it its own demise.

      As long as they keep it secret and don't let it get onto sites like Slashdot where someone might hear about it they'll be fine.

  • "The researchers at SUNY and Intel found that 'biological signals are not coherently preserved in different synthetic facial parts'..."

    Now that the fakers are aware of this problem, they soon will be.

  • And make deep fakes more difficult to detect.

  • I think that deep fakes will remain behind the curve on this one for some time to come. Especially given that a fake needs to remain seen as a fake well into the future. Time belies all lies.

    And these same three researchers followed this study with another demonstrating that this approach can be applied not only to revealing that a video is fake, but also to show what software was used to create it.

    Rather than allowing the deep-fake software to "do a better fake," its makers will, in their endeavors to produce a better fake, ultimately end up providing more data by which their work can be fingerprinted and identified. State-based actors beware.

    • I think that deep fakes will remain behind the curve on this one for some time to come. Especially given that a fake needs to remain seen as a fake well into the future. Time belies all lies.

      This technology exists for one reason. Now consider just how damn good we've gotten with other single-purpose designs, like guns.

      The point here is that your survival as an innocent victim on the worst end of a deepfake accusation is completely dependent on the ability of your lawyer and your ability to pay for that defense. In this Reality, Time is measured by what you can afford in a Legal system that often puts Justice at such a premium that it's becoming quite out of reach for the common man. (Every 21st Cen

      • by PPH ( 736903 )

        The point here is that your survival as an innocent victim on the worst end of a deepfake accusation

        Reasonable doubt defense. When deep fakes get that good, all anyone will have to do is to put forth the idea that the video evidence in question is a deep fake.

          The point here is that your survival as an innocent victim on the worst end of a deepfake accusation

          Reasonable doubt defense. When deep fakes get that good, all anyone will have to do is to put forth the idea that the video evidence in question is a deep fake.

          Yup. And then it's your lawyer against theirs.

          Let's see how smart 12 average people are...good luck.

  • How does it handle the thick makeup that some folks and newspeople wear?
    How does it handle the facemasks that many wear?

    Nowhere in the linked summary does it talk about what device was used to film the video that was tested, nor about the quality of the video.
    I'd think that cell phones and distance from the subject would greatly impair detection.
    They use the phrase "portrait videos" without specifically defining it. It probably requires an unmasked face without makeup, clear and close to the camera under good static lighting. Not that common.

  • to better catch replicants.
  • With people wearing pancake-make-up, it's much harder.

  • It is fine that such research goes on, to develop new ideas, but its practical applicability is questionable. Even leaving aside the 90% accuracy, which in itself is not good enough, and the fact that the quality of deep fakes depends not only on the quality of the model but also on the amount of resources, which here means computing time (and lower-quality models are obviously more detectable, which is probably reflected in the statistics), computer science is developing so fast that these flaws in deep fakes
