Deepfakes are used for a variety of nefarious purposes, from disinformation campaigns to inserting people into porn, and the doctored images are increasingly difficult to detect.
A new artificial intelligence tool offers a surprisingly easy way to spot them: look at the light reflected in the eyes.
The system was created by computer scientists at the University of Buffalo. When testing portrait-style photos, the tool was 94% effective at detecting deepfake images.
The system exposes the fakes by scanning the corneas, which have a mirror-like surface that generates reflective patterns when illuminated with light.
In a photo of a real face taken by a camera, the reflections in both eyes will be similar because they are seeing the same scene. But deepfake images synthesized by generative adversarial networks (GANs) generally fail to accurately capture this resemblance.
Instead, they often have inconsistencies, such as different geometric shapes or inconsistent locations of reflections.
The AI system looks for these deviations by mapping a face and analyzing the light reflected in each eyeball.
It generates a score that serves as a similarity metric. The lower the score, the more likely the face is to be a deepfake.
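The idea behind the score can be sketched as an intersection-over-union (IoU) comparison of the bright specular regions in each eye crop: a real photo's highlights overlap closely, while a GAN's often don't. This is only an illustrative sketch, not the paper's actual pipeline; the thresholding approach, function names, and alignment assumptions below are my own simplifications.

```python
import numpy as np

def highlight_mask(eye_patch, threshold=0.9):
    """Binarize a grayscale eye crop (values in [0, 1]) to keep only the
    brightest pixels, a rough stand-in for the corneal specular highlight."""
    return eye_patch >= threshold

def reflection_similarity(left_eye, right_eye, threshold=0.9):
    """IoU between the thresholded highlight masks of the two eyes
    (assumed already aligned and equally sized). A score near 1.0
    suggests consistent reflections; a low score flags a possible fake."""
    a = highlight_mask(left_eye, threshold)
    b = highlight_mask(right_eye, threshold)
    union = np.logical_or(a, b).sum()
    if union == 0:  # no highlight detected in either eye
        return 0.0
    return np.logical_and(a, b).sum() / union

# Toy example: matching highlights vs. a displaced one
real_left = np.zeros((32, 32)); real_left[10:14, 10:14] = 1.0
real_right = real_left.copy()                              # same spot
fake_right = np.zeros((32, 32)); fake_right[20:24, 20:24] = 1.0  # shifted
```

Here `reflection_similarity(real_left, real_right)` returns 1.0, while the shifted highlight in `fake_right` yields 0.0, mirroring the "different geometric shapes or inconsistent locations" the real system keys on.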
The system has been shown to be very effective in detecting deepfakes taken from This Person Does Not Exist, a repository of images created with the StyleGAN2 architecture. However, the study’s authors recognize that it has several limitations.
The most obvious downside to the tool is that it relies on a light source being reflected in both eyes. The telltale inconsistencies in those reflections can be removed with manual post-processing, and if an eye is not visible in the image, the method will not work.
It has also only proven its effectiveness on portrait images. If the face in the photo is not looking at the camera, the system will likely produce false positives.
The researchers plan to study these problems to improve the effectiveness of their method. In its current form, it won’t detect the more sophisticated deepfakes, but it could still spot many of the cruder ones.
You can read the study paper on the arXiv preprint server.
Published March 11, 2021 – 18:04 UTC