Facebook claims to have developed artificial intelligence that can detect deepfake images and even reverse-engineer them to determine how they were created, potentially helping to track down their creators.
Deepfakes are entirely synthetic images generated by artificial intelligence. Facebook's AI looks for distinctive patterns, such as faint speckles of noise or subtle abnormalities in an image's colour spectrum, to judge whether a collection of deepfakes shares a common origin.
By identifying these small fingerprints in an image, the AI can also infer details about the neural network that generated it, such as how large the model is or how it was trained.
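Facebook has not published the details of how these fingerprints are extracted, but the basic idea can be illustrated with a minimal sketch: treat the high-frequency noise residual a generator leaves behind as a signature, and compare residuals across images to guess whether they share an origin. The Gaussian-filter denoiser and correlation score below are illustrative assumptions, not Facebook's actual method.

import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(image: np.ndarray, sigma: float = 2.0) -> np.ndarray:
    # Subtract a smoothed copy of the image; what remains is mostly
    # high-frequency noise and fine texture, i.e. the "fingerprint".
    smoothed = gaussian_filter(image, sigma=sigma)
    return image - smoothed

def fingerprint_similarity(img_a: np.ndarray, img_b: np.ndarray) -> float:
    # Normalised correlation between two residuals. Assumes both inputs
    # are greyscale arrays of the same shape, with values in [0, 1].
    ra = noise_residual(img_a).ravel()
    rb = noise_residual(img_b).ravel()
    ra = (ra - ra.mean()) / (ra.std() + 1e-8)
    rb = (rb - rb.mean()) / (rb.std() + 1e-8)
    return float(np.dot(ra, rb) / ra.size)

Under this kind of scheme, two images produced by the same generator would be expected to score noticeably higher than images from unrelated generators.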
“I thought there’s no way this is going to work,” says Tal Hassner at Facebook.
“How would we, just by looking at a photo, be able to tell how many layers a deep neural network had, or what loss function it was trained with?”
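In machine-learning terms, this is a model-parsing task: a network looks at an image and predicts properties of the generator that made it. The sketch below shows one plausible shape for such a system, with a shared backbone, a regression head for a continuous hyperparameter (a rough layer count) and a classification head for a categorical one (the loss family). The architecture, targets and sizes are assumptions for illustration, not Facebook's design.

import torch
import torch.nn as nn

class ModelParser(nn.Module):
    def __init__(self, num_loss_types: int = 4):
        super().__init__()
        # Small convolutional backbone turns an image into a feature vector.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # One head regresses an approximate layer count, the other
        # classifies which loss function the generator was trained with.
        self.layer_head = nn.Linear(64, 1)
        self.loss_head = nn.Linear(64, num_loss_types)

    def forward(self, x: torch.Tensor):
        features = self.backbone(x)
        return self.layer_head(features), self.loss_head(features)

parser = ModelParser()
fake_batch = torch.randn(8, 3, 128, 128)         # stand-in deepfake images
layer_pred, loss_type_logits = parser(fake_batch)
print(layer_pred.shape, loss_type_logits.shape)  # torch.Size([8, 1]) torch.Size([8, 4])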
Hassner and his colleagues put the AI to the test on a database of 100,000 deepfake images created by 100 different generative models, each of which produced 1000 images.
Some of the images were used to train the model, while others were held back and presented to it as images of unknown origin.
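The article does not say exactly how the split was made, but the key point is that the detector is evaluated on images whose generators it never saw during training. A hedged sketch of such a split, holding out entire generative models rather than individual images, might look like this (the 80/20 proportion is an assumption):

import random

NUM_MODELS = 100
IMAGES_PER_MODEL = 1000

# (model_id, image_id) pairs stand in for the real image files.
dataset = [(m, i) for m in range(NUM_MODELS) for i in range(IMAGES_PER_MODEL)]

random.seed(0)
held_out_models = set(random.sample(range(NUM_MODELS), 20))  # assumed 20% hold-out

train_set = [(m, i) for m, i in dataset if m not in held_out_models]
test_set = [(m, i) for m, i in dataset if m in held_out_models]

print(len(train_set), len(test_set))  # 80000 20000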
This put the AI’s ultimate aim to the test. “What we’re doing is looking at a photo and estimating what the design of the generative model that made it is, even if we haven’t seen that model before,” Hassner explains.
He wouldn’t specify how accurate the AI’s predictions were, but he did claim that “we’re way better than random.”
Nina Schick, author of Deep Fakes and the Infocalypse, said, “It’s a major step forward for fingerprinting.”
As Hassner and his colleagues point out, however, the AI only works on fully synthetic images, whereas many deepfakes are videos created by pasting one person’s face onto another’s body.
Schick also questions how effective the AI would be if it encountered deepfakes outside of the lab.
“A lot of the face identification models we encounter are based on academic data sets and are used in controlled environments,” she explains.
Hassner wouldn’t reveal how Facebook plans to use its new AI, but he did say that this type of work is a cat-and-mouse game with those who create deepfakes.
“We’re working on improving our identification models, while others are working on improving their generative models,” he explains. “I have no doubt that there will be a method that will completely fool us at some point.”