Deepfake detectors can be defeated, computer scientists show for the first time
by University of California – San Diego
Posted April 4, 2021
Systems designed to detect deepfakes (videos that manipulate real-life footage via artificial intelligence) can be deceived, computer scientists showed for the first time at the WACV 2021 conference, which took place online Jan. 5 to 9, 2021.
Researchers showed that detectors can be defeated by inserting inputs called adversarial examples into every video frame. Adversarial examples are slightly manipulated inputs that cause artificial intelligence systems, such as machine learning models, to make mistakes. The team also showed that the attack still works after the videos are compressed.
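The article does not spell out the attack's exact algorithm, but the general mechanics of an adversarial example can be sketched with the well-known fast gradient sign method (FGSM): nudge each pixel by a tiny amount in the direction that increases the detector's loss. The minimal sketch below uses a hypothetical logistic "real vs. fake" classifier as a stand-in for a deepfake detector (the real work attacks deep networks); all names and parameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in detector: a logistic classifier over flattened
# pixels, with weights assumed known to the attacker (white-box setting).
w = rng.normal(size=3 * 8 * 8)
b = 0.0

def detect(x):
    """Detector's probability that frame x is fake."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def fgsm_perturb(x, y, eps=0.03):
    """Shift each pixel of frame x (true label y) by at most +/-eps along
    the sign of the loss gradient, pushing the detector toward the wrong
    answer while keeping pixels in the valid [0, 1] range."""
    grad = (detect(x) - y) * w          # d(cross-entropy)/dx for a logistic model
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

frame = rng.random(3 * 8 * 8)           # one flattened 8x8 RGB "video frame"
adv = fgsm_perturb(frame, y=1.0)        # the frame is labeled fake (y = 1)

# Per-pixel change is imperceptibly small, yet the "fake" score drops.
print(abs(adv - frame).max() <= 0.03)   # True
print(detect(adv) <= detect(frame))     # True
```

In the attack described by the article, a perturbation like this would be computed and inserted into every frame of the video, and made robust enough to survive compression.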
The extensive spread of fake videos through social media platforms has raised significant concerns worldwide, particularly by undermining the credibility of digital media, the researchers point out. "If the attackers have some knowledge of the detection system, they can design inputs to target the blind spots of the detector and bypass it," said Paarth Neekhara, a co-first author of the paper and a UC San Diego computer science student.