TY - GEN
T1 - Can Deepfakes be created on a whim?
AU - Mehta, Pulak
AU - Jagatap, Gauri
AU - Gallagher, Kevin
AU - Timmerman, Brian
AU - Deb, Progga
AU - Garg, Siddharth
AU - Greenstadt, Rachel
AU - Dolan-Gavitt, Brendon
N1 - Publisher Copyright:
© 2023 ACM.
PY - 2023/4/30
Y1 - 2023/4/30
N2 - Recent advancements in machine learning and computer vision have led to the proliferation of Deepfakes. As technology democratizes over time, there is an increasing fear that novice users can create Deepfakes to discredit others and undermine public discourse. In this paper, we conduct user studies to understand whether participants with advanced computer skills and varying levels of computer science expertise can create Deepfakes of a person saying a target statement using limited media files. We conduct two studies; in the first study (n = 39), participants try creating a target Deepfake in a constrained time frame using any tool they desire. In the second study (n = 29), participants use pre-specified deep learning-based tools to create the same Deepfake. We find that for the first study, of the participants successfully created complete Deepfakes with audio and video, whereas for the second user study, of the participants were successful in stitching target speech to the target video. We further use Deepfake detection software tools as well as human examiner-based analysis to classify the successfully generated Deepfake outputs as fake, suspicious, or real. The software detector classified of the Deepfakes as fake, whereas the human examiners classified of the videos as fake. We conclude that creating Deepfakes is a simple enough task for a novice user given adequate tools and time; however, the resulting Deepfakes are not sufficiently real-looking and are unable to completely fool detection software or human examiners.
AB - Recent advancements in machine learning and computer vision have led to the proliferation of Deepfakes. As technology democratizes over time, there is an increasing fear that novice users can create Deepfakes to discredit others and undermine public discourse. In this paper, we conduct user studies to understand whether participants with advanced computer skills and varying levels of computer science expertise can create Deepfakes of a person saying a target statement using limited media files. We conduct two studies; in the first study (n = 39), participants try creating a target Deepfake in a constrained time frame using any tool they desire. In the second study (n = 29), participants use pre-specified deep learning-based tools to create the same Deepfake. We find that for the first study, of the participants successfully created complete Deepfakes with audio and video, whereas for the second user study, of the participants were successful in stitching target speech to the target video. We further use Deepfake detection software tools as well as human examiner-based analysis to classify the successfully generated Deepfake outputs as fake, suspicious, or real. The software detector classified of the Deepfakes as fake, whereas the human examiners classified of the videos as fake. We conclude that creating Deepfakes is a simple enough task for a novice user given adequate tools and time; however, the resulting Deepfakes are not sufficiently real-looking and are unable to completely fool detection software or human examiners.
KW - deepfakes
KW - generative models
KW - video synthesis
UR - http://www.scopus.com/inward/record.url?scp=85159597852&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85159597852&partnerID=8YFLogxK
U2 - 10.1145/3543873.3587581
DO - 10.1145/3543873.3587581
M3 - Conference contribution
AN - SCOPUS:85159597852
T3 - ACM Web Conference 2023 - Companion of the World Wide Web Conference, WWW 2023
SP - 1324
EP - 1334
BT - ACM Web Conference 2023 - Companion of the World Wide Web Conference, WWW 2023
PB - Association for Computing Machinery, Inc
T2 - 2023 World Wide Web Conference, WWW 2023
Y2 - 30 April 2023 through 4 May 2023
ER -