Interpretable and Trustworthy Deepfake Detection via Dynamic Prototypes

Cited by: 46
Authors
Trinh, Loc [1]
Tsang, Michael [1]
Rambhatla, Sirisha [1]
Liu, Yan [1]
Affiliations
[1] Univ Southern Calif, Los Angeles, CA 90089 USA
Source
2021 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV 2021) | 2021
Funding
U.S. National Science Foundation
DOI
10.1109/WACV48630.2021.00202
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
In this paper, we propose a novel human-centered approach for detecting forgery in face images, using dynamic prototypes as a form of visual explanation. Currently, most state-of-the-art deepfake detection methods are based on black-box models that process videos frame by frame for inference, and few closely examine their temporal inconsistencies. However, the existence of such temporal artifacts within deepfake videos is key to detecting and explaining deepfakes to a supervising human. To this end, we propose the Dynamic Prototype Network (DPNet), an interpretable and effective solution that utilizes dynamic representations (i.e., prototypes) to explain deepfake temporal artifacts. Extensive experimental results show that DPNet achieves competitive predictive performance, even on unseen testing datasets such as Google's DeepFakeDetection, DeeperForensics, and Celeb-DF, while providing easy referential explanations of deepfake dynamics. On top of DPNet's prototypical framework, we further formulate temporal logic specifications based on these dynamics to check our model's compliance with desired temporal behaviors, hence providing trustworthiness for such critical detection systems.
Pages: 1972-1982
Page count: 11
Related Papers
74 records in total
[21]   SlowFast Networks for Video Recognition [J].
Feichtenhofer, Christoph ;
Fan, Haoqi ;
Malik, Jitendra ;
He, Kaiming .
2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019), 2019, :6201-6210
[22]   Interpretable Explanations of Black Boxes by Meaningful Perturbation [J].
Fong, Ruth C. ;
Vedaldi, Andrea .
2017 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 2017, :3449-3457
[23]
Goodfellow, Ian J., 2015, Proceedings of the International Conference on Learning Representations (ICLR)
[24]
Gowal, S., 2018, On the Effectiveness of Interval Bound Propagation for Training Verifiably Robust Models
[25]
Hao, K., 2019, MIT Technology Review, June 6
[26]   Safety Verification of Deep Neural Networks [J].
Huang, Xiaowei ;
Kwiatkowska, Marta ;
Wang, Sen ;
Wu, Min .
COMPUTER AIDED VERIFICATION, CAV 2017, PT I, 2017, 10426 :3-29
[27]
Ingram, David, 2019, NBC
[28]   DeeperForensics-1.0: A Large-Scale Dataset for Real-World Face Forgery Detection [J].
Jiang, Liming ;
Li, Ren ;
Wu, Wayne ;
Qian, Chen ;
Loy, Chen Change .
2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2020, :2886-2895
[29]
Katz, G., 2017, arXiv:1702.01135
[30]
Khodabakhsh, A., 2018, 2018 International Conference of the Biometrics Special Interest Group (BIOSIG)