Explainable Deep-Fake Detection Using Visual Interpretability Methods

Cited by: 26
Authors
Malolan, Badhrinarayan [1]
Parekh, Ankit [1]
Kazi, Faruk [1]
Affiliation
[1] Veermata Jijabai Technological Institute, Centre of Excellence (CoE) in Complex and Nonlinear Dynamical Systems, Mumbai, Maharashtra, India
Source
2020 3RD INTERNATIONAL CONFERENCE ON INFORMATION AND COMPUTER TECHNOLOGIES (ICICT 2020) | 2020
Keywords
deep-fakes; deep-fake detection; faceswap; interpretability; explainable AI (XAI); LRP; LIME;
DOI
10.1109/ICICT50521.2020.00051
CLC Number
TP301 [Theory and Methods];
Subject Classification Code
081202;
Abstract
Deep-Fakes have sparked concern throughout the world because of their potentially explosive consequences. A dystopian future in which all forms of digital media are potentially compromised and public trust in government is scarce does not seem far off. If the problem is not dealt with the requisite seriousness, the situation could easily spiral out of control. Current Deep-Fake detection methods aim to solve the problem accurately, but they may fail to convince a lay person of their reliability and thus lack the trust of the general public. Since the fundamental issue revolves around earning the trust of human agents, the construction of interpretable and easily explainable models is imperative. We propose a framework to detect Deep-Fake videos using a Deep Learning approach: we have trained a Convolutional Neural Network on a database of faces extracted from FaceForensics' DeepFakeDetection dataset. Furthermore, we have applied Explainable AI (XAI) techniques such as LRP and LIME to the trained model to produce crisp visualizations of the salient image regions on which the model focuses. The prospective, and elusive, goal is to localize the facial manipulations introduced by face swaps. We hope this approach will help build trust between AI and human agents and demonstrate the applicability of XAI in real-life scenarios.
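The record contains no code, but a minimal sketch can illustrate the LIME step the abstract describes: perturbing an extracted face crop and highlighting the superpixels a trained classifier relies on. Everything model-specific below is an assumption for illustration: the model file deepfake_cnn.h5, the two-class softmax output, the 299x299 Xception-style input size, and the face_crop.png path are hypothetical; only the lime and scikit-image APIs (LimeImageExplainer.explain_instance, get_image_and_mask, mark_boundaries) are real.

import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries
from tensorflow.keras.models import load_model
from tensorflow.keras.preprocessing import image as keras_image

# Hypothetical trained CNN (assumed two-class softmax: real vs. fake);
# Xception-style networks take 299x299 RGB inputs.
MODEL_PATH = "deepfake_cnn.h5"
IMG_SIZE = (299, 299)

model = load_model(MODEL_PATH)

def predict_fn(images):
    # LIME passes a batch of perturbed copies of the face crop and
    # expects one row of class probabilities per image.
    batch = np.asarray(images, dtype=np.float32) / 255.0
    return model.predict(batch)

# One extracted face crop (path is illustrative).
face = keras_image.img_to_array(
    keras_image.load_img("face_crop.png", target_size=IMG_SIZE)
).astype(np.uint8)

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    face, predict_fn, top_labels=2, hide_color=0, num_samples=1000
)

# Keep only the superpixels that most support the top predicted class,
# then draw their boundaries over the face as a saliency-style overlay.
img, mask = explanation.get_image_and_mask(
    explanation.top_labels[0],
    positive_only=True,
    num_features=5,
    hide_rest=False,
)
overlay = mark_boundaries(img / 255.0, mask)

An analogous LRP visualization could be produced with a relevance-propagation library (for example, Captum's LRP for PyTorch models); the authors' exact pipeline is not reproduced here.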
Pages: 289-293
Number of pages: 5
Related Papers
12 in total
[1] Afchar D., Nozick V., Yamagishi J., Echizen I. MesoNet: A Compact Facial Video Forgery Detection Network. 2018 IEEE International Workshop on Information Forensics and Security (WIFS), 2018.
[2] Simonyan K., Vedaldi A., Zisserman A. Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps. arXiv:1312.6034v2, 2014.
[3] Bach S., Binder A., Montavon G., Klauschen F., Mueller K.-R., Samek W. On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation. PLOS ONE, 2015, 10(7).
[4] Chollet F. Xception: Deep Learning with Depthwise Separable Convolutions. 30th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017), 2017, pp. 1800-1807.
[5] Güera D., Delp E.J. Deepfake Video Detection Using Recurrent Neural Networks. 2018 15th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), 2018, p. 127.
[6] Gunning D. DARPA Explainable Artificial Intelligence (XAI) Program, 2016.
[7] Li Y., Lyu S. Exposing DeepFake Videos by Detecting Face Warping Artifacts. arXiv:1811.00656v3, 2019.
[8] Morch N.J.S., et al. Visualization of Neural Networks Using Saliency Maps. Proceedings of ICNN'95 - International Conference on Neural Networks, 1995.
[9] Ribeiro M.T., Singh S., Guestrin C. "Why Should I Trust You?": Explaining the Predictions of Any Classifier. KDD'16: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016, pp. 1135-1144.
[10] Rossler A., Cozzolino D., Verdoliva L., Riess C., Thies J., Niessner M. FaceForensics++: Learning to Detect Manipulated Facial Images. arXiv:1901.08971v3, 2019.