Explainable AI for DeepFake Detection

Cited by: 2
Authors
Mansoor, Nazneen [1 ]
Iliev, Alexander I. [1 ,2 ]
Affiliations
[1] SRH Berlin Univ Appl Sci, Berlin Sch Technol, D-10587 Berlin, Germany
[2] Bulgarian Acad Sci, Inst Math & Informat, Sofia 1113, Bulgaria
Source
APPLIED SCIENCES-BASEL | 2025 / Vol. 15 / Issue 02
Keywords
explainable artificial intelligence; deep learning; deepfake detection; explainability; convolutional neural network; VGG-16; ResNet-50; Inception V3; network dissection; face dictionary; model interpretability
DOI
10.3390/app15020725
Chinese Library Classification (CLC)
O6 [Chemistry]
Discipline code
0703
Abstract
The surge in technological advancements has raised concerns over their misuse in politics and entertainment, making reliable detection methods essential. This study introduces a deepfake detection technique that enhances interpretability using the network dissection algorithm. The research consists of two stages: (1) detection of forged images using advanced convolutional neural networks such as ResNet-50, Inception V3, and VGG-16, and (2) application of the network dissection algorithm to understand the models' internal decision-making processes. The CNNs' performance is evaluated through F1-scores ranging from 0.8 to 0.9, demonstrating their effectiveness. By analyzing the facial features learned by the models, this study provides explainable results for classifying images as real or fake. This interpretability is crucial to understanding how deepfake detection models operate. Although numerous detection models exist, they often lack transparency in their decision-making processes. This research fills that gap by offering insights into how these models distinguish real from manipulated images. The findings highlight the importance of interpretability in deep neural networks, providing a better understanding of their hierarchical structures and decision processes.
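The abstract reports F1-scores between 0.8 and 0.9 as the evaluation metric for the real/fake classifiers. As a reminder of how that metric is computed for binary deepfake detection, here is a minimal pure-Python sketch; the label and prediction vectors are hypothetical, not taken from the paper:

```python
def f1_score(y_true, y_pred, positive=1):
    """F1 = harmonic mean of precision and recall for the positive class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Hypothetical example: 1 = fake, 0 = real
labels = [1, 1, 1, 0, 0, 1, 0, 1, 0, 0]
preds  = [1, 1, 0, 0, 0, 1, 1, 1, 0, 0]
print(round(f1_score(labels, preds), 2))  # → 0.8
```

In practice a library implementation such as scikit-learn's `f1_score` would be used; the point here is only to make the reported 0.8-0.9 range concrete.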
Pages: 21