Hybrid Deep Learning Model Based on GAN and RESNET for Detecting Fake Faces

Cited by: 5
Authors
Safwat, Soha [1 ]
Mahmoud, Ayat [2 ]
Eldesouky Fattoh, Ibrahim [3 ]
Ali, Farid [4 ]
Affiliations
[1] Egyptian Chinese Univ, Fac Engn & Technol, Software Engn & Informat Technol Dept, Cairo 4541312, Egypt
[2] MSA Univ, Fac Comp Sci, Dept Comp Sci, Cairo 3750311, Egypt
[3] Beni Suef Univ, Fac Comp & Artificial Intelligence, Dept Comp Sci, Bani Suwayf, Egypt
[4] Beni Suef Univ, Fac Comp & Artificial Intelligence, Dept Informat Technol, Bani Suwayf, Egypt
Keywords
RESNET; generative adversarial networks; deep learning; real and fake faces; face detection; channel-wise attention
DOI
10.1109/ACCESS.2024.3416910
Chinese Library Classification
TP [Automation Technology; Computer Technology]
Subject Classification
0812
Abstract
While human brains have the ability to distinguish facial characteristics, advanced technology and artificial intelligence blur the difference between genuine and modified images. The evolution of digital editing applications has led to the fabrication of very lifelike fake faces, making it harder for humans to discriminate between real and fabricated ones. Because of this, techniques such as deep learning are increasingly being used to distinguish between real and artificial faces, producing more consistent and accurate results. This paper introduces a hybrid deep learning model that merges the capabilities of Generative Adversarial Networks (GANs) and the Residual Neural Network (RESNET) architecture to detect fake faces. By integrating GANs' generative strength with RESNET's discriminative abilities, the proposed model offers a novel approach to discerning real from artificial faces. Through a comparative analysis, the performance of the hybrid model is evaluated against established pre-trained models such as VGG16 and RESNET50. Results demonstrate the superior effectiveness of the hybrid model in accurately detecting fake faces, marking a notable advancement in facial image recognition and authentication. On a benchmark dataset, the proposed model obtains strong performance measures: a precision of 0.79, recall of 0.88, F1-score of 0.83, accuracy of 0.83, and ROC AUC score of 0.825. The study's conclusions highlight the hybrid model's strong performance in identifying fake faces, particularly in terms of accuracy, precision, and recall. By combining the generative capacity of GANs with the discriminative capabilities of RESNET, the approach addresses the problems caused by increasingly sophisticated fake face generation techniques. With significant potential for use in identity verification, social media content moderation, cybersecurity, and other areas where accurately discriminating between real and altered faces is crucial, the study seeks to advance the field of fake face detection. Notably, the proposed model adds channel-wise attention mechanisms to RESNET50 at the feature extraction stage, which increases its effectiveness and boosts its overall performance.
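The abstract describes channel-wise attention added to RESNET50 at the feature-extraction stage but gives no implementation details. Below is a minimal, hypothetical PyTorch sketch of one common realization of channel-wise attention (a squeeze-and-excitation-style block) placed after a ResNet50 backbone; the reduction ratio, binary-logit head, and module names are illustrative assumptions, not the authors' code.

```python
# Sketch only: channel-wise attention over ResNet50 features for
# real-vs-fake face classification. Hyperparameters are assumptions.
import torch
import torch.nn as nn
from torchvision import models

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style channel-wise attention."""
    def __init__(self, channels: int, reduction: int = 16):  # reduction is a guess
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)   # squeeze: global average pool per channel
        self.fc = nn.Sequential(              # excitation: learn per-channel weights
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                          # reweight feature channels

class AttentiveResNet50(nn.Module):
    """ResNet50 backbone, channel attention, and a binary real/fake head."""
    def __init__(self):
        super().__init__()
        backbone = models.resnet50(weights=None)
        # Keep everything up to the last conv block (outputs 2048 channels).
        self.features = nn.Sequential(*list(backbone.children())[:-2])
        self.attention = ChannelAttention(2048)
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(2048, 1),               # single real-vs-fake logit
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.attention(self.features(x)))

model = AttentiveResNet50()
logit = model(torch.randn(1, 3, 224, 224))    # e.g., a 224x224 RGB face crop
```

The sigmoid-gated per-channel weights let the network emphasize feature channels that correlate with manipulation artifacts before the final classification, which is one plausible reading of the performance gain the abstract attributes to the attention mechanism.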
Pages: 86391-86402
Number of pages: 12