Auguring Fake Face Images Using Dual Input Convolution Neural Network

Cited by: 6
Authors
Bhandari, Mohan [1]
Neupane, Arjun [2]
Mallik, Saurav [3,4]
Gaur, Loveleen [5,6,7]
Qin, Hong [8]
Affiliations
[1] Samriddhi Coll, Dept Sci & Technol, Bhaktapur 44800, Nepal
[2] Cent Queensland Univ, Sch Engn & Technol, Norman Gardens, Rockhampton, Qld 4701, Australia
[3] Harvard Univ, Sch Publ Hlth, Dept Environm Hlth, Boston, MA 02115 USA
[4] Univ Arizona, Tucson, AZ 85721 USA
[5] Amity Univ, Amity Int Business Sch, Noida 201303, India
[6] Taylor Univ, Sch Comp Sci, Subang Jaya 47500, Malaysia
[7] Univ South Pacific, Grad Sch Business, Suva 1168, Fiji
[8] Univ Tennessee, Dept Comp Sci & Engn, Chattanooga, TN 37996 USA
Keywords
Convolutional Neural Network (CNN); deepfakes; face detection; SHAP; XAI
DOI
10.3390/jimaging9010003
Chinese Library Classification (CLC)
TB8 [Photographic Technology]
Discipline Code
0804
Abstract
Deepfake technology uses auto-encoders and generative adversarial networks to replace or artificially construct fine-tuned faces, emotions, and sounds. Although there have been significant advancements in identifying particular fake images, a reliable counterfeit face detector is still lacking, making it difficult to identify fake photos that have undergone further compression, blurring, scaling, and similar manipulations. Deep learning models can close this research gap by correctly recognizing phony images, whose objectionable content might otherwise encourage fraudulent activity and cause major problems. To reduce the gap and enlarge the network's fields of view, we propose a dual input convolutional neural network (DICNN) model trained with ten-fold cross-validation, achieving an average training accuracy of 99.36 ± 0.62%, a test accuracy of 99.08 ± 0.64%, and a validation accuracy of 99.30 ± 0.94%. Additionally, we used SHapley Additive exPlanations (SHAP) as an explainable AI (XAI) method, applying SHAP to the trained model to compute Shapley values that visually explain its results and support interpretability. The proposed model is well suited for adoption by forensic and security experts because of its distinctive features and considerably higher accuracy than state-of-the-art methods.
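To make the dual-input idea concrete, below is a minimal sketch of a two-branch CNN for binary real/fake face classification, assuming the TensorFlow/Keras API. The input resolution, branch depths, choice of second input view, and merge strategy are illustrative assumptions, not the published DICNN configuration.

# Minimal dual-input CNN sketch (assumed TensorFlow/Keras API); layer sizes,
# input resolution, and merge strategy are illustrative, not the paper's exact model.
from tensorflow.keras import layers, models

def conv_branch(inputs, name):
    # One convolutional feature-extraction branch.
    x = layers.Conv2D(32, 3, activation="relu", name=name + "_conv1")(inputs)
    x = layers.MaxPooling2D(2, name=name + "_pool1")(x)
    x = layers.Conv2D(64, 3, activation="relu", name=name + "_conv2")(x)
    x = layers.MaxPooling2D(2, name=name + "_pool2")(x)
    return layers.Flatten(name=name + "_flat")(x)

# Two views of the same face (e.g., full image plus a rescaled or cropped copy)
# feed parallel branches so the network sees the input at two fields of view.
input_a = layers.Input(shape=(128, 128, 3), name="view_a")
input_b = layers.Input(shape=(128, 128, 3), name="view_b")

merged = layers.concatenate([conv_branch(input_a, "a"), conv_branch(input_b, "b")])
x = layers.Dense(128, activation="relu")(merged)
x = layers.Dropout(0.5)(x)
output = layers.Dense(1, activation="sigmoid", name="real_vs_fake")(x)

model = models.Model(inputs=[input_a, input_b], outputs=output)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

A model built this way can be wrapped in a ten-fold cross-validation loop (e.g., with scikit-learn's KFold) and handed to the shap library (for instance shap.GradientExplainer) to obtain the kind of Shapley-value visualizations the abstract describes; both steps are likewise sketches of the general workflow rather than the authors' exact pipeline.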
Page count: 11