Convolutional long short-term memory-based approach for deepfakes detection from videos

Cited by: 5
Authors
Nawaz, Marriam [1 ]
Javed, Ali [1 ]
Irtaza, Aun [2 ]
Affiliations
[1] UET Taxila, Dept Software Engn, Taxila 47050, Punjab, Pakistan
[2] UET Taxila, Dept Comp Sci, Taxila 47050, Punjab, Pakistan
Keywords
CNN; Deepfakes; Bi-LSTM; Deep learning; Multimedia forensics; Saliency detection; Images
DOI
10.1007/s11042-023-16196-x
CLC number
TP [Automation Technology, Computer Technology]
Discipline code
0812
Abstract
The great development in the area of Artificial Intelligence (AI) has introduced tremendous advancements in information technology. Moreover, the introduction of lightweight machine learning (ML) techniques allows applications to work with limited storage and processing power. Deepfakes are among the most prominent of such applications of this era, generating a large amount of fake and modified audiovisual data. The creation of such fake data poses a serious risk to the security and privacy of people around the globe. Accurate detection and classification of real and deepfake content is a challenging task due to the progression of Generative Adversarial Networks (GANs), which produce manipulated content so convincing that it is impossible for people to recognize it with the naked eye. In this work, we present a deep learning (DL)-based approach, namely the convolutional long short-term memory (C-LSTM) method, for deepfake detection from videos. More specifically, the spatial information of the input sample is computed by employing various pre-trained models, such as VGG16, VGG19, ResNet50, XceptionNet, GoogleNet, and DenseNet. Further, we propose a novel feature descriptor called Dense-Swish-Net121, while the Bi-LSTM model is utilized to compute the temporal information. Lastly, the final decision is predicted from both the frame-level and temporal-level information. A detailed comparison of all CNN models with the Bi-LSTM approach is performed, and the reported results confirm that the proposed Dense-Swish-Net121 with Bi-LSTM performs well for deepfake detection.
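The pipeline described in the abstract (per-frame spatial features from a CNN backbone, fed to a Bi-LSTM for temporal modelling, then a clip-level decision) can be sketched as below. This is a minimal illustration, not the authors' implementation: the tiny stand-in backbone replaces the paper's pre-trained networks (e.g. Dense-Swish-Net121), and all layer sizes are arbitrary assumptions. The Swish activation is available in PyTorch as `nn.SiLU`.

```python
import torch
import torch.nn as nn

class CLSTMSketch(nn.Module):
    """Hypothetical sketch of a C-LSTM pipeline:
    frame-level CNN features -> Bi-LSTM -> real/fake logits."""
    def __init__(self, feat_dim=128, hidden=64):
        super().__init__()
        # Toy backbone standing in for the paper's pre-trained CNNs
        # (VGG16/19, ResNet50, XceptionNet, GoogleNet, DenseNet);
        # nn.SiLU is the Swish activation used in Dense-Swish-Net121.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.SiLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.SiLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        # Bidirectional LSTM aggregates the per-frame features over time.
        self.bilstm = nn.LSTM(feat_dim, hidden,
                              batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 2)  # real vs. deepfake

    def forward(self, clip):
        # clip: (batch, frames, 3, H, W)
        b, t = clip.shape[:2]
        # Frame-level (spatial) features, one vector per frame.
        feats = self.backbone(clip.flatten(0, 1)).view(b, t, -1)
        # Temporal-level information from the Bi-LSTM.
        seq, _ = self.bilstm(feats)
        # Final clip-level decision from the last time step.
        return self.head(seq[:, -1])

model = CLSTMSketch()
logits = model(torch.randn(2, 8, 3, 64, 64))  # 2 clips of 8 frames each
print(logits.shape)  # torch.Size([2, 2])
```

Taking the last Bi-LSTM output is one simple fusion choice; averaging over time steps or attention pooling are equally plausible ways to combine frame-level and temporal information into the final prediction.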
Pages: 16977-17000
Number of pages: 24
Related papers
49 records in total
[31]   Image Authenticity Detection Using DWT and Circular Block-Based LTrP Features [J].
Nawaz, Marriam ;
Mehmood, Zahid ;
Nazir, Tahira ;
Masood, Momina ;
Tariq, Usman ;
Munshi, Asmaa Mahdi ;
Mehmood, Awais ;
Rashid, Muhammad .
CMC-COMPUTERS MATERIALS & CONTINUA, 2021, 69 (02) :1927-1944
[32]   Single and multiple regions duplication detections in digital images with applications in image forensic [J].
Nawaz, Marriam ;
Mehmood, Zahid ;
Bilal, Muhammad ;
Munshi, Asmaa Mahdi ;
Rashid, Muhammad ;
Yousaf, Rehan Mehmood ;
Rehman, Amjad ;
Saba, Tanzila .
JOURNAL OF INTELLIGENT & FUZZY SYSTEMS, 2021, 40 (06) :10351-10371
[33]   Melanoma localization and classification through faster region-based convolutional neural network and SVM [J].
Nawaz, Marriam ;
Masood, Momina ;
Javed, Ali ;
Iqbal, Javed ;
Nazir, Tahira ;
Mehmood, Awais ;
Ashraf, Rehan .
MULTIMEDIA TOOLS AND APPLICATIONS, 2021, 80 (19) :28953-28974
[34]  
Nazir Tahira, 2021, Proceedings of 2021 International Conference on Artificial Intelligence (ICAI), P33, DOI 10.1109/ICAI52203.2021.9445228
[35]   Copy move forgery detection and segmentation using improved mask region-based convolution network (RCNN) [J].
Nazir, Tahira ;
Nawaz, Marriam ;
Masood, Momina ;
Javed, Ali .
APPLIED SOFT COMPUTING, 2022, 131
[36]  
Nguyen T. T., 2019, ARXIV
[37]   Improved Generalizability of Deep-Fakes Detection Using Transfer Learning Based CNN Framework [J].
Ranjan, Pranjal ;
Patil, Sarvesh ;
Kazi, Faruk .
2020 3RD INTERNATIONAL CONFERENCE ON INFORMATION AND COMPUTER TECHNOLOGIES (ICICT 2020), 2020, :86-90
[38]  
Roy R, 2022, 3D CNN ARCHITECTURES
[39]  
Selvaraju RR, 2020, INT J COMPUT VISION, V128, P336, DOI [10.1109/ICCV.2017.74, 10.1007/s11263-019-01228-7]
[40]   A comprehensive survey on passive techniques for digital video forgery detection [J].
Shelke, Nitin Arvind ;
Kasana, Singara Singh .
MULTIMEDIA TOOLS AND APPLICATIONS, 2021, 80 (04) :6247-6310