A Novel Multi-Modal Deep Learning Approach for Real-Time Live Event Detection Using Video and Audio Signals: Integrating Audio-Visual Fusion for Robust Real-Time Event Recognition

Cited by: 0
Authors
Pavadareni, R. [1 ]
Prasina, A. [2 ]
Pandi, Samuthira V. [3 ]
Khrais, Ibrahim Mohammad [4 ]
Jain, Alok [5 ]
Karthikeyan [6 ]
Affiliations
[1] Department of AI&DS, Chennai Institute of Technology, Chennai
[2] Department of ECE, Chennai Institute of Technology, Chennai
[3] Centre for Advanced Wireless Integrated Technology, Chennai Institute of Technology, Chennai
[4] Faculty of Economics and Administrative Sciences-Islamic Banks, Zarqa University, Zarqa
[5] School of Electronics and Electrical Engineering, Lovely Professional University, Phagwara
[6] Department of Electronics and Communication Engineering, Vel Tech Multi Tech Dr.Rangarajan Dr.Sakunthala Engineering College, Avadi - Vel Tech Road, Avadi, Chennai
Keywords
audio-video signal concatenation; convolutional neural network (CNN); early fusion; feature fusion; long short-term memory (LSTM); Mel-frequency cepstral coefficients (MFCC); multi-modality; residual network (ResNet)
DOI
10.14569/IJACSA.2025.0160623
Abstract
Recent developments in live event detection have focused primarily on single-modal systems, most of which rely on audio signals and classification approaches based on the Mel-spectrogram. Although effective in some applications, single-modal systems struggle to capture the complexity of real-world events, reducing their reliability in dynamically changing environments. This study presents a novel multi-modal deep learning approach that combines audio and visual signals to improve the accuracy and robustness of live event detection. The innovation lies in a two-stream LSTM pipeline that models both input modalities with temporal consistency while maintaining real-time processing through feature-level fusion. Unlike many recent transformer-based models, the approach uses proven techniques (MFCC, 2D CNN, ResNet, and LSTM) in a latency-aware, deployment-friendly architecture suitable for embedded and edge-level event detection. The AVE (Audio-Visual Event) dataset, consisting of 28 categories, is used. For the visual modality, video frames undergo feature extraction through a 2D ResNet CNN and temporal analysis through an LSTM. Simultaneously, the audio modality employs MFCC (Mel-frequency cepstral coefficients) for feature extraction and an LSTM to capture temporal dependencies. The features extracted from the audio and video modalities are concatenated for fusion, leveraging the complementary nature of the two inputs to create a more comprehensive framework. The proposed approach achieves 85.19% accuracy on audio-visual event detection, outperforming single-modal baselines (audio-only and video-only models) through the effective fusion of spatial and temporal cues. © (2025), (Science and Information Organization). All rights reserved.
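The feature-level (early) fusion described in the abstract can be sketched as follows. This is a minimal illustration using NumPy with dummy per-timestep embeddings standing in for the two LSTM stream outputs; the embedding sizes (512 visual, 128 audio), the sequence length, and the mean-pooled linear classifier head are illustrative assumptions, not values taken from the paper. Only the 28-class output matches the AVE dataset it uses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dummy per-timestep embeddings from the two streams (T = 10 timesteps).
# The feature dimensions are illustrative assumptions, not from the paper.
T = 10
visual_feats = rng.standard_normal((T, 512))  # stand-in for ResNet+LSTM visual output
audio_feats = rng.standard_normal((T, 128))   # stand-in for MFCC+LSTM audio output

# Feature-level (early) fusion: concatenate the modality features per timestep.
fused = np.concatenate([visual_feats, audio_feats], axis=1)  # shape (T, 640)

# Hypothetical classifier head: mean-pool over time, then a linear layer
# with softmax over the 28 AVE event categories.
num_classes = 28
W = rng.standard_normal((fused.shape[1], num_classes)) * 0.01
logits = fused.mean(axis=0) @ W
probs = np.exp(logits - logits.max())
probs /= probs.sum()
```

In a real two-stream model the random tensors above would be replaced by the actual LSTM hidden states of each modality, but the concatenation step itself is the same operation.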
Pages: 227-240
Page count: 13