Integrating Spatial and Temporal Contextual Information for Improved Video Visualization

Cited by: 0
Authors
Singh, Pratibha [1 ]
Kushwaha, Alok Kumar Singh [1 ]
Affiliations
[1] Guru Ghasidas Vishwavidyalaya, Dept Comp Sci & Engn, Bilaspur, India
Source
FOURTH CONGRESS ON INTELLIGENT SYSTEMS, VOL 2, CIS 2023 | 2024, Vol. 869
Keywords
Video visualization; Moment retrieval; Highlights detection; Self-attention network
DOI
10.1007/978-981-99-9040-5_30
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Video representation learning is crucial for many tasks, and self-attention has emerged as an effective technique for capturing long-range dependencies. However, by computing pairwise correlations along the spatial and temporal dimensions simultaneously, existing methods often conflate the distinct contextual information conveyed by each dimension. To address this limitation, we propose a novel module that models spatial and temporal correlations sequentially, enabling spatial contexts to be efficiently integrated into temporal modeling. By incorporating this module into a 2D CNN, we obtain a self-attention network tailored for video visualization. We evaluate our approach on two benchmark datasets for moment retrieval and highlight detection: Charades-STA and QVHighlights. Extensive experiments show that the proposed self-attention network outperforms current methods on both datasets; in particular, our models consistently surpass shallower networks and networks that use fewer modalities. In summary, the proposed self-attention module advances video representation learning by effectively capturing spatial and temporal correlations, and the improvements achieved on moment retrieval and highlight detection validate the efficacy and versatility of our approach.
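The abstract describes a factorized design: self-attention over the spatial positions within each frame, followed by self-attention over frames, applied to features from a 2D CNN. The paper's code is not reproduced in this record, so the PyTorch sketch below only illustrates that sequential spatial-then-temporal attention idea; the class name, tensor layout, and hyperparameters are assumptions, not the authors' implementation.

# A minimal sketch (assumed shapes and names) of sequential
# spatial-then-temporal self-attention, not the authors' code.
import torch
import torch.nn as nn


class SequentialSpatioTemporalAttention(nn.Module):
    """Factorized self-attention: spatial attention within each frame,
    then temporal attention across frames at each spatial location, so
    spatial context is computed first and fed into temporal modeling."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.spatial_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.temporal_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, T, H, W, C) feature map from a 2D CNN backbone.
        B, T, H, W, C = x.shape

        # 1) Spatial attention: attend over the H*W positions of each frame.
        s = x.reshape(B * T, H * W, C)
        s_norm = self.norm1(s)
        s = s + self.spatial_attn(s_norm, s_norm, s_norm, need_weights=False)[0]

        # 2) Temporal attention: attend over the T frames at each spatial
        #    location, using the spatially contextualized features from step 1.
        t = s.reshape(B, T, H * W, C).permute(0, 2, 1, 3).reshape(B * H * W, T, C)
        t_norm = self.norm2(t)
        t = t + self.temporal_attn(t_norm, t_norm, t_norm, need_weights=False)[0]

        # Restore the (B, T, H, W, C) layout.
        return t.reshape(B, H * W, T, C).permute(0, 2, 1, 3).reshape(B, T, H, W, C)


# Usage: a batch of 2 clips, 8 frames each, with 7x7 feature maps of 256 channels.
if __name__ == "__main__":
    block = SequentialSpatioTemporalAttention(dim=256, num_heads=4)
    feats = torch.randn(2, 8, 7, 7, 256)
    print(block(feats).shape)  # torch.Size([2, 8, 7, 7, 256])

One consequence of this factorization, under the assumptions above, is efficiency: attention cost scales roughly with T*(H*W)^2 + H*W*T^2 instead of the (T*H*W)^2 of joint spatio-temporal attention, which is consistent with the abstract's claim of efficiently integrating spatial contexts into temporal modeling.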
Pages: 415-424
Page count: 10