Exploiting semantic-level affinities with a mask-guided network for temporal action proposal in videos

Authors
Yu Yang
Mengmeng Wang
Jianbiao Mei
Yong Liu
Affiliations
[1] Zhejiang University, Institute of Cyber
Source
Applied Intelligence | 2023, Vol. 53
Keywords
Temporal action proposal generation; Temporal action localization; Attention; Transformer
DOI: not available
Abstract
Temporal action proposal (TAP) aims to detect the starting and ending times of action instances in untrimmed videos, which is fundamental to large-scale video analysis and human action understanding. The main challenge of TAP lies in modeling representative temporal relations in long untrimmed videos. Existing state-of-the-art methods achieve temporal modeling by building local-level, proposal-level, or global-level temporal dependencies. Local methods lack a wide receptive field, while proposal-level and global-level methods lack focus on action frames and suffer from background distractions. In this paper, we propose that learning semantic-level affinities captures more practical information. Specifically, by modeling semantic associations between frames and action units, action segments (foregrounds) can aggregate supportive cues from other co-occurring actions, and non-action clips (backgrounds) can learn to discriminate themselves from action frames. To this end, we propose a novel framework named the Mask-Guided Network (MGNet) to build semantic-level temporal associations for the TAP task. First, we propose a Foreground Mask Generation (FMG) module to adaptively generate a foreground mask representing the locations of action units throughout the video. Second, we design a Mask-Guided Transformer (MGT) that exploits the foreground mask to guide the self-attention mechanism to focus on foreground frames and compute semantic affinities with them. Finally, these two modules are jointly optimized in a unified framework. MGNet models intra-semantic similarities for foregrounds, extracting supportive action cues for boundary refinement; it also builds inter-semantic distances for backgrounds, providing semantic gaps that suppress false positives and distractions.
Extensive experiments are conducted on two challenging datasets, ActivityNet-1.3 and THUMOS14, and the results demonstrate that our method achieves superior performance.
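The abstract does not give implementation details of the Mask-Guided Transformer, but the core idea (a foreground mask steering self-attention toward action frames) can be illustrated with a minimal sketch. All names below and the additive log-mask bias are assumptions for illustration, not the paper's actual formulation:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def mask_guided_attention(q, k, v, fg_mask, eps=1e-6):
    """Single-head attention whose logits are biased by a foreground mask.

    q, k, v: (T, D) per-frame features; fg_mask: (T,) foreground
    probabilities in [0, 1], e.g. from an FMG-style module.
    Adding log(fg_mask) to the attention logits down-weights keys with
    low foreground probability, so each frame aggregates cues mainly
    from action (foreground) frames.
    """
    d = q.shape[-1]
    logits = q @ k.T / np.sqrt(d)             # (T, T) frame-to-frame affinities
    logits = logits + np.log(fg_mask + eps)   # suppress background keys
    attn = softmax(logits, axis=-1)
    return attn @ v, attn
```

With a hard mask such as `[1, 1, 0, 0]`, nearly all attention mass lands on the first two (foreground) frames, mimicking how the MGT focuses semantic-affinity computation on action units.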
Pages: 15516–15536 (20 pages)