Weakly supervised temporal action localization with actionness-guided false positive suppression

Cited: 1
Authors
Li, Zhilin [1 ]
Wang, Zilei [1 ]
Liu, Qinying [1 ]
Affiliations
[1] Univ Sci & Technol China, Natl Engn Lab Brain Inspired Intelligence Technol, Hefei 230026, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Weakly supervised learning; Temporal action localization; False positive suppression; Action recognition; Self-training;
DOI
10.1016/j.neunet.2024.106307
CLC Number
TP18 [Theory of Artificial Intelligence];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Weakly supervised temporal action localization aims to locate the temporal boundaries of action instances in untrimmed videos using only video-level labels and to assign each instance its action category. It is generally solved by a "localization-by-classification" pipeline, which finds action instances by classifying video snippets. However, because this approach optimizes a video-level classification objective, the generated activation sequences often suffer from interference by class-related scenes, producing a large number of false positives in the predictions. Many existing works treat the background as an independent category, forcing models to learn to distinguish background snippets; yet under weakly supervised conditions the background information is fuzzy and uncertain, making this approach extremely difficult. To alleviate the impact of false positives, we propose a new actionness-guided false positive suppression framework that suppresses false positive backgrounds without introducing a background category. First, we propose a self-training actionness branch to learn class-agnostic actionness, which minimizes interference from class-related scene information by ignoring the video labels. Second, we propose a false positive suppression module to mine false positive snippets and suppress them. Finally, we introduce a foreground enhancement module that guides the model to learn the foreground with the help of an attention mechanism and the class-agnostic actionness. We conduct extensive experiments on three benchmarks (THUMOS14, ActivityNet1.2, and ActivityNet1.3). The results demonstrate the effectiveness of our method in suppressing false positives, and it achieves state-of-the-art performance. Code: https://github.com/lizhilin-ustc/AFPS.
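The abstract's core idea, attenuating snippets whose class activation is high but whose class-agnostic actionness is low, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function name, the threshold, and the zeroing rule are assumptions, and the paper's actual modules (self-training actionness branch, mining strategy, foreground enhancement) are more involved.

```python
import numpy as np

def suppress_false_positives(cas, actionness, act_thresh=0.5):
    """Hypothetical actionness-guided suppression sketch.

    cas: (T, C) class activation sequence from the classification branch.
    actionness: (T,) class-agnostic actionness scores in [0, 1].

    Snippets with low class-agnostic actionness are treated as likely
    false positives (e.g. class-related scenes) and their class
    activations are attenuated before localization.
    """
    cas = np.asarray(cas, dtype=float)
    act = np.asarray(actionness, dtype=float)
    # Likely-background snippets: low actionness regardless of class score.
    fp_mask = act < act_thresh
    out = cas.copy()
    out[fp_mask] = 0.0  # attenuate suspected false positives (here: zero out)
    return out
```

In a full pipeline the suppressed activation sequence would then be thresholded along the temporal axis to produce action proposals; the hard zeroing above stands in for whatever soft weighting the paper actually uses.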
Pages: 12