Learning Action Completeness from Points for Weakly-supervised Temporal Action Localization

Cited by: 50
Authors
Lee, Pilhyeon [1]
Byun, Hyeran [1,2]
Affiliations
[1] Yonsei Univ, Dept Comp Sci, Seoul, South Korea
[2] Yonsei Univ, Grad Sch AI, Seoul, South Korea
Source
2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021) | 2021
Funding
National Research Foundation, Singapore
Keywords
DOI
10.1109/ICCV48922.2021.01339
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
We tackle the problem of localizing temporal intervals of actions with only a single frame label for each action instance for training. Owing to label sparsity, existing work fails to learn action completeness, resulting in fragmentary action predictions. In this paper, we propose a novel framework, where dense pseudo-labels are generated to provide completeness guidance for the model. Concretely, we first select pseudo background points to supplement point-level action labels. Then, taking the points as seeds, we search for the optimal sequence that is likely to contain complete action instances while agreeing with the seeds. To learn completeness from the obtained sequence, we introduce two novel losses that contrast action instances with background ones in terms of action score and feature similarity, respectively. Experimental results demonstrate that our completeness guidance indeed helps the model locate complete action instances, leading to large performance gains, especially under high IoU thresholds. Moreover, we demonstrate the superiority of our method over existing state-of-the-art methods on four benchmarks: THUMOS'14, GTEA, BEOID, and ActivityNet. Notably, our method even performs comparably to recent fully-supervised methods, at a 6x cheaper annotation cost. Our code is available at https://github.com/Pilhyeon.
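To make the two contrastive completeness losses from the abstract concrete, below is a minimal PyTorch sketch of their general shape: one loss pushes pooled scores of action instances above those of background instances, the other pushes action features away from background features. All shapes, pooling choices, margins, and function names here are illustrative assumptions, not the authors' exact formulation; the official implementation is at https://github.com/Pilhyeon.

```python
import torch
import torch.nn.functional as F


def score_contrast_loss(action_scores, background_scores, margin=1.0):
    # Hinge-style contrast on pooled class scores (shapes assumed):
    # action_scores: (N_act,), background_scores: (N_bkg,).
    # Penalize any background instance whose score is not at least
    # `margin` below every action instance's score.
    diff = background_scores.unsqueeze(0) - action_scores.unsqueeze(1)  # (N_act, N_bkg)
    return F.relu(diff + margin).mean()


def feature_contrast_loss(action_feats, background_feats, margin=0.5):
    # Contrast on feature similarity (shapes assumed):
    # action_feats: (N_act, D), background_feats: (N_bkg, D).
    # Penalize action-background pairs whose cosine similarity
    # exceeds `margin`, i.e., push the two sets apart in feature space.
    act = F.normalize(action_feats, dim=1)
    bkg = F.normalize(background_feats, dim=1)
    sim = act @ bkg.t()  # (N_act, N_bkg) pairwise cosine similarities
    return F.relu(sim - margin).mean()


if __name__ == "__main__":
    torch.manual_seed(0)
    act_s, bkg_s = torch.rand(4), torch.rand(6)               # pooled instance scores
    act_f, bkg_f = torch.randn(4, 128), torch.randn(6, 128)   # pooled instance features
    total = score_contrast_loss(act_s, bkg_s) + feature_contrast_loss(act_f, bkg_f)
    print(total.item())
```

In this sketch the margins and the pairwise hinge form are arbitrary stand-ins; the key idea they illustrate is that action and background instances are supervised jointly, so the model must separate complete action intervals from the surrounding background rather than only firing on the most discriminative frames.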
Pages: 13628-13637 (10 pages)