StochasticFormer: Stochastic Modeling for Weakly Supervised Temporal Action Localization

Cited by: 5
Authors
Shi, Haichao [1 ]
Zhang, Xiao-Yu [1 ]
Li, Changsheng [2 ]
Affiliations
[1] Chinese Acad Sci, Inst Informat Engn, Beijing 100193, Peoples R China
[2] Beijing Inst Technol, Beijing 100081, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Location awareness; Stochastic processes; Feature extraction; Videos; Transformers; Training; Annotations; Temporal action localization; action recognition; stochastic process;
DOI
10.1109/TIP.2023.3244411
CLC number
TP18 [Theory of Artificial Intelligence];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Weakly supervised temporal action localization (WS-TAL) aims to identify the time intervals corresponding to actions of interest in untrimmed videos, given only video-level supervision. Most existing WS-TAL methods face two common challenges, under-localization and over-localization, which inevitably cause severe performance deterioration. To address these issues, this paper proposes a transformer-structured stochastic process modeling framework, namely StochasticFormer, which fully exploits finer-grained interactions among the intermediate predictions to achieve further refined localization. StochasticFormer is built on a standard attention-based pipeline that derives preliminary frame/snippet-level predictions. A pseudo localization module then generates variable-length pseudo action instances with the corresponding pseudo labels. Using the pseudo "action instance - action category" pairs as fine-grained pseudo supervision, the stochastic modeler learns the underlying interactions among the intermediate predictions with an encoder-decoder network. The encoder consists of a deterministic path and a latent path, which capture local and global information respectively; the decoder subsequently integrates the two to obtain reliable predictions. The framework is optimized with three carefully designed losses: the video-level classification loss, the frame-level semantic coherence loss, and the ELBO loss. Extensive experiments on two benchmarks, THUMOS14 and ActivityNet1.2, show the efficacy of StochasticFormer compared with state-of-the-art methods.
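The abstract's stochastic modeler (a deterministic path plus a latent Gaussian path, trained with an ELBO) follows the general neural-process recipe. Below is a minimal, hypothetical NumPy sketch of that recipe only, not the authors' implementation: the mean-pooled deterministic path, the toy linear decoder, and all names are illustrative assumptions.

```python
import numpy as np

def gaussian_kl(mu_q, var_q, mu_p, var_p):
    """KL(N(mu_q, var_q) || N(mu_p, var_p)) for diagonal Gaussians."""
    return 0.5 * np.sum(np.log(var_p / var_q)
                        + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)

def elbo(snippet_feats, pseudo_labels, rng=None):
    """Toy ELBO for a neural-process-style encoder-decoder (illustrative only).

    Deterministic path: mean-pooled snippet features (local evidence).
    Latent path: a diagonal Gaussian posterior over a global latent z.
    Decoder: a toy linear read-out producing per-snippet action scores.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    T, D = snippet_feats.shape
    # Deterministic path: one pooled representation per video.
    r = snippet_feats.mean(axis=0)                       # (D,)
    # Latent path: amortized posterior q(z | context), sampled
    # with the reparameterization trick.
    mu_q, var_q = r, np.full(D, 0.1)
    z = mu_q + np.sqrt(var_q) * rng.standard_normal(D)
    # Decoder: combine deterministic and latent paths into per-snippet scores.
    logits = snippet_feats @ (r + z) / D                 # (T,)
    probs = 1.0 / (1.0 + np.exp(-logits))
    # Reconstruction term: Bernoulli log-likelihood of the pseudo labels.
    rec = np.sum(pseudo_labels * np.log(probs + 1e-8)
                 + (1.0 - pseudo_labels) * np.log(1.0 - probs + 1e-8))
    # KL regularizer against a standard-normal prior p(z).
    kl = gaussian_kl(mu_q, var_q, np.zeros(D), np.ones(D))
    return rec - kl
```

Maximizing this quantity trades off fitting the pseudo "action instance - action category" supervision (the reconstruction term) against keeping the latent posterior close to its prior (the KL term), which is the role the ELBO loss plays in the framework described above.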
Pages: 1379-1389
Page count: 11