ABN: Agent-Aware Boundary Networks for Temporal Action Proposal Generation

Cited by: 13
Authors
Vo, Khoa [1]
Yamazaki, Kashu [1]
Truong, Sang [1]
Tran, Minh-Triet [2,4]
Sugimoto, Akihiro [3]
Le, Ngan [1]
Affiliations
[1] Univ Arkansas, AICV Lab, Fayetteville, AR 72703 USA
[2] Univ Sci, VNU HCM, Ho Chi Minh City 700000, Vietnam
[3] Natl Inst Informat NII, Tokyo 1018430, Japan
[4] Vietnam Natl Univ, Ho Chi Minh City 700000, Vietnam
Funding
U.S. National Science Foundation (NSF)
Keywords
Proposals; Videos; Feature extraction; Visualization; Three-dimensional displays; Task analysis; Semantics; Temporal action proposal generation; temporal action detection; agent-aware boundary network; DENSE
DOI
10.1109/ACCESS.2021.3110973
CLC Classification Number
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Temporal action proposal generation (TAPG) aims to estimate the temporal intervals of actions in untrimmed videos, a task that is challenging yet plays an important role in many video analysis and understanding tasks. Despite great progress in TAPG, most existing works ignore the human-like perception of interactions between agents and the surrounding environment: they apply a deep learning model as a black box to untrimmed videos to extract visual representations. Capturing these agent-environment interactions is therefore beneficial and can potentially improve TAPG performance. In this paper, we propose a novel framework named Agent-Aware Boundary Network (ABN), which consists of two sub-networks: (1) an Agent-Aware Representation Network that captures both agent-agent and agent-environment relationships in the video representation; and (2) a Boundary Generation Network that estimates confidence scores of temporal intervals. In the Agent-Aware Representation Network, interactions between agents are expressed through a local pathway, which focuses on the motions of the agents, whereas the overall perception of the surroundings is expressed through a global pathway, which perceives agent-environment effects at the scene level. Comprehensive evaluations on the 20-action THUMOS-14 and 200-action ActivityNet-1.3 datasets with different backbone networks (i.e., C3D, SlowFast, and Two-Stream) show that the proposed ABN robustly outperforms state-of-the-art TAPG methods regardless of the backbone network employed. We further examine proposal quality by feeding the proposals generated by our method into temporal action detection (TAD) frameworks and evaluating their detection performance.
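The two-pathway design described in the abstract can be illustrated with a minimal sketch. The PyTorch module below is a hypothetical rendering, not the authors' implementation: the class name AgentAwareRepresentation, the feature dimensions, the max-pooling over detected agents, and the concatenation-based fusion are all illustrative assumptions; only the overall idea of fusing a local (agent-level) pathway with a global (environment-level) pathway comes from the abstract.

```python
# Minimal, hypothetical sketch of the two-pathway idea in the abstract.
# All module names, dimensions, and the fusion strategy are assumptions.
import torch
import torch.nn as nn


class AgentAwareRepresentation(nn.Module):
    """Fuses a local (agent-level) pathway with a global (scene-level) pathway."""

    def __init__(self, feat_dim=400, hidden_dim=256):
        super().__init__()
        # Global pathway: perceives the whole scene per temporal snippet.
        self.global_path = nn.Sequential(
            nn.Conv1d(feat_dim, hidden_dim, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # Local pathway: projects per-agent features (e.g., from a detector)
        # so the representation focuses on agent motions and interactions.
        self.local_path = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim),
            nn.ReLU(),
        )
        self.fuse = nn.Conv1d(2 * hidden_dim, hidden_dim, kernel_size=1)

    def forward(self, snippet_feats, agent_feats):
        # snippet_feats: (B, T, feat_dim) global snippet features
        # agent_feats:   (B, T, N, feat_dim) features of N detected agents
        g = self.global_path(snippet_feats.transpose(1, 2))    # (B, H, T)
        # Max-pool over agents -> one agent-centric feature per snippet.
        a = self.local_path(agent_feats).max(dim=2).values     # (B, T, H)
        fused = torch.cat([g, a.transpose(1, 2)], dim=1)       # (B, 2H, T)
        return self.fuse(fused)                                # (B, H, T)


if __name__ == "__main__":
    model = AgentAwareRepresentation()
    snippets = torch.randn(2, 100, 400)   # 2 videos, 100 snippets each
    agents = torch.randn(2, 100, 5, 400)  # up to 5 agents per snippet
    print(model(snippets, agents).shape)  # torch.Size([2, 256, 100])
```

In the full framework this fused representation would feed a Boundary Generation Network that scores candidate temporal intervals; that stage is omitted here for brevity.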
Pages: 126431-126445
Number of pages: 15