End-to-End Temporal Action Detection With Transformer

Cited by: 107
Authors
Liu, Xiaolong [1 ]
Wang, Qimeng [1 ]
Hu, Yao [2 ]
Tang, Xu [2 ]
Zhang, Shiwei [3 ]
Bai, Song [4 ]
Bai, Xiang [5 ]
Affiliations
[1] Huazhong Univ Sci & Technol, Sch Elect Informat & Commun, Wuhan 430074, Peoples R China
[2] Alibaba Grp, Beijing 100102, Peoples R China
[3] Alibaba Grp, Hangzhou 311121, Peoples R China
[4] ByteDance Inc, Singapore 048583, Singapore
[5] Huazhong Univ Sci & Technol, Sch Artificial Intelligence & Automat, Wuhan 430074, Peoples R China
Keywords
Pipelines; Transformers; Proposals; Training; Feature extraction; Task analysis; Detectors; Transformer; temporal action detection; temporal action localization; action recognition;
DOI
10.1109/TIP.2022.3195321
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Temporal action detection (TAD) aims to determine the semantic label and the temporal interval of every action instance in an untrimmed video. It is a fundamental and challenging task in video understanding. Previous methods tackle this task with complicated pipelines. They often need to train multiple networks and involve hand-designed operations, such as non-maximal suppression and anchor generation, which limit the flexibility and prevent end-to-end learning. In this paper, we propose an end-to-end Transformer-based method for TAD, termed TadTR. Given a small set of learnable embeddings called action queries, TadTR adaptively extracts temporal context information from the video for each query and directly predicts action instances with the context. To adapt Transformer to TAD, we propose three improvements to enhance its locality awareness. The core is a temporal deformable attention module that selectively attends to a sparse set of key snippets in a video. A segment refinement mechanism and an actionness regression head are designed to refine the boundaries and confidence of the predicted instances, respectively. With such a simple pipeline, TadTR requires lower computation cost than previous detectors, while preserving remarkable performance. As a self-contained detector, it achieves state-of-the-art performance on THUMOS14 (56.7% mAP) and HACS Segments (32.09% mAP). Combined with an extra action classifier, it obtains 36.75% mAP on ActivityNet-1.3. Code is available at https://github.com/xlliu7/TadTR.
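The core component described in the abstract, temporal deformable attention, lets each action query attend to a small, learned set of K sampled snippets rather than every snippet in the video. The following is a minimal NumPy sketch of that idea only, not the authors' implementation: the function name, the projection matrices `W_off`/`W_att`, and the single-head, single-level setup are all illustrative assumptions (TadTR's actual module is multi-head and learned end-to-end in PyTorch).

```python
import numpy as np

def temporal_deformable_attention(queries, features, ref_points, W_off, W_att):
    """Sketch of 1D deformable attention: each query attends to K sampled
    snippets around its reference point instead of all T snippets.

    queries:    (Q, C) action-query embeddings
    features:   (T, C) snippet features along the temporal axis
    ref_points: (Q,)   normalized reference locations in [0, 1]
    W_off:      (C, K) projects a query to K temporal sampling offsets
    W_att:      (C, K) projects a query to K attention logits
    """
    T = features.shape[0]
    offsets = queries @ W_off                      # (Q, K) fractional offsets
    logits = queries @ W_att                       # (Q, K)
    weights = np.exp(logits - logits.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)  # softmax over the K samples

    # Sampling locations in absolute snippet coordinates, clipped to the video.
    locs = np.clip(ref_points[:, None] * (T - 1) + offsets, 0, T - 1)  # (Q, K)

    # Linear interpolation of features at fractional temporal positions.
    lo = np.floor(locs).astype(int)
    hi = np.minimum(lo + 1, T - 1)
    frac = (locs - lo)[..., None]                  # (Q, K, 1)
    sampled = (1 - frac) * features[lo] + frac * features[hi]  # (Q, K, C)

    # Attention-weighted sum over the K sampled snippets.
    return (weights[..., None] * sampled).sum(axis=1)          # (Q, C)
```

Because each query touches only K snippets (typically K is 4 or so), the cost is linear in the number of queries rather than quadratic in video length, which is why the abstract can claim a lower computation cost than dense-attention or anchor-based pipelines.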
Pages: 5427-5441
Page count: 15
Related Papers (50 in total)
  • [31] EENED: End-to-End Neural Epilepsy Detection based on Convolutional Transformer
    Liu, Chenyu
    Zhou, Xinliang
    Liu, Yang
    2023 IEEE CONFERENCE ON ARTIFICIAL INTELLIGENCE, CAI, 2023, : 368 - 371
  • [32] RQFormer: Rotated Query Transformer for end-to-end oriented object detection
    Zhao, Jiaqi
    Ding, Zeyu
    Zhou, Yong
    Zhu, Hancheng
    Du, Wen-Liang
    Yao, Rui
    El Saddik, Abdulmotaleb
    EXPERT SYSTEMS WITH APPLICATIONS, 2025, 266
  • [33] Temporal Global Correlation Network for End-to-End Action Proposal Generation
    Ma B.-T.
    Zhang S.-W.
    Gao C.-X.
    Sang N.
    Tien Tzu Hsueh Pao/Acta Electronica Sinica, 2022, 50 (10): : 2452 - 2461
  • [34] An End-to-End Air Writing Recognition Method Based on Transformer
    Tan, Xuhang
    Tong, Jicheng
    Matsumaru, Takafumi
    Dutta, Vibekananda
    He, Xin
    IEEE ACCESS, 2023, 11 : 109885 - 109898
  • [35] End-to-end point cloud registration with transformer
    Wang, Yong
    Zhou, Pengbo
    Geng, Guohua
    An, Li
    Zhang, Qi
    ARTIFICIAL INTELLIGENCE REVIEW, 2024, 58 (01)
  • [36] RESC: REfine the SCore with adaptive transformer head for end-to-end object detection
    Wang, Honglie
    Jiang, Rong
    Xu, Jian
    Sun, Shouqian
    NEURAL COMPUTING & APPLICATIONS, 2022, 34 (14) : 12017 - 12028
  • [37] RDB-DINO: An Improved End-to-End Transformer With Refined De-Noising and Boxes for Small-Scale Ship Detection in SAR Images
    Qin, Chuan
    Zhang, Linping
    Wang, Xueqian
    Li, Gang
    He, You
    Liu, Yuhui
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2025, 63
  • [38] An End-to-End Transformer Model for Crowd Localization
    Liang, Dingkang
    Xu, Wei
    Bai, Xiang
    COMPUTER VISION - ECCV 2022, PT I, 2022, 13661 : 38 - 54
  • [39] RIMformer: An End-to-End Transformer for FMCW Radar Interference Mitigation
    Zhang, Ziang
    Chen, Guangzhi
    Weng, Youlong
    Yang, Shunchuan
    Jia, Zhiyu
    Chen, Jingxuan
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2024, 62
  • [40] Sequential Transformer for End-to-End Person Search
    Chen, Long
    Xu, Jinhua
    NEURAL INFORMATION PROCESSING, ICONIP 2023, PT IV, 2024, 14450 : 226 - 238