Multi-Scale Human-Object Interaction Detector

Cited by: 11
Authors
Cheng, Yamin [1 ]
Wang, Zhi [1 ]
Zhan, Wenhan [1 ]
Duan, Hancong [1 ]
Affiliations
[1] Univ Elect Sci & Technol China, Sch Comp Sci & Engn, Chengdu 611731, Peoples R China
Keywords
Transformers; Detectors; Computer architecture; Task analysis; Decoding; Iterative decoding; Feature extraction; Human-object interaction; vision transformer; multi-scale; NETWORK
DOI
10.1109/TCSVT.2022.3216663
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology]
Discipline Classification Code
0808; 0809
Abstract
Transformers are transforming the landscape of computer vision, especially for image-level recognition and instance-level detection tasks. The human-object interaction detection transformer (HOI-TR) is the first transformer-based end-to-end learning system for human-object interaction (HOI) detection, while the vision transformer, the first patch-based transformer architecture for image-level recognition and instance-level detection, builds a simple multi-stage structure for multi-scale representation from single-scale patches. In this paper, we build a transformer-based multi-scale human-object interaction detector (MHOI), a novel method that integrates the vision transformer and the HOI detection transformer rather than directly stacking the two, since single-scale patch partitioning leaves the vision transformer without the hierarchical architecture needed to handle large variations in the scale of visual entities. Specifically, MHOI embeds features of the same size (i.e., sequence length) from patches of variable scales simultaneously through overlapping convolutional patch embedding, and then introduces an efficient transformer decoder whose queries are designed around anchor points, together with essential auxiliary techniques, to boost HOI detection performance. Extensive experiments on several benchmarks demonstrate that the proposed framework consistently outperforms prior methods, achieving 29.67 mAP on HICO-DET and 58.7 mAP on V-COCO.
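The sketch below is a rough illustration of the overlapping convolutional patch embedding idea described in the abstract, where patches of several scales share one stride and therefore yield the same sequence length. It is not the authors' implementation; the kernel sizes, stride, embedding dimension, and fusion by summation are illustrative assumptions only.

```python
# Minimal sketch of multi-scale overlapping convolutional patch embedding.
# All hyperparameters below are assumptions for illustration, not the paper's configuration.
import torch
import torch.nn as nn


class MultiScaleOverlapPatchEmbed(nn.Module):
    """Embed an image with several overlapping patch sizes that share one stride,
    so every branch produces the same number of tokens (same sequence length)."""

    def __init__(self, in_chans=3, embed_dim=256, patch_sizes=(3, 5, 7), stride=4):
        super().__init__()
        # kernel_size > stride gives overlapping patches; "same"-style padding keeps
        # every branch at H/stride x W/stride output resolution.
        self.branches = nn.ModuleList(
            [nn.Conv2d(in_chans, embed_dim, kernel_size=k, stride=stride, padding=k // 2)
             for k in patch_sizes]
        )
        self.norm = nn.LayerNorm(embed_dim)

    def forward(self, x):
        # x: (B, C, H, W) -> tokens: (B, (H/stride) * (W/stride), embed_dim)
        feats = [branch(x) for branch in self.branches]   # identical spatial size per branch
        fused = torch.stack(feats, dim=0).sum(dim=0)      # fuse scales (summation is one simple choice)
        tokens = fused.flatten(2).transpose(1, 2)         # (B, N, embed_dim)
        return self.norm(tokens)


if __name__ == "__main__":
    x = torch.randn(2, 3, 224, 224)
    tokens = MultiScaleOverlapPatchEmbed()(x)
    print(tokens.shape)  # torch.Size([2, 3136, 256]): 56 x 56 tokens regardless of patch scale
```

Because every branch uses the same stride, each patch scale maps to the same token count, which is what allows embeddings from variable-scale patches to be fused into one fixed-length sequence before being passed to the transformer decoder.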
Pages: 1827-1838
Number of pages: 12