Multi-Scale Human-Object Interaction Detector

Cited by: 12
Authors
Cheng, Yamin [1]
Wang, Zhi [1]
Zhan, Wenhan [1]
Duan, Hancong [1]
Affiliations
[1] Univ Elect Sci & Technol China, Sch Comp Sci & Engn, Chengdu 611731, Peoples R China
Keywords
Transformers; Detectors; Computer architecture; Task analysis; Decoding; Iterative decoding; Feature extraction; Human-object interaction; vision transformer; multi-scale; NETWORK
DOI
10.1109/TCSVT.2022.3216663
Chinese Library Classification (CLC)
TM [Electrical engineering]; TN [Electronics and communication technology]
Discipline classification codes
0808; 0809
Abstract
Transformers are transforming the landscape of computer vision, particularly for image-level recognition and instance-level detection. The human-object interaction detection transformer (HOI-TR) is the first transformer-based end-to-end learning system for human-object interaction (HOI) detection, while the vision transformer is the first patch-based transformer architecture for image-level recognition and instance-level detection, building a simple multi-stage structure from single-scale patches. In this paper, we build a transformer-based multi-scale human-object interaction detector (MHOI), a novel method that integrates the vision transformer and the HOI detection transformer rather than directly stacking the two, since the vision transformer's single-scale patch partitioning lacks the hierarchical architecture needed to handle large variations in the scale of visual entities. Specifically, MHOI embeds features of the same size (i.e., sequence length) from patches of variable scales simultaneously by using overlapping convolutional patch embedding, and then introduces an efficient transformer decoder whose queries are designed around anchor points, together with essential auxiliary techniques that boost HOI detection performance. Extensive experiments on several benchmarks demonstrate that the proposed framework consistently outperforms prior methods, achieving 29.67 mAP on HICO-DET and 58.7 mAP on V-COCO.
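The abstract's central mechanism, embedding variable-scale patches into token sequences of identical length via overlapping convolutional patch embedding, can be illustrated with a short sketch. The code below is not the authors' implementation; it is a minimal PyTorch illustration assuming a PVT-style design (convolution with kernel size larger than stride), and the class name, parameter values, and fusion strategy are all hypothetical.

```python
# Minimal sketch (assumed PVT-style design, not the authors' code):
# overlapping convolutional patch embedding. A conv with kernel_size > stride
# produces overlapping patches while the output sequence length is fixed by
# the stride, so patches of different scales yield equally long token sequences.
import torch
import torch.nn as nn


class OverlappingPatchEmbed(nn.Module):  # hypothetical class name
    def __init__(self, in_chans=3, embed_dim=64, patch_size=7, stride=4):
        super().__init__()
        # kernel_size > stride -> neighbouring patches overlap;
        # padding keeps the output grid aligned with the stride.
        self.proj = nn.Conv2d(in_chans, embed_dim,
                              kernel_size=patch_size, stride=stride,
                              padding=patch_size // 2)
        self.norm = nn.LayerNorm(embed_dim)

    def forward(self, x):
        x = self.proj(x)                      # (B, C, H/stride, W/stride)
        B, C, H, W = x.shape
        x = x.flatten(2).transpose(1, 2)      # (B, H*W, C): token sequence
        return self.norm(x), (H, W)


if __name__ == "__main__":
    # Two embeddings with different patch (kernel) sizes but the same stride
    # give token sequences of identical length, so multi-scale patch features
    # can be combined position-wise (illustrative parameter values only).
    img = torch.randn(1, 3, 224, 224)
    fine = OverlappingPatchEmbed(patch_size=3, stride=4)
    coarse = OverlappingPatchEmbed(patch_size=7, stride=4)
    t_fine, _ = fine(img)
    t_coarse, _ = coarse(img)
    assert t_fine.shape == t_coarse.shape     # same sequence length and dim
```

Under these assumptions, the stride alone determines the token count, which is what allows patches of different receptive scales to be embedded "simultaneously" into features of the same size, as described in the abstract.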
Pages: 1827-1838
Page count: 12