Monocular 3D Object Detection Utilizing Auxiliary Learning With Deformable Convolution

Cited by: 4
Authors
Chen, Jiun-Han [1 ]
Shieh, Jeng-Lun [1 ]
Haq, Muhamad Amirul [1 ]
Ruan, Shanq-Jang [1 ]
Affiliations
[1] Natl Taiwan Univ Sci & Technol, Dept Elect & Comp Engn, Taipei 10607, Taiwan
Keywords
Three-dimensional displays; Object detection; Solid modeling; Feature extraction; Training; Computational modeling; Task analysis; 3D object detection; monocular camera; driving scene understanding; auxiliary learning; deep learning;
DOI
10.1109/TITS.2023.3319556
CLC number
TU [Building Science];
Discipline classification code
0813;
Abstract
Monocular 3D object detection is a crucial component of autonomous driving systems, and the safety of autonomous vehicles depends heavily on a well-designed detection system. Developing a robust and efficient 3D object detection algorithm is therefore a major goal for institutes and researchers. A 3D sense of the scene is essential for autonomous vehicles and robots, as it allows the system to understand its surroundings and react accordingly. Compared with stereo-based and LiDAR-based methods, monocular 3D object detection is challenging because it must infer complex 3D features from 2D information alone, which makes it low-cost and less computationally intensive and gives it great potential. However, the performance of monocular methods suffers from the lack of depth information. In this paper, we propose a simple, end-to-end, and effective network for monocular 3D object detection that does not require external training data. Our work is inspired by auxiliary learning: we use a robust feature extractor as our backbone and multiple regression heads to learn auxiliary knowledge. These auxiliary regression heads are discarded after training to improve inference efficiency, allowing us to take advantage of auxiliary learning while the model learns critical information more effectively. The proposed method achieves 17.28% and 20.10% at the moderate difficulty level of the Car category on the KITTI benchmark test set and validation set, respectively, outperforming previous monocular 3D object detection approaches.
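The auxiliary-learning scheme described in the abstract (a shared feature extractor with deformable convolution, a main 3D regression head, and auxiliary regression heads used only during training) can be sketched as follows. This is a minimal PyTorch-style illustration, not the authors' implementation: the module names, channel sizes, backbone layout, and the choice of auxiliary targets (2D boxes, keypoints) are all assumptions made for this sketch.

# Minimal sketch: training-only auxiliary heads on a shared backbone that
# uses a deformable-convolution block. Names and sizes are illustrative.
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d


class DeformBlock(nn.Module):
    """3x3 deformable convolution whose sampling offsets are predicted
    from the input feature map by a plain convolution."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        # 2 offsets (x, y) per position of the 3x3 kernel -> 18 channels.
        self.offset = nn.Conv2d(in_ch, 18, kernel_size=3, padding=1)
        self.deform = DeformConv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.relu(self.deform(x, self.offset(x)))


class Mono3DDetector(nn.Module):
    """Shared backbone, a main 3D head, and auxiliary heads that are
    evaluated only while training and skipped at inference."""

    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 7, stride=2, padding=3), nn.ReLU(inplace=True),
            DeformBlock(64, 128),
            DeformBlock(128, 256),
        )
        # Main head: class heatmap plus 3D box parameters (offset, depth,
        # dimensions, orientation), lumped into one regression map here.
        self.cls_head = nn.Conv2d(256, num_classes, 1)
        self.box3d_head = nn.Conv2d(256, 8, 1)
        # Auxiliary heads (training only); their targets are assumptions.
        self.aux_heads = nn.ModuleDict({
            "box2d": nn.Conv2d(256, 4, 1),
            "keypoints": nn.Conv2d(256, 16, 1),
        })

    def forward(self, image: torch.Tensor) -> dict:
        feat = self.backbone(image)
        out = {"cls": self.cls_head(feat), "box3d": self.box3d_head(feat)}
        if self.training:
            # Auxiliary predictions add extra loss terms during training;
            # they are never computed at inference time.
            for name, head in self.aux_heads.items():
                out[name] = head(feat)
        return out


if __name__ == "__main__":
    model = Mono3DDetector()
    x = torch.randn(1, 3, 96, 320)
    model.train()
    print(sorted(model(x).keys()))      # main + auxiliary outputs
    model.eval()
    with torch.no_grad():
        print(sorted(model(x).keys()))  # main outputs only
    # The auxiliary parameters can be removed entirely after training,
    # e.g. with: del model.aux_heads

In this sketch the auxiliary heads only add loss terms during optimization; because they are plain per-pixel convolutions on the shared feature map, dropping them after training leaves the inference path and its cost unchanged.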
Pages: 2424-2436
Number of pages: 13