One-shot domain adaptive real-time 3D obstacle detection in farmland based on semantic-geometry-intensity fusion strategy

Cited by: 6
Authors
Wang, Tianhai [1 ]
Wang, Ning [2 ]
Xiao, Jianxing [1 ]
Miao, Yanlong [1 ]
Sun, Yifan [1 ]
Li, Han [2 ]
Zhang, Man [1 ]
Affiliations
[1] China Agr Univ, Minist Educ, Key Lab Smart Agr Syst Integrat, Beijing 100083, Peoples R China
[2] China Agr Univ, Minist Agr & Rural Affairs, Key Lab Agr Informat Acquisit Technol, Beijing 100083, Peoples R China
Keywords
3D obstacle detection; LiDAR; One-shot domain adaptation; Autonomous navigation; Smart agriculture; LOCAL DESCRIPTOR; LIDAR; HISTOGRAMS;
DOI
10.1016/j.compag.2023.108264
CLC Classification
S [Agricultural Sciences]
Subject Classification
09
Abstract
By introducing deep learning, LiDAR-based solutions have achieved impressive accuracy in 3D obstacle detection. However, existing solutions are only effective when sufficient samples can be gathered and labeled, a precondition that is difficult to satisfy in actual farmland because distinctive obstacle samples are scarce and the labeling process is time-consuming and specialized. In practice, detection models trained on specific datasets may fail to generalize to real farmland, as they lack adaptability to different categories and scenes. To address this limitation, this paper proposes a novel one-shot domain adaptive real-time 3D obstacle detection method based on a semantic-geometry-intensity fusion strategy. By introducing the concept of one-shot domain adaptation, the proposed method enables fine-grained 3D obstacle detection with just one sample per category. Specifically, a semantic-geometry-intensity space generator is designed to bridge the category gap between training and test samples. The integration of a semantic-geometry-intensity space-based classifier and a centerpoint-based anchor-free locator strikes a balance between accuracy and efficiency. Switching between an object sample enhancer and a fusion point cloud generator handles distribution differences in both points and categories. An obstacle detection system built on the proposed method has been tested in real farmland, achieving an overall F1 score of 89.54% at a frame rate of 21.32 frames per second (FPS). These experimental results demonstrate the high accuracy and efficiency of the proposed method in performing obstacle detection.
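The one-shot idea at the heart of the abstract is that an obstacle can be classified by comparing its fused semantic-geometry-intensity descriptor against a single labeled support sample per category. A minimal sketch of that concept is nearest-neighbor matching in a fused feature space; the descriptor contents, function names, and similarity measure below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def fused_descriptor(semantic, geometry, intensity):
    """Concatenate per-object feature vectors into one fused descriptor.
    The three inputs are placeholders for whatever semantic, geometric,
    and intensity features a detector extracts for each obstacle."""
    v = np.concatenate([semantic, geometry, intensity]).astype(float)
    return v / (np.linalg.norm(v) + 1e-12)  # unit-norm so a dot product is cosine similarity

def one_shot_classify(query, support_set):
    """Assign the query to the category of the most similar support sample.
    support_set maps category name -> one fused descriptor (the 'one shot')."""
    best_label, best_sim = None, -1.0
    for label, prototype in support_set.items():
        sim = float(query @ prototype)  # cosine similarity of unit-norm vectors
        if sim > best_sim:
            best_label, best_sim = label, sim
    return best_label, best_sim

# Hypothetical usage with toy two- and one-dimensional feature stubs:
support = {
    "person":  fused_descriptor(np.array([1.0, 0.0]), np.array([0.5]), np.array([0.2])),
    "tractor": fused_descriptor(np.array([0.0, 1.0]), np.array([2.0]), np.array([0.8])),
}
query = fused_descriptor(np.array([0.9, 0.1]), np.array([0.6]), np.array([0.25]))
label, sim = one_shot_classify(query, support)
```

This only illustrates why one labeled sample per category can suffice for classification; the paper's contribution lies in how the fused space is generated and adapted across domains, which this sketch does not attempt to reproduce.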
Pages: 17
References (55 total)
[1] Almalioglu, Yasin; Turan, Mehmet; Trigoni, Niki; Markham, Andrew. Deep learning-based robust positioning for all-weather autonomous driving. Nature Machine Intelligence, 2022, 4(9): 749+.
[2] Antonelli, Simone; Avola, Danilo; Cinque, Luigi; Crisostomi, Donato; Foresti, Gian Luca; Galasso, Fabio; Marini, Marco Raoul; Mecca, Alessio; Pannone, Daniele. Few-Shot Object Detection: A Survey. ACM Computing Surveys, 2022, 54(11s).
[3] Chen, Zitian; Fu, Yanwei; Zhang, Yinda; Jiang, Yu-Gang; Xue, Xiangyang; Sigal, Leonid. Multi-Level Semantic Feature Augmentation for One-Shot Learning. IEEE Transactions on Image Processing, 2019, 28(9): 4594-4605.
[4] Corral-Soto, Eduardo R.; Nabatchian, Amir; Gerdzhev, Martin; Liu, Bingbing. LiDAR few-shot domain adaptation via integrated CycleGAN and 3D object detector with joint learning delay. 2021 IEEE International Conference on Robotics and Automation (ICRA 2021), 2021: 13099-13105.
[5] Deng, J. J. AAAI Conference on Artificial Intelligence, 2021, 35: 1201.
[6] Devlin, J. Proceedings of NAACL-HLT 2019, Vol. 1: 4171.
[7] Farhadi, A. Proceedings of CVPR 2009: 1778. DOI: 10.1109/CVPRW.2009.5206772.
[8] Ge, R. arXiv, 2020. DOI: 10.48550/arXiv.2006.12671.
[9] Geiger, A. Proceedings of CVPR 2012: 3354. DOI: 10.1109/CVPR.2012.6248074.
[10] Grollius, Sara; Ligges, Manuel; Ruskowski, Jennifer; Grabmaier, Anton. Concept of an Automotive LiDAR Target Simulator for Direct Time-of-Flight LiDAR. IEEE Transactions on Intelligent Vehicles, 2023, 8(1): 825-835.