Deep learning-based semantic segmentation with novel navigation line extraction for autonomous agricultural robots

Cited by: 1
Authors
Nkwocha, Chijioke Leonard [1 ,2 ]
Wang, Ning [1 ]
Affiliations
[1] Department of Biosystems and Agricultural Engineering, Oklahoma State University, Stillwater, OK 74078, USA
[2] Present address: Department of Biological Systems Engineering, Virginia Tech, Blacksburg, VA 24061, USA
Source
Discover Artificial Intelligence | 2025, Vol. 5, Issue 1
Funding
U.S. National Science Foundation
Keywords
Agricultural robot; Autonomous navigation; Computer vision; Deep learning; Semantic segmentation; Transfer learning
Agricultural robot; Autonomous navigation; Computer vision; Deep learning; Semantic segmentation; Transfer learning;
DOI
10.1007/s44163-025-00301-0
Abstract
In the rapidly evolving field of agriculture, the integration of autonomous systems is essential for enhancing productivity and efficiency. This study addresses the challenge of navigation line extraction in autonomous agricultural robots, a critical component for precise field operations. Three deep learning-based semantic segmentation models, ENet, Deeplabv3+, and PSPNet, were trained on corn crop row images to extract the robot's traversable path. We then proposed a novel navigation line extraction algorithm based on the Douglas–Peucker algorithm, which uses the output mask from the semantic segmentation model to extract the centre navigation line for robot guidance. The experimental results showed that PSPNet achieved the highest mean intersection over union (mIoU) of 96.50%, followed by Deeplabv3+ (96.30%) and ENet (95.13%). While ENet showed more consistent performance across various lighting conditions, the novel algorithm demonstrated remarkable accuracy, reducing angle errors to an average of 1.1°, 1.6°, and 1.6° for ENet, Deeplabv3+, and PSPNet, respectively, compared to 4.6°, 4.9°, and 6.7° using the baseline method. This improvement is critical for ensuring precise and stable robot navigation in agricultural fields. Beyond its technical contributions, this study offers practical implications for real-world deployment, ensuring reliable operation in dynamic agricultural environments. By addressing limitations in conventional navigation techniques, the proposed method enhances robot maneuverability in corn crop fields, particularly under challenging lighting conditions. However, its application across different crop types and terrains, as well as further optimization of real-time processing efficiency, remains to be explored. This work represents a significant step toward achieving robust, autonomous navigation in precision agriculture. © The Author(s) 2025.
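The abstract describes feeding the segmentation mask into a Douglas–Peucker-based routine to recover a centre navigation line. The paper's exact pipeline is not given here, so the following is only a minimal illustrative sketch of the general idea: take per-row midpoints of the traversable region in a binary mask, then simplify that centre polyline with the classic Douglas–Peucker algorithm (all function names and the `epsilon` tolerance are hypothetical choices, not the authors' implementation).

```python
import math

def douglas_peucker(points, epsilon):
    """Classic Douglas-Peucker polyline simplification: keep the two
    endpoints; if the farthest interior point deviates from the chord
    by more than epsilon, keep it and recurse on both halves."""
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    dx, dy = x2 - x1, y2 - y1
    norm = math.hypot(dx, dy) or 1.0
    dmax, idx = 0.0, 0
    for i in range(1, len(points) - 1):
        px, py = points[i]
        # Perpendicular distance from (px, py) to the endpoint chord.
        d = abs(dy * (px - x1) - dx * (py - y1)) / norm
        if d > dmax:
            dmax, idx = d, i
    if dmax > epsilon:
        left = douglas_peucker(points[: idx + 1], epsilon)
        right = douglas_peucker(points[idx:], epsilon)
        return left[:-1] + right  # drop duplicated split point
    return [points[0], points[-1]]

def centre_line_from_mask(mask, epsilon=2.0):
    """mask: 2-D list of 0/1 values (1 = traversable path pixel).
    Collects the midpoint of the path region in each image row, then
    simplifies the resulting centre polyline for robot guidance."""
    centres = []
    for y, row in enumerate(mask):
        cols = [x for x, v in enumerate(row) if v]
        if cols:
            centres.append(((cols[0] + cols[-1]) / 2.0, float(y)))
    return douglas_peucker(centres, epsilon)
```

For a straight corridor the simplified line collapses to its two endpoints, from which a heading-angle error against the image's vertical axis could then be computed; a real deployment would more likely run on NumPy/OpenCV masks (e.g. `cv2.approxPolyDP` implements the same simplification).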