Local Motion Planner for Autonomous Navigation in Vineyards with a RGB-D Camera-Based Algorithm and Deep Learning Synergy

Cited by: 51
Authors
Aghi, Diego [1 ,2 ]
Mazzia, Vittorio [2 ,3 ,4 ]
Chiaberge, Marcello [2 ,3 ]
Affiliations
[1] Politecnico di Torino, Department of Environment, Land and Infrastructure Engineering, I-10129 Turin, Italy
[2] Politecnico di Torino, Interdepartmental Centre for Service Robotics (PIC4SeR), I-10129 Turin, Italy
[3] Politecnico di Torino, Department of Electronics and Telecommunications, I-10129 Turin, Italy
[4] SmartData@PoliTo Big Data and Data Science Laboratory, I-10129 Turin, Italy
Keywords
agricultural field machines; stereo vision; deep learning; autonomous navigation; edge AI; transfer learning; robot
DOI
10.3390/machines8020027
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology]
Discipline Codes
0808; 0809
Abstract
With the advent of agriculture 3.0 and 4.0, in view of the efficient and sustainable use of resources, researchers are increasingly focusing on the development of innovative smart farming and precision agriculture technologies by introducing automation and robotics into agricultural processes. Autonomous agricultural field machines have been gaining significant attention from farmers and industry as a means to reduce costs, human workload, and required resources. Nevertheless, achieving sufficient autonomous navigation capabilities requires the simultaneous cooperation of different processes: localization, mapping, and path planning are just some of the steps that aim to provide the machine with the right set of skills to operate in semi-structured and unstructured environments. In this context, this study presents a low-cost, power-efficient local motion planner for autonomous navigation in vineyards based solely on an RGB-D camera, low-range hardware, and a dual-layer control algorithm. The first algorithm uses the disparity map and its depth representation to generate a proportional control command for the robotic platform. Concurrently, a second back-up algorithm, based on representation learning and resilient to illumination variations, can take control of the machine in case of a momentary failure of the first block by generating high-level motion primitives. Moreover, owing to the dual nature of the system, after the deep learning model is trained on an initial dataset, the strict synergy between the two algorithms makes it possible to exploit new, automatically labeled data coming from the field to extend the existing model's knowledge. The machine learning algorithm has been trained and tested, using transfer learning, with images acquired during different field surveys in northern Italy, and then optimized for on-device inference with model pruning and quantization. Finally, the overall system has been validated with a customized robot platform in a relevant vineyard environment.
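As a loose illustration of the first control layer described in the abstract, the Python sketch below turns a depth map into a proportional steering command by locating the free corridor between the vine rows; the depth threshold, proportional gain, and synthetic test image are illustrative assumptions, not the authors' implementation.

# Minimal sketch (not the authors' code): depth-based proportional steering.
# Assumes a depth image in metres (e.g., from an RGB-D camera) and
# illustrative values for the free-space threshold and control gain.
import numpy as np

def steering_from_depth(depth, max_range_m=5.0, k_p=1.5):
    """Return a normalized steering command in [-1, 1].

    Pixels farther than max_range_m are treated as free space; the
    horizontal offset of the free-space centroid from the image centre
    drives a proportional controller.
    """
    h, w = depth.shape
    free = depth > max_range_m            # boolean mask of "far" (traversable) pixels
    if not free.any():
        return 0.0                        # no gap detected: the back-up layer would act
    cols = np.nonzero(free)[1]            # column indices of free-space pixels
    centroid_x = cols.mean()
    error = (centroid_x - w / 2.0) / (w / 2.0)   # normalized lateral error
    return float(np.clip(k_p * error, -1.0, 1.0))

# Example with a synthetic depth map: a distant gap on the right half of the frame.
depth = np.full((480, 640), 2.0)
depth[:, 400:560] = 8.0
print(steering_from_depth(depth))         # positive value -> steer right toward the gap

In the dual-layer scheme described in the abstract, the learning-based back-up classifier would take over whenever such a depth-based gap estimate is unavailable or unreliable.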
Pages: 16