Navigation path extraction for greenhouse cucumber-picking robots using the prediction-point Hough transform

Cited by: 86
Authors
Chen, Jiqing [1 ,2 ]
Qiang, Hu [1 ]
Wu, Jiahua [1 ]
Xu, Guanwen [1 ]
Wang, Zhikui [1 ]
Affiliations
[1] Guangxi Univ, Coll Mechatron Engn, Nanning 530007, Peoples R China
[2] Guangxi Mfg Syst & Adv Mfg Technol Key Lab, Nanning 530007, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Machine vision; Autonomous navigation; Agricultural robot; Prediction-point Hough transform; Grayscale factor; CROP-ROW DETECTION; AUTOMATIC DETECTION; MAIZE FIELDS; SYSTEM; ALGORITHM; DESIGN; IMAGES;
DOI
10.1016/j.compag.2020.105911
Chinese Library Classification (CLC)
S [Agricultural Sciences];
Subject classification code
09;
Abstract
Accurate extraction of the navigation path is essential for the autonomous navigation of agricultural robots. Based on a machine-vision system, this paper proposes a new algorithm for fitting the navigation path of greenhouse cucumber-picking robots. To address the heavy computational load of the traditional Hough transform and the low precision of the least-squares method, a prediction-point Hough transform is proposed to extract the navigation path. The prediction-point Hough transform consists of four steps: region-of-interest extraction, image segmentation, navigation-point extraction, and navigation-path fitting. Only the final 160 pixel rows of the image captured by the camera are taken as the region of interest. In the image-segmentation stage, a new grayscale factor is proposed. For navigation-path extraction, a regression equation is used to determine the prediction point, and the proposed prediction-point Hough transform is then used to fit the navigation path. The experimental results show that the proposed grayscale factor segments cucumber plants from soil well, with a segmentation effect better than that of the 2G-B-R and G-B grayscale factors. The proposed prediction-point Hough transform fits navigation paths with an average error of less than 0.5 degrees, which is 10.25 degrees lower than the average error of the least-squares method, and its computation time is 17.92 ms, 35.20 ms less than that of the traditional Hough transform.
Pages: 12
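
The abstract above outlines a four-step vision pipeline (region-of-interest extraction, segmentation, navigation-point extraction, path fitting). The Python/OpenCV sketch below illustrates that pipeline under explicit assumptions: the paper's own grayscale factor, prediction-point regression, and modified Hough accumulator are not described in the abstract, so the 2G-B-R baseline factor (which the abstract names as a comparison), Otsu thresholding, per-row centroids, and OpenCV's standard HoughLines are used as stand-ins.

```python
# Illustrative sketch only; it does not reproduce the paper's prediction-point
# Hough transform, just the overall pipeline described in the abstract.
import cv2
import numpy as np

ROI_ROWS = 160  # abstract: only the final 160 pixel rows of the frame are used


def extract_navigation_line(bgr_image: np.ndarray):
    """Return (rho, theta) of a candidate navigation line in the ROI, or None."""
    # 1. Region of interest: keep only the bottom 160 rows of the frame.
    roi = bgr_image[-ROI_ROWS:, :, :]

    # 2. Grayscale factor: 2G - B - R, a baseline factor named in the abstract
    #    (the paper proposes its own factor, which is not given here).
    roi16 = roi.astype(np.int16)
    b, g, r = roi16[..., 0], roi16[..., 1], roi16[..., 2]
    gray = np.clip(2 * g - b - r, 0, 255).astype(np.uint8)

    # 3. Segmentation: Otsu thresholding to separate plant pixels from soil (assumed).
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # 4. Navigation points: one centroid of plant pixels per image row (assumed choice).
    points = np.zeros_like(mask)
    for y in range(mask.shape[0]):
        xs = np.flatnonzero(mask[y])
        if xs.size:
            points[y, int(xs.mean())] = 255

    # 5. Path fitting: standard Hough transform over the navigation points,
    #    standing in for the paper's prediction-point Hough transform.
    lines = cv2.HoughLines(points, 1, np.pi / 180, 40)  # accumulator threshold = 40
    if lines is None:
        return None
    rho, theta = lines[0][0]
    return float(rho), float(theta)
```

As a hypothetical usage example, `extract_navigation_line(cv2.imread("frame.jpg"))` would return the (rho, theta) parameters of the dominant plant-row line in the bottom strip of the frame; a full implementation of the paper's method would replace steps 4 and 5 with the regression-based prediction point and the modified Hough accumulator it describes.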