Design, integration, and evaluation of a robotic peach packaging system based on deep learning

Cited: 7
Authors
Wang, Qingyu [1,2,3]
Wu, Dihua [1,2,3]
Sun, Zhizhong [4]
Zhou, Mingchuan [1,2,3]
Cui, Di [1,2,3]
Xie, Lijuan [1,2,3]
Hu, Dong [5]
Rao, Xiuqin [1,2,3]
Jiang, Huanyu [1,2,3]
Ying, Yibin [1,2,3,6]
Affiliations
[1] Zhejiang Univ, Coll Biosyst Engn & Food Sci, Hangzhou 310058, Zhejiang, Peoples R China
[2] Zhejiang Univ, Key Lab Intelligent Equipment & Robot Agr Zhejiang, Hangzhou 310058, Peoples R China
[3] Minist Agr & Rural Affairs, Key Lab Site Proc Equipment Agr Prod, Beijing, Peoples R China
[4] Zhejiang A&F Univ, Coll Math & Comp Sci, Hangzhou 311300, Peoples R China
[5] Zhejiang A&F Univ, Coll Opt Mech & Elect Engn, Hangzhou 311300, Peoples R China
[6] Zhejiang Univ, Coll Biosyst Engn & Food Sci, 866 Yuhangtang Rd, Hangzhou 310058, Zhejiang, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Robotic fruit packaging; Deep learning; YOLO v5; Hand-eye calibration; System integration;
DOI
10.1016/j.compag.2023.108013
Chinese Library Classification (CLC)
S [Agricultural Sciences];
Discipline Classification Code
09;
Abstract
Fruit packaging is one of the most time-consuming and labor-intensive tasks in postharvest commercialization. With the aging of the global population, it is necessary to apply robots to replace manual manipulation. However, robotic packaging of fragile fruit is more complicated and difficult than other postharvest processes such as quality detection and grading. In this study, to achieve better positioning accuracy and grasping robustness, we developed a prototype peach packaging robot based on deep learning. First, a dataset for peach object detection was built, and YOLO v5 models of different widths and depths were trained end-to-end on this dataset. Considering the requirements for both accuracy and real-time performance in fruit postharvest processing, YOLO v5-S was adopted as the peach detection model for robotic manipulation; it achieves mAP@0.5 = 0.996 on the validation set and runs at 142.86 fps on an RTX 3060. Next, the "Eye-on-Base" hand-eye calibration method was used to solve the coordinate transformation matrix from the camera coordinate system to the robot base coordinate system. A landmark positioning experiment showed that the average positioning errors along the X and Y axes were 4.87 mm and 5.00 mm, respectively. The average positioning error along the Z direction was 18.47 mm, caused mainly by the depth perception error of the RGB-D camera. In the grasping experiment, the influence of depth perception accuracy was explored, and the grasping success rates for small, medium, and large peaches were 100 %, 97 %, and 97 %, respectively. The entire pipeline took 252.81 ms on average for depth perception, object detection, coordinate transformation, and grasping path planning. Finally, early-stage bruising of the peaches was also evaluated using SFDI technology. Overall, this research provides a feasible and reliable scheme for a fruit packaging robot, with the potential to be deployed in postharvest commercialization.
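To make the positioning step described in the abstract concrete, below is a minimal Python sketch (not the authors' code) of how a YOLO v5 detection, an RGB-D depth reading, and an Eye-on-Base hand-eye calibration result combine into a grasp point in the robot base frame. The camera intrinsics and the calibration matrix T_base_cam are hypothetical placeholder values, not figures reported in the paper.

# Minimal sketch of the camera-to-robot coordinate transformation described in
# the abstract: a detection gives a pixel centre (u, v), the RGB-D camera gives
# a depth value, and the Eye-on-Base calibration gives a fixed transform from
# the camera frame to the robot base frame. All numeric values are placeholders.
import numpy as np

# Assumed pinhole intrinsics of the RGB-D camera (placeholder values).
fx, fy, cx, cy = 605.0, 605.0, 320.0, 240.0

# Assumed Eye-on-Base calibration result: 4x4 homogeneous transform mapping
# points expressed in the camera frame into the robot base frame.
T_base_cam = np.array([
    [ 0.0, -1.0,  0.0, 0.40],
    [-1.0,  0.0,  0.0, 0.10],
    [ 0.0,  0.0, -1.0, 0.80],
    [ 0.0,  0.0,  0.0, 1.00],
])

def pixel_to_base(u: float, v: float, depth_m: float) -> np.ndarray:
    """Back-project a detected pixel with its depth into the robot base frame."""
    # Pinhole back-projection: pixel + depth -> 3D point in the camera frame.
    x_cam = (u - cx) * depth_m / fx
    y_cam = (v - cy) * depth_m / fy
    p_cam = np.array([x_cam, y_cam, depth_m, 1.0])
    # Rigid transform into the robot base frame using the hand-eye result.
    return (T_base_cam @ p_cam)[:3]

# Example: centre of a YOLO v5 bounding box at pixel (350, 260), depth 0.62 m.
print(pixel_to_base(350.0, 260.0, 0.62))

The resulting 3D point would then feed the grasping path planner; depth error propagates directly into the Z coordinate, which is consistent with the larger Z-axis positioning error reported in the abstract.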
Pages: 15