Efficient detection of eyes on potato tubers using deep-learning for robotic high-throughput sampling

Times Cited: 1
Authors
Divyanth, L. G. [1 ]
Khanal, Salik Ram [1 ]
Paudel, Achyut [1 ]
Mattupalli, Chakradhar [2 ]
Karkee, Manoj [1 ]
Affiliations
[1] Washington State Univ, Ctr Precis & Automated Agr Syst, Dept Biol Syst Engn, Prosser, WA 99350 USA
[2] Washington State Univ, Mt Vernon Northwestern Washington Res & Extens Ctr, Dept Plant Pathol, Mt Vernon, WA USA
Keywords
tissue sampling robot; machine vision; molecular diagnostics; potato pathogens; FTA card; YOLO; VISION;
DOI
10.3389/fpls.2024.1512632
CLC Classification
Q94 [Botany];
Subject Classification Code
071001;
Abstract
Molecular-based detection of pathogens from potato tubers holds promise, but the initial sample extraction process is labor-intensive. Developing a robotic tuber sampling system, equipped with a fast and precise machine vision technique to identify optimal sampling locations on a potato tuber, offers a viable solution. However, detecting sampling locations such as eyes and stolon scars is challenging due to variability in their appearance, size, and shape, along with soil adhering to the tubers. In this study, we addressed these challenges by evaluating various deep-learning-based object detectors, encompassing the You Only Look Once (YOLO) variants YOLOv5, YOLOv6, YOLOv7, YOLOv8, YOLOv9, YOLOv10, and YOLO11, for detecting eyes and stolon scars across a range of diverse potato cultivars. A robust image dataset obtained from tubers of five potato cultivars (three russet-skinned, one red-skinned, and one purple-skinned) was developed as a benchmark for detection of these sampling locations. The mean average precision at an intersection over union threshold of 0.5 (mAP@0.5) ranged from 0.832 and 0.854 with YOLOv5n to 0.903 and 0.914 with YOLOv10l. Among all the tested models, YOLOv10m showed the optimal trade-off between detection accuracy (mAP@0.5 of 0.911) and inference time (92 ms), along with satisfactory generalization performance when cross-validated among the cultivars used in this study. The model benchmarking and inferences of this study provide insights for advancing the development of a robotic potato tuber sampling device.
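The abstract reports mAP@0.5 as the headline metric for YOLO detectors trained on two sampling-location classes (eyes and stolon scars). As a minimal, hypothetical sketch of how such an evaluation and inference step might look with the common ultralytics YOLO toolkit (the weight file, dataset YAML, class names, and image path below are assumptions for illustration, not artifacts released with the paper):

```python
# Hypothetical evaluation/inference sketch using the ultralytics package.
# "yolov10m.pt", "potato_tubers.yaml", and "tuber_example.jpg" are placeholders.
from ultralytics import YOLO

# Load a medium YOLOv10 detector (the variant the abstract highlights).
model = YOLO("yolov10m.pt")

# Validate on a YOLO-format dataset YAML with two classes
# (e.g., "eye" and "stolon_scar"); mAP@0.5 is exposed as metrics.box.map50.
metrics = model.val(data="potato_tubers.yaml")
print(f"mAP@0.5: {metrics.box.map50:.3f}")

# Run inference on a single tuber image to obtain candidate sampling locations.
results = model.predict("tuber_example.jpg", conf=0.25)
for box in results[0].boxes:
    cls_name = results[0].names[int(box.cls)]
    x1, y1, x2, y2 = box.xyxy[0].tolist()
    print(cls_name, (round(x1), round(y1), round(x2), round(y2)))
```

In a robotic sampling pipeline, the detected box centers would then be mapped to tuber surface coordinates for the sampling end-effector; that step is outside the scope of this sketch.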
Pages: 13