Real-Time Classification of Chicken Parts in the Packaging Process Using Object Detection Models Based on Deep Learning

Times Cited: 0
Authors
Sahin, Dilruba [1 ]
Torkul, Orhan [1 ]
Sisci, Merve [2 ,3 ]
Diren, Deniz Demircioglu [3 ]
Yilmaz, Recep [4 ]
Kibar, Alpaslan [5 ]
Affiliations
[1] Sakarya Univ, Ind Engn Dept, TR-54050 Sakarya, Turkiye
[2] Kutahya Dumlupinar Univ, Ind Engn Dept, TR-43300 Kutahya, Turkiye
[3] Sakarya Univ, Dept Informat Syst & Technol, TR-54050 Sakarya, Turkiye
[4] Sakarya Univ, Business Sch, TR-54050 Sakarya, Turkiye
[5] Sakarya Univ, Dept Management Informat Syst, TR-54050 Sakarya, Turkiye
Keywords
chicken parts; deep learning; image processing; object detection; reducing waste and costs; RT-DETR; YOLOv8; FRAMEWORK; HEALTH;
DOI
10.3390/pr13041005
Chinese Library Classification (CLC): TQ [Chemical Industry]
Subject Classification Code: 0817
Abstract
Chicken meat plays an important role in the healthy diets of many people and has a large global trade volume. In the chicken meat sector, traditional methods are still used in some production processes. Traditional chicken part sorting is often manual and time-consuming, especially during packaging. This study aimed to identify and classify chicken parts at the input of the packaging process with the highest possible accuracy and speed. For this purpose, deep-learning-based object detection models were used. An image dataset was developed for the classification models by collecting image data of different chicken parts, such as legs, breasts, shanks, wings, and drumsticks. The models were trained with variants of the You Only Look Once version 8 (YOLOv8) algorithm and variants of the Real-Time Detection Transformer (RT-DETR) algorithm. They were then evaluated and compared on precision, recall, F1-score, mean average precision (mAP), and mean inference time per frame (MITF). Based on the results, the YOLOv8s model outperformed the models developed with the other YOLOv8 versions and the RT-DETR versions, achieving values of 0.9969, 0.9950, and 0.9807 for F1-score, mAP@0.5, and mAP@0.5:0.95, respectively. With an MITF of 10.3 ms/image, it proved suitable for real-time applications.
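The evaluation metrics named in the abstract are related by a few simple formulas. The minimal sketch below (function names are ours, not from the paper) shows the box-overlap measure (IoU) that defines the mAP@0.5 and mAP@0.5:0.95 thresholds, the F1-score as the harmonic mean of precision and recall, and how an MITF in milliseconds converts to a throughput in frames per second:

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

def fps_from_mitf(mitf_ms):
    """Convert mean inference time per frame (in ms) to frames per second."""
    return 1000.0 / mitf_ms
```

A predicted box counts as a true positive at mAP@0.5 only when `iou(pred, truth) >= 0.5`, while mAP@0.5:0.95 averages over IoU thresholds from 0.5 to 0.95; the reported MITF of 10.3 ms/image corresponds to roughly 97 frames per second, consistent with the paper's real-time claim.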
Pages: 21