Real-Time Grading of Defect Apples Using Semantic Segmentation Combination with a Pruned YOLO V4 Network

Cited: 30
Authors
Liang, Xiaoting [1 ,2 ,3 ]
Jia, Xueying [1 ,2 ,3 ]
Huang, Wenqian [1 ,3 ]
He, Xin [1 ,3 ]
Li, Lianjie [1 ,3 ]
Fan, Shuxiang [1 ,3 ]
Li, Jiangbo [1 ,3 ]
Zhao, Chunjiang [1 ,3 ]
Zhang, Chi [1 ,3 ]
Affiliations
[1] Beijing Acad Agr & Forestry Sci, Intelligent Equipment Res Ctr, Beijing 100097, Peoples R China
[2] Shanghai Ocean Univ, Coll Informat Technol, Shanghai 201306, Peoples R China
[3] Natl Res Ctr Intelligent Equipment Agr, Beijing 100097, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
defective apples; apple grading; deep learning; object detection; semantic segmentation; IDENTIFICATION; CLASSIFICATION; FRUITS;
DOI
10.3390/foods11193150
Chinese Library Classification (CLC)
TS2 [Food Industry];
Discipline Classification Code
0832;
Abstract
At present, apple grading systems usually convey apples on belts or rollers, which often bruises low-hardness or high-value fruit and causes economic losses. To realize real-time detection and grading of high-quality apples, separate fruit trays were designed to convey the apples and prevent bruising during image acquisition. A semantic segmentation method based on the BiSeNet V2 deep learning network was proposed to segment the defective regions of defective apples. For apple defect detection, BiSeNet V2 achieved a slightly better mean pixel accuracy (MPA) of 99.66%, which was 0.14 and 0.19 percentage points higher than DANet and U-Net, respectively. A model pruning method was used to optimize the structure of the YOLO V4 network, and the pruned YOLO V4 network further improved the detection accuracy of defect regions in apple images. A surface mapping method between the defect area in the apple image and the actual defect area was then proposed to calculate the defect area accurately. Finally, apples on the separate fruit trays were sorted according to the number and area of defects in the apple images. Experimental results showed that the average accuracy of apple classification was 92.42% and the F1 score was 94.31. The method has great application potential in commercial separate-fruit-tray grading and sorting machines.
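As a rough illustration of the grading rule summarized in the abstract, the following Python sketch classifies an apple from the number and mapped surface area of its detected defect regions. The grade names, thresholds, mapping factor, and all identifiers are illustrative assumptions for this sketch, not values or code from the paper.

# Minimal sketch (not the authors' code) of sorting apples by the number and
# area of defects, after the image-to-surface area mapping step.
from dataclasses import dataclass
from typing import List

@dataclass
class DefectRegion:
    pixel_area: float       # defect area in image pixels (from the segmentation mask)
    mapping_factor: float   # assumed pixel-to-surface scale (mm^2 per pixel) from surface mapping

    @property
    def surface_area_mm2(self) -> float:
        # Convert image-plane area to an estimated real defect area on the apple surface.
        return self.pixel_area * self.mapping_factor

def grade_apple(defects: List[DefectRegion],
                max_defect_count: int = 2,
                max_total_area_mm2: float = 100.0) -> str:
    """Assign a grade from the defect count and total mapped defect area.

    The two-threshold rule mirrors the paper's description of sorting by the
    number and area of defects; the actual grades and thresholds used by the
    authors are not given in the abstract.
    """
    total_area = sum(d.surface_area_mm2 for d in defects)
    if len(defects) == 0:
        return "extra-class"   # no visible defects
    if len(defects) <= max_defect_count and total_area <= max_total_area_mm2:
        return "first-class"   # few, small defects
    return "reject"            # too many or too large defects

# Example: two small defects detected on one apple -> "first-class"
apple_defects = [DefectRegion(pixel_area=850, mapping_factor=0.04),
                 DefectRegion(pixel_area=400, mapping_factor=0.05)]
print(grade_apple(apple_defects))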
Pages: 17