Intelligent electronic components waste detection in complex occlusion environments based on the focusing dynamic channel-you only look once model

Cited: 0
|
Authors
Liu, Huilin [1 ]
Jiang, Yu [1 ]
Zhang, Wenkang [2 ]
Li, Yan [4 ]
Ma, Wanqi [3 ]
Affiliations
[1] Anhui Univ Sci & Technol, Sch Comp Sci & Engn, Huainan 232001, Peoples R China
[2] Anhui Agr Univ, Coll Engn, Hefei 230036, Peoples R China
[3] Jiangnan Univ, Sch Business, Wuxi 214122, Peoples R China
[4] Macquarie Univ, Sch Comp, Sydney, Australia
Funding
US National Science Foundation;
关键词
Object detection; Waste detection; Waste electrical and electronic equipment; Computer vision;
D O I
10.1016/j.jclepro.2024.144425
Chinese Library Classification
X [Environmental Science, Safety Science];
Discipline Code
08 ; 0830 ;
Abstract
The exponential increase in electronic waste has become a major worldwide issue, driven by rapid technological advances and the proliferation of the consumer electronics market. Owing to reduced product lifespans, recycling e-waste such as printed circuit boards (PCBs) has become increasingly important. Discarded PCBs typically contain a large number of high-value materials, hazardous substances, and electronic components, which inevitably complicates the recycling process. In electronic waste recycling, the high degree of occlusion and the complex overlapping relationships between electronic components frequently render traditional detection methods ineffective at separating and identifying the components. This often results in misdetections and missed detections, which significantly reduce overall detection accuracy and reliability. In this study, we construct a multi-label, multi-scale, and multi-occlusion hybrid image dataset of electronic components called OEWaste (Occlusion Electronic Waste), which characterizes complex occlusion features at different occlusion levels and realistically reproduces the visual dynamics of electronic waste recycling scenarios. Building on this dataset, we propose FDC-YOLO, a model for detecting occluded electronic components. By employing a self-developed network that enhances feature propagation through targeted focus and contextual diffusion, we improve the model's ability to interpret occluded electronic components. This study marks the first application, in the context of occlusion detection for electronic components, of the Dynamic Head (DyHead) module, which enhances multi-scale feature representation, and the Channel Prior Convolutional Attention (CPCA) module, which improves feature prioritization by focusing on channel-wise dependencies.
The introduction of these modules, combined with the "scale-space-task" triple perception mechanism, significantly boosts detection performance in occluded environments, achieving a mAP of 93.8%, a 3.7% improvement over traditional methods without these enhancements.
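The abstract's description of CPCA centers on weighting feature channels by a learned channel prior before spatial processing. As a rough illustration only (not the authors' implementation, which additionally uses depth-wise convolutions and spatial attention), a minimal squeeze-and-gate channel-attention step over a (C, H, W) feature map could be sketched in NumPy as:

```python
import numpy as np

def channel_attention(features: np.ndarray) -> np.ndarray:
    """Rescale each channel of a (C, H, W) feature map by a channel prior.

    Hypothetical sketch: global average pooling produces one descriptor per
    channel, a sigmoid gate turns descriptors into weights in (0, 1), and
    each channel map is multiplied by its weight.
    """
    # Squeeze: one scalar descriptor per channel via global average pooling
    pooled = features.mean(axis=(1, 2))            # shape (C,)
    # Gate: sigmoid maps descriptors to channel weights in (0, 1)
    weights = 1.0 / (1.0 + np.exp(-pooled))        # shape (C,)
    # Excite: broadcast each weight over its channel's H x W map
    return features * weights[:, None, None]       # shape (C, H, W)

fmap = np.ones((2, 4, 4))       # toy 2-channel feature map
out = channel_attention(fmap)
```

Channels with stronger average activation receive weights closer to 1 and are passed through nearly unchanged, while weak channels are suppressed, which is the channel-prioritization effect the module aims for.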
Citations
Saved
Pages: 13
Related Papers
2 records
  • [1] A lightweight model based on you only look once for pomegranate before fruit thinning in complex environment
    Du, Yurong
    Han, Youpan
    Su, Yaoheng
    Wang, Jiuxin
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2024, 137
  • [2] Teacher-Student Model Using Grounding DINO and You Only Look Once for Multi-Sensor-Based Object Detection
    Son, Jinhwan
    Jung, Heechul
    APPLIED SCIENCES-BASEL, 2024, 14 (06):