Feature matching driven background generalization neural networks for surface defect segmentation

Cited by: 3
Authors
Chen, Biao [1 ]
Niu, Tongzhi [1 ]
Zhang, Ruoqi [3 ]
Zhang, Hang [1 ]
Lin, Yuchen [1 ]
Li, Bin [1 ,2 ]
Affiliations
[1] Huazhong Univ Sci & Technol, Sch Mech Sci & Engn, Hubei 430074, Peoples R China
[2] Wuhan Intelligent Equipment Ind Inst Co Ltd, 8 Ligou South Rd, Wuhan 430074, Hubei, Peoples R China
[3] Huazhong Univ Sci & Technol, China EU Inst Clean & Renewable Energy, Wuhan, Hubei, Peoples R China
Keywords
Surface defect detection; Neural networks; Feature matching; Background generalization;
DOI
10.1016/j.knosys.2024.111451
CLC Number
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In this paper, we address the challenge of background generalization in surface defect segmentation for surface-mounted device chips, focusing in particular on template-sample comparison algorithms. These algorithms often struggle when the background features of templates and samples exhibit spatial variations, including translation and rotation. The inherent spatial equivariance of CNN-based algorithms complicates the elimination of noise attributable to these spatial variations. To address this issue, we developed the Background Generalization Networks (BGNet). BGNet effectively reduces spatial-variation noise by subtracting the background features of samples and templates based on their matching relationships. It first extracts dense features rich in global and interactive information via a Siamese network, then applies the self- and cross-attention mechanisms of Transformers. The matching score is calculated from feature similarity, with matching relations established using the Mutual Nearest Neighbour (MNN) algorithm. These relations enable us to mitigate the noise caused by spatial variations and to implement a multiscale fusion of detailed and semantic information, leading to more accurate segmentation results. Our experiments on OCDs and PCBs show that BGNet surpasses existing state-of-the-art methods. The code for this work is available on GitHub: https://github.com/Max-Chenb/BG-Net.
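The Mutual Nearest Neighbour matching step mentioned in the abstract can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; the function name and the choice of cosine similarity are assumptions for illustration only. Two dense feature sets are scored against each other, and a pair (i, j) is kept only when each feature is the other's nearest neighbour:

```python
import numpy as np

def mutual_nearest_neighbours(feat_a, feat_b):
    """Return index pairs (i, j) where feature i of A and feature j of B
    are each other's nearest neighbour under cosine similarity."""
    # L2-normalise rows so the dot product equals cosine similarity
    a = feat_a / np.linalg.norm(feat_a, axis=1, keepdims=True)
    b = feat_b / np.linalg.norm(feat_b, axis=1, keepdims=True)
    sim = a @ b.T                 # (N, M) similarity matrix
    nn_a = sim.argmax(axis=1)     # best match in B for each feature of A
    nn_b = sim.argmax(axis=0)     # best match in A for each feature of B
    # keep only mutual matches: i -> j and j -> i
    return [(i, int(j)) for i, j in enumerate(nn_a) if nn_b[j] == i]
```

In a template-sample comparison setting, `feat_a` and `feat_b` would be the dense features of the template and the sample; the resulting index pairs are the matching relations along which background features can be subtracted despite translation or rotation.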
Pages: 12