A self-supervised learning framework based on masked autoencoder for complex wafer bin map classification

Cited by: 2
Authors
Wang, Yi [1 ]
Ni, Dong [1 ]
Huang, Zhenyu [2 ]
Chen, Puyang [2 ]
Affiliations
[1] Zhejiang Univ, Coll Control Sci & Engn, Hangzhou 310027, Peoples R China
[2] Intel Corp, Dalian 116630, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Self-supervised learning; Masked autoencoder; Complex wafer bin map; Automatic defect classification; Semiconductor manufacturing; DEFECT PATTERNS; DEEP; NETWORKS;
DOI
10.1016/j.eswa.2024.123601
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Wafer bin map (WBM) automatic classification is one of the critical challenges in semiconductor intelligent manufacturing. Many deep learning-based classification models have performed well in WBM classification, but all require a large amount of labeled data for training. Since real-world WBMs are highly complex and can be labeled correctly only by seasoned engineers, this requirement undermines the practical value of those methods. Several self-supervised learning methods have recently been proposed for WBMs to improve classification performance. However, they still require much labeled data for fine-tuning and are only suited to binary WBMs with a single gross failure area. To address these limitations, this study introduces a self-supervised framework based on the masked autoencoder (MAE) for complex WBMs with mixed bin signatures and multiple gross failure area patterns. A patchMC encoder is proposed to improve MAE's representation ability for complex WBMs with mixed bin signatures. Moreover, the pre-trained MAE encoder, combined with a multilabel classifier fine-tuned on labeled WBMs, enables few-shot classification of complex WBMs with multiple gross failure areas. Experimental validation of the proposed method is performed on a real-world complex WBM dataset from Intel Corporation. The results demonstrate that the proposed method makes good use of unlabeled WBMs and reduces the demand for labeled data to a few-shot level while guaranteeing a classification accuracy of more than 90%. Comparisons with other self-supervised learning methods show that MAE outperforms existing self-supervised approaches on WBM data.
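To make the two-stage recipe in the abstract concrete, the sketch below shows a generic MAE-style pretraining stage on unlabeled wafer bin maps followed by few-shot fine-tuning of the pre-trained encoder with a multi-label head. This is not the authors' implementation: the image size, patch size, embedding dimensions, class count, and the plain convolutional patch embedding (standing in for the paper's patchMC encoder) are illustrative assumptions.

```python
# Minimal sketch, assuming a simplified MAE and a plain conv patch embedding.
import torch
import torch.nn as nn


class SimpleWBMMAE(nn.Module):
    def __init__(self, img_size=64, patch_size=8, in_chans=1, embed_dim=128,
                 depth=4, dec_dim=64, dec_depth=2, heads=4, mask_ratio=0.75):
        super().__init__()
        self.patch_size = patch_size
        self.num_patches = (img_size // patch_size) ** 2
        self.mask_ratio = mask_ratio
        patch_dim = in_chans * patch_size ** 2
        # Patch embedding (assumption: a plain conv, not the paper's patchMC encoder)
        self.patch_embed = nn.Conv2d(in_chans, embed_dim, patch_size, patch_size)
        self.pos_embed = nn.Parameter(torch.zeros(1, self.num_patches, embed_dim))
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(embed_dim, heads, 4 * embed_dim,
                                       batch_first=True), depth)
        # Lightweight decoder reconstructs the pixels of masked patches
        self.dec_embed = nn.Linear(embed_dim, dec_dim)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dec_dim))
        self.dec_pos_embed = nn.Parameter(torch.zeros(1, self.num_patches, dec_dim))
        self.decoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dec_dim, heads, 4 * dec_dim,
                                       batch_first=True), dec_depth)
        self.dec_head = nn.Linear(dec_dim, patch_dim)

    def patchify(self, imgs):
        # (B, C, H, W) -> (B, num_patches, C * p * p) reconstruction targets
        B, C, H, W = imgs.shape
        p = self.patch_size
        x = imgs.reshape(B, C, H // p, p, W // p, p)
        return x.permute(0, 2, 4, 1, 3, 5).reshape(B, -1, C * p * p)

    def forward(self, imgs):
        B = imgs.shape[0]
        tokens = self.patch_embed(imgs).flatten(2).transpose(1, 2) + self.pos_embed
        # Randomly keep a subset of patch tokens; the rest are masked out
        n_keep = int(self.num_patches * (1 - self.mask_ratio))
        keep_idx = torch.argsort(
            torch.rand(B, self.num_patches, device=imgs.device), dim=1)[:, :n_keep]
        visible = torch.gather(
            tokens, 1, keep_idx.unsqueeze(-1).expand(-1, -1, tokens.shape[-1]))
        mask = torch.ones(B, self.num_patches, device=imgs.device)
        mask.scatter_(1, keep_idx, 0.0)  # 1 = masked, 0 = visible
        latent = self.encoder(visible)
        # Re-insert encoded visible tokens among mask tokens for the decoder
        dec_tokens = self.mask_token.expand(B, self.num_patches, -1).clone()
        dec_tokens.scatter_(
            1, keep_idx.unsqueeze(-1).expand(-1, -1, dec_tokens.shape[-1]),
            self.dec_embed(latent))
        pred = self.dec_head(self.decoder(dec_tokens + self.dec_pos_embed))
        per_patch_mse = ((pred - self.patchify(imgs)) ** 2).mean(-1)
        return (per_patch_mse * mask).sum() / mask.sum()  # loss on masked patches only


class FewShotMultiLabelHead(nn.Module):
    """Pre-trained encoder plus a linear multi-label head for few-shot fine-tuning."""
    def __init__(self, mae: SimpleWBMMAE, num_defect_classes=8):  # class count assumed
        super().__init__()
        self.patch_embed, self.pos_embed, self.encoder = \
            mae.patch_embed, mae.pos_embed, mae.encoder
        self.head = nn.Linear(mae.pos_embed.shape[-1], num_defect_classes)

    def forward(self, imgs):
        tokens = self.patch_embed(imgs).flatten(2).transpose(1, 2) + self.pos_embed
        return self.head(self.encoder(tokens).mean(dim=1))  # multi-label logits
```

In this sketch, pretraining minimizes the masked-patch reconstruction loss over unlabeled WBMs; fine-tuning would wrap FewShotMultiLabelHead with nn.BCEWithLogitsLoss on the few labeled maps, since each complex WBM may exhibit several gross failure area patterns at once.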
Pages: 11
Related Papers
50 records in total
  • [31] Mixed Autoencoder for Self-supervised Visual Representation Learning
    Chen, Kai
    Liu, Zhili
    Hong, Lanqing
    Xu, Hang
    Li, Zhenguo
    Yeung, Dit-Yan
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 22742 - 22751
  • [32] Self-supervised Discriminative Representation Learning by Fuzzy Autoencoder
    Yang, Wenlu
    Wang, Hongjun
    Zhang, Yinghui
    Liu, Zehao
    Li, Tianrui
    ACM TRANSACTIONS ON INTELLIGENT SYSTEMS AND TECHNOLOGY, 2023, 14 (01)
  • [33] Dynamic Clustering for Wafer Map Patterns Using Self-Supervised Learning on Convolutional Autoencoders
    Kim, Donghwa
    Kang, Pilsung
    IEEE TRANSACTIONS ON SEMICONDUCTOR MANUFACTURING, 2021, 34 (04) : 444 - 454
  • [34] Masked Discrimination for Self-supervised Learning on Point Clouds
    Liu, Haotian
    Cai, Mu
    Lee, Yong Jae
    COMPUTER VISION - ECCV 2022, PT II, 2022, 13662 : 657 - 675
  • [35] SelfSwapper: Self-supervised Face Swapping via Shape Agnostic Masked AutoEncoder
    Lee, Jaeseong
    Hyung, Junha
    Jung, Sohyun
    Choo, Jaegul
    COMPUTER VISION - ECCV 2024, PT LV, 2025, 15113 : 383 - 400
  • [36] Masked Autoencoders for Point Cloud Self-supervised Learning
    Pang, Yatian
    Wang, Wenxiao
    Tay, Francis E. H.
    Liu, Wei
    Tian, Yonghong
    Yuan, Li
    COMPUTER VISION - ECCV 2022, PT II, 2022, 13662 : 604 - 621
  • [37] Masked Autoencoder for Self-Supervised Pre-training on Lidar Point Clouds
    Hess, Georg
    Jaxing, Johan
    Svensson, Elias
    Hagerman, David
    Petersson, Christoffer
    Svensson, Lennart
    2023 IEEE/CVF WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION WORKSHOPS (WACVW), 2023, : 350 - 359
  • [38] Masked autoencoder: influence of self-supervised pretraining on object segmentation in industrial images
    Witte, Anja
    Lange, Sascha
    Lins, Christian
    INDUSTRIAL ARTIFICIAL INTELLIGENCE, 2 (1)
  • [39] PointUR-RL: Unified Self-Supervised Learning Method Based on Variable Masked Autoencoder for Point Cloud Reconstruction and Representation Learning
    Li, Kang
    Zhu, Qiuquan
    Wang, Haoyu
    Wang, Shibo
    Tian, He
    Zhou, Ping
    Cao, Xin
    REMOTE SENSING, 2024, 16 (16)
  • [40] iSSL-AL: a deep active learning framework based on self-supervised learning for image classification
    Agha, Rand
    Mustafa, Ahmad M.
    Abuein, Qusai
    NEURAL COMPUTING AND APPLICATIONS, 2024, 36 (28) : 17699 - 17713