Multi-modal sensor fusion with machine learning for data-driven process monitoring for additive manufacturing

Cited by: 69
Authors
Petrich, Jan [1 ]
Snow, Zack [1 ]
Corbin, David [1 ]
Reutzel, Edward W. [1 ]
Affiliations
[1] Penn State Univ, Appl Res Lab, University Pk, PA 16804 USA
Keywords
LASER; EMISSION; QUALITY;
DOI
10.1016/j.addma.2021.102364
Chinese Library Classification (CLC): T [Industrial Technology]
Discipline Code: 08
Abstract
This paper presents a complete concept and validation scheme for potential inter-layer flaw detection from in-situ process monitoring for powder bed fusion additive manufacturing (PBFAM) using supervised machine learning. Specifically, the presented work establishes a meaningful statistical correlation between (i) the multi-modal sensor footprint acquired during the build process, and (ii) the existence of flaws as indicated by post-build X-ray Computed Tomography (CT) scans. Multiple sensor modalities, such as layerwise imagery (both pre- and post-laser scan), acoustic and multi-spectral emissions, and information derived from the scan vector trajectories, contribute to the process footprint. Data registration techniques to properly merge spatial and temporal information are presented in detail. As a proof-of-concept, a neural network is used to fuse all available modalities and discriminate flaws from nominal build conditions using only in-situ data. Experimental validation was carried out using a PBFAM sensor testbed available at PSU/ARL. Using four-fold cross-validation on a voxel-by-voxel basis, the null hypothesis, i.e. absence of a defect, was rejected at a rate corresponding to 98.5% accuracy for binary classification. Additionally, a sensitivity study was conducted to assess the information content contributed by the individual sensor modalities. Information content was assessed by evaluating classification performance when using only a single modality or a subset of modalities. Although optical imagery contains the highest amount of information for flaw detection, additional information content observed in other modalities significantly improved classification performance.
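The evaluation protocol described in the abstract (early fusion of voxel-aligned modality features, followed by four-fold cross-validation on a voxel-by-voxel basis) can be sketched in plain NumPy. This is a minimal illustration, not the paper's method: synthetic data stands in for the registered sensor footprint, a simple logistic-regression classifier replaces the paper's neural network, and all feature dimensions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for registered, voxel-aligned sensor data.
# Each modality contributes a per-voxel feature vector (hypothetical sizes).
n_voxels = 2000
optical  = rng.normal(size=(n_voxels, 4))  # layerwise imagery features
acoustic = rng.normal(size=(n_voxels, 2))  # acoustic emission features
spectral = rng.normal(size=(n_voxels, 3))  # multi-spectral emission features
scan_vec = rng.normal(size=(n_voxels, 2))  # scan-vector-derived features

# Early fusion: concatenate all modalities into one feature matrix.
X = np.hstack([optical, acoustic, spectral, scan_vec])

# Synthetic labels (flaw = 1, nominal = 0), correlated with the features
# so the classifier has a signal to learn.
w_true = rng.normal(size=X.shape[1])
y = (X @ w_true + 0.5 * rng.normal(size=n_voxels) > 0).astype(float)

def train_logreg(X, y, lr=0.1, epochs=300):
    """Plain-NumPy logistic regression via batch gradient descent."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted flaw probability
        w -= lr * (X.T @ (p - y)) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

# Four-fold cross-validation on a voxel-by-voxel basis.
folds = np.array_split(rng.permutation(n_voxels), 4)
accuracies = []
for k in range(4):
    test_idx = folds[k]
    train_idx = np.hstack([folds[j] for j in range(4) if j != k])
    w, b = train_logreg(X[train_idx], y[train_idx])
    pred = (X[test_idx] @ w + b > 0).astype(float)
    accuracies.append(np.mean(pred == y[test_idx]))

print(f"mean 4-fold accuracy: {np.mean(accuracies):.3f}")
```

The sensitivity study in the paper follows the same loop but restricts `X` to a single modality (or subset of modalities) before training, so that the drop in cross-validated accuracy reflects the information content of the omitted channels.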
Pages: 13
Related Papers
50 records in total
  • [31] On Multi-modal Fusion Learning in constraint propagation
    Li, Yaoyi
    Lu, Hongtao
    INFORMATION SCIENCES, 2018, 462 : 204 - 217
  • [32] Advanced data-driven FBG sensor-based pavement monitoring system using multi-sensor data fusion and an unsupervised learning approach
    Golmohammadi, Ali
    Hernando, David
    van den Bergh, Wim
    Hasheminejad, Navid
    MEASUREMENT, 2025, 242
  • [33] MACHINE LEARNING TECHNIQUES FOR ACOUSTIC DATA PROCESSING IN ADDITIVE MANUFACTURING IN SITU PROCESS MONITORING: A REVIEW
    Taheri, Hossein
    Zafar, Suhaib
    MATERIALS EVALUATION, 2023, 81 (07) : 50 - 60
  • [34] Multi-modal mobile sensor data fusion for autonomous robot mapping problem
    Kassem, M. H.
    Shehata, Omar M.
    Morgan, E. I. Imam
    2015 3RD INTERNATIONAL CONFERENCE ON CONTROL, MECHATRONICS AND AUTOMATION (ICCMA 2015), 2016, 42
  • [35] Human Behavior Recognition Algorithm Based on Multi-Modal Sensor Data Fusion
    Zheng, Dingchao
    Chen, Caiwei
    Yu, Jianzhe
    JOURNAL OF ADVANCED COMPUTATIONAL INTELLIGENCE AND INTELLIGENT INFORMATICS, 2025, 29 (02) : 287 - 305
  • [36] Issues in Multi-Valued Multi-Modal Sensor Fusion
    Janidarmian, Majid
    Zilic, Zeljko
    Radecka, Katarzyna
    2012 42ND IEEE INTERNATIONAL SYMPOSIUM ON MULTIPLE-VALUED LOGIC (ISMVL), 2012, : 238 - 243
  • [37] ATTENTION DRIVEN FUSION FOR MULTI-MODAL EMOTION RECOGNITION
    Priyasad, Darshana
    Fernando, Tharindu
    Denman, Simon
    Sridharan, Sridha
    Fookes, Clinton
    2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2020, : 3227 - 3231
  • [38] Design of a Multi-sensor Monitoring System for Additive Manufacturing Process
    Peng X.
    Kong L.
    Chen Y.
    Shan Z.
    Qi L.
    Springer Science and Business Media B.V. (03): 142 - 150
  • [39] Attention driven multi-modal similarity learning
    Gao, Xinjian
    Mu, Tingting
    Goulermas, John Y.
    Wang, Meng
    INFORMATION SCIENCES, 2018, 432 : 530 - 542
  • [40] Multi-Modal Data Fusion for Big Events
    Papacharalapous, A. E.
    Hovelynck, Stefan
    Cats, O.
    Lankhaar, J. W.
    Daamen, W.
    van Oort, N.
    van Lint, J. W. C.
    IEEE INTELLIGENT TRANSPORTATION SYSTEMS MAGAZINE, 2015, 7 (04) : 5 - 10