Multi-modal sensor fusion with machine learning for data-driven process monitoring for additive manufacturing

Cited by: 69
Authors:
Petrich, Jan [1 ]
Snow, Zack [1 ]
Corbin, David [1 ]
Reutzel, Edward W. [1 ]
Affiliations:
[1] Penn State Univ, Appl Res Lab, University Pk, PA 16804 USA
Keywords:
LASER; EMISSION; QUALITY
DOI:
10.1016/j.addma.2021.102364
Chinese Library Classification (CLC):
T [Industrial Technology]
Subject Classification Code:
08
Abstract:
This paper presents a complete concept and validation scheme for potential inter-layer flaw detection from in-situ process monitoring for powder bed fusion additive manufacturing (PBFAM) using supervised machine learning. Specifically, the presented work establishes a meaningful statistical correlation between (i) the multi-modal sensor footprint acquired during the build process, and (ii) the existence of flaws as indicated by post-build X-ray Computed Tomography (CT) scans. Multiple sensor modalities, such as layerwise imagery (both pre- and post-laser scan), acoustic and multi-spectral emissions, and information derived from the scan vector trajectories, contribute to the process footprint. Data registration techniques to properly merge spatial and temporal information are presented in detail. As a proof-of-concept, a neural network is used to fuse all available modalities and discriminate flaws from nominal build conditions using only in-situ data. Experimental validation was carried out using a PBFAM sensor testbed available at PSU/ARL. Using four-fold cross-validation on a voxel-by-voxel basis, the null hypothesis, i.e. the absence of a defect, was rejected at a rate corresponding to 98.5% accuracy for binary classification. Additionally, a sensitivity study was conducted to assess the information content contributed by the individual sensor modalities, measured as the classification performance achieved when using only a single modality or a subset of modalities. Although optical imagery carries the most information for flaw detection, the additional information contained in the other modalities significantly improved classification performance.
Pages: 13
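The abstract describes a concrete workflow: registered per-voxel features from several sensor modalities are fused, a neural network discriminates flaws from nominal material, performance is estimated with four-fold cross-validation, and a sensitivity study repeats the evaluation with single modalities. Below is a minimal Python sketch of that workflow, not the authors' implementation; the modality names, feature dimensions, scikit-learn MLP, and random stand-in data are all assumptions, so the printed scores will sit near chance rather than the paper's 98.5%.

# Hypothetical sketch of voxel-wise multi-modal fusion for flaw
# classification; not the code from Petrich et al. (2021).
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Assumed per-voxel features, already registered to a common voxel grid
# (the paper's registration step): layerwise optical imagery, acoustic
# emission, multi-spectral emission, and scan-vector-derived features.
# Random stand-in data; real features would come from the sensor testbed.
n_voxels = 5000
modalities = {
    "optical":       rng.normal(size=(n_voxels, 8)),
    "acoustic":      rng.normal(size=(n_voxels, 4)),
    "multispectral": rng.normal(size=(n_voxels, 6)),
    "scan_vector":   rng.normal(size=(n_voxels, 3)),
}
# Per-voxel ground truth from registered post-build CT (1 = flaw).
y = rng.integers(0, 2, size=n_voxels)

def cv_accuracy(blocks, y, n_splits=4):
    # Early fusion: concatenate the chosen modality blocks per voxel,
    # then score an MLP with stratified k-fold cross-validation.
    X = np.hstack(blocks)
    folds = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0)
    scores = []
    for train_idx, test_idx in folds.split(X, y):
        clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=300)
        clf.fit(X[train_idx], y[train_idx])
        scores.append(accuracy_score(y[test_idx], clf.predict(X[test_idx])))
    return float(np.mean(scores))

print("all modalities:", cv_accuracy(list(modalities.values()), y))

# Sensitivity study: classification performance per single modality.
for name, block in modalities.items():
    print(name, "only:", cv_accuracy([block], y))

Concatenation-based early fusion is only one reasonable reading of "fuse all available modalities"; the record does not specify the network architecture, and voxel-level splitting as sketched here can leak spatially correlated information between folds, which a build-level split would avoid.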
Related Papers
50 records in total
  • [21] Data-Driven Performance Evaluation Framework for Multi-Modal Public Transport Systems
    Rodriguez Gonzalez, Ana Belen
    Vinagre Diaz, Juan Jose
    Wilby, Mark R.
    Fernandez Pozo, Ruben
    SENSORS, 2022, 22 (01)
  • [22] Knowledge-Driven Subspace Fusion and Gradient Coordination for Multi-modal Learning
    Zhang, Yupei
    Wang, Xiaofei
    Meng, Fangliangzi
    Tang, Jin
    Li, Chao
    MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION - MICCAI 2024, PT IV, 2024, 15004: 263-273
  • [23] Exploring emotions in Bach chorales: a multi-modal perceptual and data-driven study
    Parada-Cabaleiro, Emilia
    Batliner, Anton
    Zentner, Marcel
    Schedl, Markus
    ROYAL SOCIETY OPEN SCIENCE, 2023, 10 (12)
  • [24] A data-driven approach for evaluating multi-modal therapy in traumatic brain injury
    Haefeli, Jenny
    Ferguson, Adam R.
    Bingham, Deborah
    Orr, Adrienne
    Won, Seok Joon
    Lam, Tina I.
    Shi, Jian
    Hawley, Sarah
    Liu, Jialing
    Swanson, Raymond A.
    Massa, Stephen M.
    SCIENTIFIC REPORTS, 2017, 7
  • [25] Deep Multi-Modal Network Based Data-Driven Haptic Textures Modeling
    Joolee, Joolekha Bibi
    Jeon, Seokhee
    2021 IEEE WORLD HAPTICS CONFERENCE (WHC), 2021: 1140
  • [26] Data-driven staging of genetic frontotemporal dementia using multi-modal MRI
    McCarthy, Jillian
    Borroni, Barbara
    Sanchez-Valle, Raquel
    Moreno, Fermin
    Laforce, Robert, Jr.
    Graff, Caroline
    Synofzik, Matthis
    Galimberti, Daniela
    Rowe, James B.
    Masellis, Mario
    Tartaglia, Maria Carmela
    Finger, Elizabeth
    Vandenberghe, Rik
    de Mendonca, Alexandre
    Tagliavini, Fabrizio
    Santana, Isabel
    Butler, Chris
    Gerhard, Alex
    Danek, Adrian
    Levin, Johannes
    Otto, Markus
    Frisoni, Giovanni
    Ghidoni, Roberta
    Sorbi, Sandro
    Jiskoot, Lize C.
    Seelaar, Harro
    van Swieten, John C.
    Rohrer, Jonathan D.
    Iturria-Medina, Yasser
    Ducharme, Simon
    HUMAN BRAIN MAPPING, 2022, 43 (06): 1821-1835
  • [28] A Self-supervised Framework for Improved Data-Driven Monitoring of Stress via Multi-modal Passive Sensing
    Fazeli, Shayan
    Levine, Lionel
    Beikzadeh, Mehrab
    Mirzasoleiman, Baharan
    Zadeh, Bita
    Peris, Tara
    Sarrafzadeh, Majid
    2023 IEEE INTERNATIONAL CONFERENCE ON DIGITAL HEALTH, ICDH, 2023: 177-183
  • [29] Environment-dependent depth enhancement with multi-modal sensor fusion learning
    Takami, Kuya
    Lee, Taeyoung
    2018 SECOND IEEE INTERNATIONAL CONFERENCE ON ROBOTIC COMPUTING (IRC), 2018: 232-237
  • [30] Multi-modal Sensor Fusion for Learning Rich Models for Interacting Soft Robots
    Thuruthel, Thomas George
    Iida, Fumiya
    2023 IEEE INTERNATIONAL CONFERENCE ON SOFT ROBOTICS, ROBOSOFT, 2023