Deep Multi-Modal U-Net Fusion Methodology of Thermal and Ultrasonic Images for Porosity Detection in Additive Manufacturing

Cited by: 7
Authors
Zamiela, Christian [1 ]
Jiang, Zhipeng [2 ]
Stokes, Ryan [3 ]
Tian, Zhenhua [4 ]
Netchaev, Anton [5 ]
Dickerson, Charles [5 ]
Tian, Wenmeng [1 ]
Bian, Linkan [1 ]
Affiliations
[1] Mississippi State Univ, Ctr Adv Vehicular Syst CAVS, Dept Ind & Syst Engn, Mississippi State, MS 39762 USA
[2] Mississippi State Univ, Ctr Adv Vehicular Syst CAVS, Dept Aerosp Engn, Mississippi State, MS 39762 USA
[3] Mississippi State Univ, Ctr Adv Vehicular Syst CAVS, Dept Mech Engn, Mississippi State, MS 39762 USA
[4] Virginia Tech, Dept Mech Engn, Blacksburg, VA 24061 USA
[5] US Army Engineer Res & Dev Ctr ERDC, Informat Technol Lab, Vicksburg, MS 39180 USA
Source
JOURNAL OF MANUFACTURING SCIENCE AND ENGINEERING-TRANSACTIONS OF THE ASME | 2023, Vol. 145, Issue 06
Keywords
additive manufacturing; sensor fusion; porosity detection; thermal sensing; ultrasonic sensing; inspection and quality control; laser processes; nondestructive sensing; monitoring and diagnostics; PREDICTION;
DOI
10.1115/1.4056873
Chinese Library Classification (CLC)
T [Industrial Technology];
Discipline code
08;
Abstract
We developed a deep fusion methodology of nondestructive in-situ thermal and ex-situ ultrasonic images for porosity detection in laser-based additive manufacturing (LBAM). A core challenge in LBAM is the lack of fusion between successive layers of printed metal. Ultrasonic imaging can capture structural abnormalities by passing waves through successive layers, while in-situ thermal images track the thermal history during fabrication. The proposed sensor fusion U-Net methodology fills the gap between in-situ and ex-situ imaging by employing a two-branch convolutional neural network (CNN) for feature extraction and segmentation, producing a 2D image of porosity. We modify the U-Net framework with inception and long short-term memory (LSTM) blocks. We validate the models by comparing our single-modality and fusion models against ground-truth X-ray computed tomography (XCT) images. The inception U-Net fusion model achieved the highest mean intersection over union score of 0.93.
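The validation metric reported in the abstract, mean intersection over union (mIoU) against XCT ground truth, can be sketched as follows. This is a generic NumPy implementation of the standard metric, not the authors' code, and it assumes the predicted and ground-truth porosity maps are integer-labeled 2D arrays:

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean intersection over union across classes for 2D label maps.

    pred, target: integer arrays of the same shape, values in [0, num_classes).
    Classes absent from both maps are skipped so they do not distort the mean.
    """
    ious = []
    for c in range(num_classes):
        p = pred == c
        t = target == c
        union = np.logical_or(p, t).sum()
        if union == 0:  # class appears in neither map: skip it
            continue
        inter = np.logical_and(p, t).sum()
        ious.append(inter / union)
    return float(np.mean(ious))
```

For a binary porosity mask (`num_classes=2`), a perfect prediction yields 1.0, so the paper's reported 0.93 indicates near-complete overlap between the fused prediction and the XCT reference.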
Pages: 13