Multi-modal learning for inpatient length of stay prediction

Cited by: 7
Authors
Chen, Junde [1 ]
Wen, Yuxin [1 ]
Pokojovy, Michael [2 ]
Tseng, Tzu-Liang [3 ]
McCaffrey, Peter [4 ]
Vo, Alexander [4 ]
Walser, Eric [4 ]
Moen, Scott [4 ]
Affiliations
[1] Chapman Univ, Dale E & Sarah Ann Fowler Sch Engn, Orange, CA 92866 USA
[2] Old Dominion Univ, Dept Math & Stat, Norfolk, VA 23529 USA
[3] Univ Texas El Paso, Dept Ind Mfg & Syst Engn, El Paso, TX 79968 USA
[4] Univ Texas Med Branch, Galveston, TX 77550 USA
Funding
U.S. National Science Foundation;
Keywords
Chest X-ray images; Data-fusion model; Length of stay prediction; Multi-modal learning; HOSPITAL MORTALITY;
DOI
10.1016/j.compbiomed.2024.108121
Chinese Library Classification (CLC)
Q [Biological Sciences];
Subject classification codes
07; 0710; 09;
Abstract
Predicting inpatient length of stay (LoS) is important for hospitals aiming to improve service efficiency and enhance management capabilities. Patient medical records are strongly associated with LoS. However, the diverse modalities, heterogeneity, and complexity of these data make it challenging to leverage them effectively in a model that accurately predicts LoS. To address this challenge, this study establishes a novel data-fusion model, termed DF-Mdl, that integrates heterogeneous clinical data to predict the LoS of inpatients between hospital admission and discharge. Multi-modal data such as demographic data, clinical notes, laboratory test results, and medical images are utilized in the proposed methodology, with individual "basic" sub-models applied separately to each data modality. Specifically, a convolutional neural network (CNN) model, termed CRXMDL, is designed for chest X-ray (CXR) image data, two long short-term memory (LSTM) networks extract features from long text data, and a novel attention-embedded 1D convolutional neural network extracts useful information from numerical data. Finally, these basic models are integrated into a new data-fusion model (DF-Mdl) for inpatient LoS prediction. The proposed method attains the best R2 and explained variance (EVAR) values of 0.6039 and 0.6042 among competitors for LoS prediction on the Medical Information Mart for Intensive Care (MIMIC)-IV test dataset. Empirical evidence suggests better performance compared with other state-of-the-art (SOTA) methods, which demonstrates the effectiveness and feasibility of the proposed approach.
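As a rough illustration of the late-fusion idea described in the abstract (per-modality encoders whose features are combined for LoS regression), the PyTorch sketch below wires a small CNN for images, a stacked LSTM for note tokens, and a gated 1D CNN for numeric features into one regressor. All layer sizes, the vocabulary size, the feature count, and the concatenation-plus-MLP fusion head are illustrative assumptions; this is not the published CRXMDL/DF-Mdl architecture.

```python
# Minimal multi-modal late-fusion sketch (assumed structure, not the authors' code).
import torch
import torch.nn as nn


class ImageEncoder(nn.Module):
    """Stand-in for the CXR sub-model: a small CNN producing a fixed-size embedding."""
    def __init__(self, out_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(32, out_dim)

    def forward(self, x):                      # x: (B, 1, H, W)
        return self.proj(self.features(x).flatten(1))


class TextEncoder(nn.Module):
    """Two stacked LSTM layers over embedded clinical-note tokens."""
    def __init__(self, vocab_size=30000, emb_dim=64, out_dim=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, out_dim, num_layers=2, batch_first=True)

    def forward(self, tokens):                 # tokens: (B, T) int64
        _, (h, _) = self.lstm(self.emb(tokens))
        return h[-1]                           # final hidden state of the top layer


class TabularEncoder(nn.Module):
    """1D CNN with a simple attention-style gate over numeric features."""
    def __init__(self, n_features=40, out_dim=128):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(n_features, n_features), nn.Sigmoid())
        self.conv = nn.Conv1d(1, 8, kernel_size=3, padding=1)
        self.proj = nn.Linear(8 * n_features, out_dim)

    def forward(self, x):                      # x: (B, n_features)
        x = x * self.attn(x)                   # weight each feature before convolution
        h = torch.relu(self.conv(x.unsqueeze(1)))
        return self.proj(h.flatten(1))


class FusionLoSModel(nn.Module):
    """Concatenate the three modality embeddings and regress length of stay (days)."""
    def __init__(self):
        super().__init__()
        self.img, self.txt, self.tab = ImageEncoder(), TextEncoder(), TabularEncoder()
        self.head = nn.Sequential(nn.Linear(128 * 3, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, image, tokens, numeric):
        z = torch.cat([self.img(image), self.txt(tokens), self.tab(numeric)], dim=1)
        return self.head(z).squeeze(1)


# Smoke test with randomly shaped mini-batch inputs.
model = FusionLoSModel()
los = model(torch.randn(4, 1, 64, 64),
            torch.randint(0, 30000, (4, 120)),
            torch.randn(4, 40))
print(los.shape)  # torch.Size([4])
```

Such a model would typically be trained with a regression loss (e.g., MSE against observed LoS) and evaluated with R2 and explained variance, matching the metrics reported in the abstract.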
Pages: 11
Related papers
50 records in total
  • [1] A deep learning approach for inpatient length of stay and mortality prediction
    Chen, Junde
    Di Qi, Trudi
    Vu, Jacqueline
    Wen, Yuxin
    JOURNAL OF BIOMEDICAL INFORMATICS, 2023, 147
  • [2] M3T-LM: A multi-modal multi-task learning model for jointly predicting patient length of stay and mortality
    Chen, Junde
    Li, Qing
    Liu, Feng
    Wen, Yuxin
COMPUTERS IN BIOLOGY AND MEDICINE, 2024, 183
  • [3] Dynamical User Intention Prediction via Multi-modal Learning
    Liu, Xuanwu
    Li, Zhao
    Mao, Yuanhui
    Lai, Lixiang
    Gao, Ben
    Deng, Yao
    Yu, Guoxian
    DATABASE SYSTEMS FOR ADVANCED APPLICATIONS (DASFAA 2020), PT I, 2020, 12112 : 519 - 535
  • [4] MulCPred: Learning Multi-Modal Concepts for Explainable Pedestrian Action Prediction
    Feng, Yan
    Carballo, Alexander
    Fujii, Keisuke
    Karlsson, Robin
    Ding, Ming
    Takeda, Kazuya
    SENSORS, 2024, 24 (20)
  • [5] Multi-modal sequence learning for Alzheimer's disease progression prediction with incomplete variable-length longitudinal data
    Xu, Lei
    Wu, Hui
    He, Chunming
    Wang, Jun
    Zhang, Changqing
    Nie, Feiping
    Chen, Lei
    MEDICAL IMAGE ANALYSIS, 2022, 82
  • [6] Herbal ingredient-target interaction prediction via multi-modal learning
    Liang, Xudong
    Lai, Guichuan
    Yu, Jintong
    Lin, Tao
    Wang, Chaochao
    Wang, Wei
    INFORMATION SCIENCES, 2025, 711
  • [7] Multi-Modal Learning-Based Equipment Fault Prediction in the Internet of Things
    Nan, Xin
    Zhang, Bo
    Liu, Changyou
    Gui, Zhenwen
    Yin, Xiaoyan
    SENSORS, 2022, 22 (18)
  • [8] Uncertainty-Aware Multi-modal Learning via Cross-Modal Random Network Prediction
    Wang, Hu
    Zhang, Jianpeng
    Chen, Yuanhong
    Ma, Congbo
    Avery, Jodie
    Hull, Louise
    Carneiro, Gustavo
    COMPUTER VISION, ECCV 2022, PT XXXVII, 2022, 13697 : 200 - 217
  • [9] Multi-modal Learning for WebAssembly Reverse Engineering
    Huang, Hanxian
    Zhao, Jishen
    PROCEEDINGS OF THE 33RD ACM SIGSOFT INTERNATIONAL SYMPOSIUM ON SOFTWARE TESTING AND ANALYSIS, ISSTA 2024, 2024, : 453 - 465
  • [10] Multi-Modal Curriculum Learning over Graphs
    Gong, Chen
    Yang, Jian
    Tao, Dacheng
    ACM TRANSACTIONS ON INTELLIGENT SYSTEMS AND TECHNOLOGY, 2019, 10 (04)