Multi-modal Predictive Models of Diabetes Progression

Cited by: 6
Authors
Ramazi, Ramin [1 ]
Perndorfer, Christine [1 ]
Soriano, Emily [1 ]
Laurenceau, Jean-Philippe [1 ]
Beheshti, Rahmatollah [1 ]
Affiliations
[1] Univ Delaware, Newark, DE 19716 USA
Keywords
Type 2 diabetes; Continuous glucose monitoring; Activity trackers; Wearable medical devices; Recurrent neural networks; TYPE-1
DOI
10.1145/3307339.3342177
CLC Classification
TP39 [Computer Applications]
Subject Classification
081203; 0835
Abstract
With the increasing availability of wearable devices, continuous monitoring of individuals' physiological and behavioral patterns has become significantly more accessible. Access to these continuous signals offers an unprecedented opportunity for studying complex diseases and health conditions such as type 2 diabetes (T2D). T2D is a highly prevalent chronic disease whose roots and progression patterns are not fully understood. Predicting the progression of T2D can inform timely and more effective interventions to prevent or manage the disease. In this study, we used a dataset of 63 patients with T2D that includes data from two types of wearable devices worn by the patients: continuous glucose monitoring (CGM) devices and activity trackers (ActiGraphs). Using this dataset, we created a model for predicting the levels of four major biomarkers related to T2D after a one-year period. We developed a wide and deep neural network that combines demographic information, lab tests, and wearable sensor data. The deep part of our method is based on the long short-term memory (LSTM) architecture and processes the time-series data collected by the wearables. In predicting the patterns of the four biomarkers, we obtained root mean square errors of ±1.67% for HbA1c, ±6.22 mg/dl for HDL cholesterol, ±10.46 mg/dl for LDL cholesterol, and ±18.38 mg/dl for triglycerides. Compared to existing models for studying T2D, our model offers a more comprehensive tool for combining a large variety of factors that contribute to the disease.
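As a rough illustration of the architecture described in the abstract, the sketch below shows one plausible way to pair a "wide" branch over static demographic and lab-test features with a "deep" LSTM branch over the wearable time series, regressing onto the four biomarkers. This is a hypothetical reconstruction for illustration only, not the authors' implementation; the layer sizes, feature counts, and sequence lengths are assumptions.

# Hypothetical sketch (not the authors' code): a wide-and-deep regressor where
# the deep branch is an LSTM over the wearable time series (CGM + ActiGraph
# channels) and the wide branch is the static demographic/lab-test vector; the
# two representations are concatenated and mapped to the four biomarkers
# (HbA1c, HDL, LDL, triglycerides). All dimensions are illustrative.
import torch
import torch.nn as nn

class WideAndDeepBiomarkerModel(nn.Module):
    def __init__(self, n_wide_features=16, n_sensor_channels=4,
                 lstm_hidden=64, n_targets=4):
        super().__init__()
        # Deep branch: LSTM over the multivariate wearable time series.
        self.lstm = nn.LSTM(input_size=n_sensor_channels,
                            hidden_size=lstm_hidden, batch_first=True)
        # Wide branch: static features (demographics, baseline lab tests).
        self.wide = nn.Linear(n_wide_features, 16)
        # Joint head: concatenated wide + deep representation -> 4 biomarkers.
        self.head = nn.Sequential(
            nn.Linear(lstm_hidden + 16, 32),
            nn.ReLU(),
            nn.Linear(32, n_targets),
        )

    def forward(self, sensor_seq, wide_features):
        # sensor_seq: (batch, time, channels); wide_features: (batch, n_wide)
        _, (h_n, _) = self.lstm(sensor_seq)   # final hidden state summarizes the series
        deep_repr = h_n[-1]                   # (batch, lstm_hidden)
        wide_repr = torch.relu(self.wide(wide_features))
        return self.head(torch.cat([deep_repr, wide_repr], dim=1))

# Example with dummy shapes: 8 patients, 288 sensor samples, 4 channels.
model = WideAndDeepBiomarkerModel()
pred = model(torch.randn(8, 288, 4), torch.randn(8, 16))  # -> (8, 4) biomarker predictions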
Pages: 253-258
Page count: 6