Enhanced SpO2 estimation using explainable machine learning and neck photoplethysmography

Cited by: 2
Authors
Zhong, Yuhao [1 ]
Jatav, Ashish [1 ]
Afrin, Kahkashan [1 ]
Shivaram, Tejaswini [2 ]
Bukkapatnam, Satish T. S. [1 ]
Affiliations
[1] Texas A&M Univ, Wm Michael Barnes '64 Dept Ind & Syst Engn, College Stn, TX 77840 USA
[2] Texas A&M Univ, Dept Elect & Comp Engn, College Stn, TX 77840 USA
Funding
National Science Foundation (USA);
Keywords
Explainable machine learning; Neck reflectance photoplethysmogram (PPG); Subject heterogeneity; Subject inclusion-exclusion criteria; SpO2 estimation;
DOI
10.1016/j.artmed.2023.102685
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Reflectance-based photoplethysmogram (PPG) sensors offer flexible choices of measuring sites for blood oxygen saturation (SpO2) measurement. However, their accuracy is often limited, especially across different subjects, owing to diverse human characteristics (skin color, hair density, etc.) and the usage conditions of different sensor settings. This study addresses the estimation of SpO2 at non-standard measuring sites using reflectance-based sensors. It proposes an automated construction of subject inclusion-exclusion criteria for SpO2 measuring devices, using a combination of unsupervised clustering, supervised regression, and model explanations. This is perhaps among the first adaptations of SHAP (SHapley Additive exPlanations) to explain clusters gleaned from unsupervised learning methods. As a wellness application case study, we developed a pillow-based wearable device to collect reflectance PPGs from both the brachiocephalic and carotid arteries around the neck. The experiment was conducted on 33 subjects, each under a total of 80 different sensor settings. The proposed approach addressed the variations across humans and devices, as well as the heterogeneous mapping between signals and SpO2 values. It identified effective device settings and the characteristics of their applicable subject groups (i.e., subject inclusion-exclusion criteria). Overall, it reduced the root mean squared error (RMSE) by 16% compared with an empirical formula and a plain SpO2 estimation model.
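The abstract outlines a pipeline of unsupervised clustering, supervised regression, and SHAP-based explanations that yields subject inclusion-exclusion criteria. The following is a minimal sketch of that idea, not the paper's exact method: the choices of KMeans, gradient boosting, SHAP's TreeExplainer, the one-vs-rest cluster explanation, and the ratio-of-ratios calibration in `empirical_spo2` (standing in for the unspecified empirical baseline formula) are all illustrative assumptions.

```python
# Minimal sketch (not the paper's exact method) of the abstract's pipeline:
# cluster heterogeneous subjects/sensor settings, fit a per-cluster SpO2
# regressor, and use SHAP to explain cluster membership so the dominant
# subject/device features can be read off as inclusion-exclusion criteria.
import numpy as np
import shap
from sklearn.cluster import KMeans
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor
from sklearn.metrics import mean_squared_error


def empirical_spo2(ac_red, dc_red, ac_ir, dc_ir):
    """Common empirical ratio-of-ratios calibration, used here only as a
    stand-in for the unspecified baseline formula mentioned in the abstract."""
    r = (ac_red / dc_red) / (ac_ir / dc_ir)
    return 110.0 - 25.0 * r


def fit_explainable_spo2_pipeline(X, y, n_clusters=3, seed=0):
    """X: PPG/subject/device features, shape (n_samples, n_features);
    y: reference SpO2 values from a standard pulse oximeter."""
    # (1) Unsupervised clustering to capture subject/device heterogeneity.
    clusters = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(X)

    # (2) One supervised SpO2 regressor per cluster, reflecting the
    #     heterogeneous mapping between signals and SpO2 values.
    regressors, rmse = {}, {}
    for k in range(n_clusters):
        idx = clusters == k
        reg = GradientBoostingRegressor(random_state=seed).fit(X[idx], y[idx])
        regressors[k] = reg
        rmse[k] = mean_squared_error(y[idx], reg.predict(X[idx])) ** 0.5

    # (3) Explain each cluster one-vs-rest with SHAP: features with large mean
    #     |SHAP| values are the characteristics that define the cluster, i.e.
    #     candidate inclusion-exclusion criteria for that device setting.
    criteria = {}
    for k in range(n_clusters):
        clf = GradientBoostingClassifier(random_state=seed).fit(X, (clusters == k).astype(int))
        shap_vals = shap.TreeExplainer(clf).shap_values(X)  # (n_samples, n_features) for binary
        criteria[k] = np.abs(shap_vals).mean(axis=0)        # per-feature importance ranking

    return clusters, regressors, rmse, criteria
```

In such a sketch, high mean |SHAP| values for subject characteristics (e.g., skin-tone or hair-density features) within a cluster would be read as inclusion-exclusion criteria for the device setting that performs best on that cluster.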
Pages: 9