Physiological signal analysis using explainable artificial intelligence: A systematic review

Cited: 0
Authors
Shen, Jian [1 ]
Wu, Jinwen [1 ]
Liang, Huajian [1 ]
Zhao, Zeguang [1 ]
Li, Kunlin [2 ]
Zhu, Kexin [1 ]
Wang, Kang [1 ]
Ma, Yu [1 ]
Hu, Wenbo [1 ]
Guo, Chenxu [1 ]
Zhang, Yanan [1 ]
Hu, Bin [1 ]
Affiliations
[1] Beijing Institute of Technology, School of Medical Technology, Beijing 100081, People's Republic of China
[2] Hebei University, School of Electronic Information Engineering, Baoding 071000, People's Republic of China
Funding
National Natural Science Foundation of China; China Postdoctoral Science Foundation;
Keywords
Physiological signals; Artificial intelligence; Interpretable modeling; Medical and health; COMPUTER-AIDED DETECTION; EMOTION RECOGNITION; CONVOLUTIONAL TRANSFORMER; DEPRESSION RECOGNITION; ECG SIGNALS; EEG; DATABASE; SLEEP; ELECTROENCEPHALOGRAPHY; PREDICTION;
DOI
10.1016/j.neucom.2024.128920
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
With the continuous development of wearable sensors, collecting various physiological signals from the human body has become increasingly convenient. Combining Artificial Intelligence (AI) technology with these signals has significantly improved people's awareness of their psychological and physiological states, driving substantial progress in medicine and healthcare. However, most current research on physiological signal modeling does not consider interpretability, which poses a significant challenge for clinical diagnosis and treatment support. Interpretability refers to explaining the internal workings of AI models when they generate decisions and is regarded as an essential foundation for understanding model behavior. Despite substantial progress in recent years, the field still lacks a systematic discussion of interpretable AI in physiological signal modeling, making it difficult for researchers to comprehensively grasp the latest developments and emerging trends. Therefore, this paper provides a systematic review of interpretable AI technologies in the domain of physiological signals. Based on the scope of interpretability, these technologies are divided into two categories, global and local interpretability, and we conduct an in-depth analysis and comparison of the two. We then explore potential applications of interpretable physiological signal modeling in medicine and healthcare. Finally, we summarize the key challenges of interpretable AI in the context of physiological signals and discuss future research directions. This study aims to provide researchers with a systematic framework for understanding and applying interpretable AI technologies and to lay a foundation for future research.
Pages: 23
Related Papers
50 records
  • [1] Explainable Artificial Intelligence in Radiotherapy: A Systematic review
    Heising, Luca M.
    Wolfs, Cecile J. A.
    Jacobs, Maria J. A.
    Verhaegen, Frank
    Ou, Carol X. J.
    RADIOTHERAPY AND ONCOLOGY, 2024, 194 : S4444 - S4446
  • [2] Explainable Artificial Intelligence in the Medical Domain: A Systematic Review
    Chakrobartty, Shuvro
    El-Gayar, Omar
    DIGITAL INNOVATION AND ENTREPRENEURSHIP (AMCIS 2021), 2021,
  • [3] Explainable Artificial Intelligence (XAI) for Oncological Ultrasound Image Analysis: A Systematic Review
    Wyatt, Lucie S.
    van Karnenbeek, Lennard M.
    Wijkhuizen, Mark
    Geldof, Freija
    Dashtbozorg, Behdad
    APPLIED SCIENCES-BASEL, 2024, 14 (18):
  • [4] Explainable artificial intelligence (XAI) in finance: a systematic literature review
    Cerneviciene, Jurgita
    Kabasinskas, Audrius
    ARTIFICIAL INTELLIGENCE REVIEW, 2024, 57 (08)
  • [5] Explainable Artificial Intelligence Methods in Combating Pandemics: A Systematic Review
    Giuste, Felipe
    Shi, Wenqi
    Zhu, Yuanda
    Naren, Tarun
    Isgut, Monica
    Sha, Ying
    Tong, Li
    Gupte, Mitali
    Wang, May D.
    IEEE REVIEWS IN BIOMEDICAL ENGINEERING, 2023, 16 : 5 - 21
  • [6] Explainable and interpretable artificial intelligence in medicine: a systematic bibliometric review
    Frasca, M.
    La Torre, D.
    Pravettoni, G.
    Cutica, I.
    DISCOVER ARTIFICIAL INTELLIGENCE, 2024, 1 (1)
  • [8] Explainable artificial intelligence in skin cancer recognition: A systematic review
    Hauser, Katja
    Kurz, Alexander
    Haggenmueller, Sarah
    Maron, Roman C.
    von Kalle, Christof
    Utikal, Jochen S.
    Meier, Friedegund
    Hobelsberger, Sarah
    Gellrich, Frank F.
    Sergon, Mildred
    Hauschild, Axel
    French, Lars E.
    Heinzerling, Lucie
    Schlager, Justin G.
    Ghoreschi, Kamran
    Schlaak, Max
    Hilke, Franz J.
    Poch, Gabriela
    Kutzner, Heinz
    Berking, Carola
    Heppt, Markus V.
    Erdmann, Michael
    Haferkamp, Sebastian
    Schadendorf, Dirk
    Sondermann, Wiebke
    Goebeler, Matthias
    Schilling, Bastian
    Kather, Jakob N.
    Froehling, Stefan
    Lipka, Daniel B.
    Hekler, Achim
    Krieghoff-Henning, Eva
    Brinker, Titus J.
    EUROPEAN JOURNAL OF CANCER, 2022, 167 : 54 - 69
  • [9] Review of Explainable Artificial Intelligence
    Zhao, Yanyu
    Zhao, Xiaoyong
    Wang, Lei
    Wang, Ningning
    COMPUTER ENGINEERING AND APPLICATIONS, 2023, 59 (14) : 1 - 14
  • [10] A Review of Explainable Artificial Intelligence
    Lin, Kuo-Yi
    Liu, Yuguang
    Li, Li
    Dou, Runliang
    ADVANCES IN PRODUCTION MANAGEMENT SYSTEMS: ARTIFICIAL INTELLIGENCE FOR SUSTAINABLE AND RESILIENT PRODUCTION SYSTEMS, APMS 2021, PT IV, 2021, 633 : 574 - 584