Predictive and Explainable Artificial Intelligence for Neuroimaging Applications

Cited: 0
Authors
Lee, Sekwang [1 ]
Lee, Kwang-Sig [2 ]
Affiliations
[1] Korea Univ, Anam Hosp, Coll Med, Dept Phys Med & Rehabil, Seoul 02841, South Korea
[2] Korea Univ, Anam Hosp, Coll Med, AI Ctr, Seoul 02841, South Korea
Keywords
neuroimaging; predictive artificial intelligence; explainable artificial intelligence;
DOI
10.3390/diagnostics14212394
Chinese Library Classification
R5 [Internal Medicine];
Discipline Classification Code
1002; 100201;
Abstract
Background: The aim of this review is to highlight recent advances in predictive and explainable artificial intelligence for neuroimaging applications. Methods: Data came from 30 original studies retrieved from PubMed with the following search terms: "neuroimaging" (title) together with "machine learning" (title) or "deep learning" (title). The 30 original studies were eligible according to the following criteria: participants with the dependent variable of brain image or an associated disease; interventions/comparisons of artificial intelligence; outcomes of accuracy, the area under the curve (AUC), and/or variable importance; a publication year of 2019 or later; and English as the publication language. Results: The reported performance outcomes ranged over 58-96% for accuracy, 66-97% for sensitivity, 76-98% for specificity, and 70-98% for the AUC. The support vector machine and the convolutional neural network registered the best classification performance (AUC 98%) for low- vs. high-grade glioma and for brain conditions, respectively. Likewise, the random forest delivered the best regression performance (root mean square error 1) for brain conditions. The following factors were found to be major predictors of brain image or associated disease: (demographic) age, education, sex; (health-related) alpha desynchronization, Alzheimer's disease stage, CD4, depression, distress, mild behavioral impairment, RNA sequencing; (neuroimaging) abnormal amyloid-beta, amplitude of low-frequency fluctuation, cortical thickness, functional connectivity, fractal dimension measure, gray matter volume, left amygdala activity, left hippocampal volume, plasma neurofilament light, right cerebellum, regional homogeneity, right middle occipital gyrus, surface area, sub-cortical volume. Conclusion: Predictive and explainable artificial intelligence provide an effective, non-invasive decision support system for neuroimaging applications.
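
As a rough illustration of the workflow the abstract summarizes (fit a machine learning model, report a performance outcome such as the AUC, and explain predictions through variable importance), the following Python sketch uses scikit-learn on synthetic data. It is not the reviewed studies' method: the dataset is artificial, the feature names are hypothetical stand-ins for the neuroimaging predictors listed above, and the random forest plus permutation importance is only one of several model/explanation pairings the review covers.

# Minimal sketch of a predictive-plus-explainable pipeline, assuming a
# tabular dataset of neuroimaging-derived features. All data here are
# synthetic and the feature names are illustrative placeholders.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a neuroimaging feature table (not real data).
X, y = make_classification(n_samples=500, n_features=6, n_informative=4,
                           random_state=0)
feature_names = ["age", "cortical_thickness", "gray_matter_volume",
                 "functional_connectivity", "hippocampal_volume", "education"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# Predictive step: train a random forest classifier on the training split.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Performance outcome: area under the ROC curve on the held-out split.
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"AUC: {auc:.2f}")

# Explainable step: permutation importance ranks predictors by how much
# shuffling each feature degrades held-out AUC.
result = permutation_importance(model, X_test, y_test, scoring="roc_auc",
                                n_repeats=10, random_state=0)
for idx in result.importances_mean.argsort()[::-1]:
    print(f"{feature_names[idx]:>25s}: {result.importances_mean[idx]:.3f}")

The point of the sketch is that a single fitted model yields both a predictive score (the AUC reported across the reviewed studies) and a variable-importance ranking of the kind listed in the Results above; the specific estimator and importance method can be swapped without changing the overall workflow.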
Pages: 14