Interpreting Deep Learning Models for Multimodal Neuroimaging

Cited by: 1
Authors
Mueller, K. R. [1 ,2 ,3 ]
Hofmann, S. M. [1 ,4 ]
Affiliations
[1] TU Berlin, Machine Learning Group, Marchstr. 23, D-10587 Berlin, Germany
[2] Korea University, Department of Artificial Intelligence, Seoul, South Korea
[3] Max Planck Institute for Informatics, Saarbrücken, Germany
[4] Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstr. 1a, D-04103 Leipzig, Germany
Source
2023 11th International Winter Conference on Brain-Computer Interface (BCI) | 2023
Keywords
Neural networks; Brain; Fusion
DOI
10.1109/BCI57258.2023.10078502
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Accurately analyzing both structural and functional brain data from multimodal neuroimaging is a challenge for deep learning methods. Recent progress in explainable AI (XAI) has helped to gain insight into structural relationships across brain regions and into the complex dynamics of cognitive states, in both healthy and diseased brains. In this brief note, we touch upon selected recent directions of our research in which machine learning techniques help to analyze brain measurements from EEG, fNIRS, sMRI, and fMRI. We owe the steps summarized here mainly to the activities of members of the BBCI team and their collaborators. Clearly, unavoidably, and intentionally, this abstract overlaps substantially with our own prior contributions, as it reports on and discusses these novel ideas and directions.
Pages: 4
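The note itself ships no code, but as a rough illustration of the kind of XAI attribution the abstract alludes to, the sketch below computes a gradient-times-input relevance map for a toy EEG classifier in PyTorch. The network, its dimensions, and the attribution method are illustrative assumptions on our part, not the authors' actual models or pipelines; gradient-times-input stands in here as a simple relative of the layer-wise relevance propagation (LRP) attributions this group is known for.

    import torch
    import torch.nn as nn

    # Hypothetical toy CNN over EEG trials shaped (channels, time);
    # NOT the authors' model, just a minimal stand-in for this sketch.
    class TinyEEGNet(nn.Module):
        def __init__(self, n_channels=32, n_classes=2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(n_channels, 16, kernel_size=9, padding=4),
                nn.ReLU(),
                nn.AdaptiveAvgPool1d(8),
            )
            self.classifier = nn.Linear(16 * 8, n_classes)

        def forward(self, x):
            return self.classifier(self.features(x).flatten(1))

    model = TinyEEGNet().eval()
    x = torch.randn(1, 32, 256, requires_grad=True)  # one simulated EEG trial

    # Gradient x input: backpropagate the target-class score, then weight
    # the resulting input gradient by the input itself.
    score = model(x)[0, 1]
    score.backward()
    relevance = (x.grad * x).detach()  # (1, channels, time) relevance map

    # Summing |relevance| over time yields per-channel scores, e.g. for a
    # scalp topography showing which electrodes drove the decision.
    channel_relevance = relevance.abs().sum(dim=-1)
    print(channel_relevance.shape)  # torch.Size([1, 32])

Inspecting such relevance maps channel-wise or time-wise is what allows attributions to be checked against neurophysiological expectations, the kind of validation the abstract points to for EEG, fNIRS, sMRI, and fMRI analyses.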