Toward a unified framework for interpreting machine-learning models in neuroimaging

Cited by: 95
Authors
Kohoutova, Lada [1,2]
Heo, Juyeon [3]
Cha, Sungmin [3]
Lee, Sungwoo [1,2]
Moon, Taesup [3]
Wager, Tor D. [4,5,6]
Woo, Choong-Wan [1,2]
Affiliations
[1] Inst Basic Sci, Ctr Neurosci Imaging Res, Suwon, South Korea
[2] Sungkyunkwan Univ, Dept Biomed Engn, Suwon, South Korea
[3] Sungkyunkwan Univ, Dept Elect & Comp Engn, Suwon, South Korea
[4] Dartmouth Coll, Dept Psychol & Brain Sci, Hanover, NH 03755 USA
[5] Univ Colorado, Dept Psychol & Neurosci, Boulder, CO 80309 USA
[6] Univ Colorado, Inst Cognit Sci, Boulder, CO 80309 USA
Funding
National Research Foundation of Singapore
Keywords
PRINCIPAL-COMPONENTS; BRAIN SIGNATURES; PATTERN-ANALYSIS; HUMAN STRIATUM; TEMPORAL-LOBE; FMRI; PAIN; SELECTION; REPRESENTATIONS; CLASSIFICATION;
DOI
10.1038/s41596-019-0289-5
Chinese Library Classification (CLC)
Q5 [Biochemistry]
Discipline codes
071010; 081704
Abstract
Machine learning is a powerful tool for creating computational models relating brain function to behavior, and its use is becoming widespread in neuroscience. However, these models are complex and often hard to interpret, making it difficult to evaluate their neuroscientific validity and contribution to understanding the brain. For neuroimaging-based machine-learning models to be interpretable, they should (i) be comprehensible to humans, (ii) provide useful information about what mental or behavioral constructs are represented in particular brain pathways or regions, and (iii) demonstrate that they are based on relevant neurobiological signal, not artifacts or confounds. In this protocol, we introduce a unified framework that consists of model-, feature- and biology-level assessments to provide complementary results that support the understanding of how and why a model works. Although the framework can be applied to different types of models and data, this protocol provides practical tools and examples of selected analysis methods for a functional MRI dataset and multivariate pattern-based predictive models. A user of the protocol should be familiar with basic programming in MATLAB or Python. This protocol will help build more interpretable neuroimaging-based machine-learning models, contributing to the cumulative understanding of brain mechanisms and brain health. Although the analyses provided here constitute a limited set of tests and take a few hours to days to complete, depending on the size of data and available computational resources, we envision the process of annotating and interpreting models as an open-ended process, involving collaborative efforts across multiple studies and laboratories.

Neuroimaging-based machine-learning models should be interpretable to neuroscientists and users in applied settings. This protocol describes how to assess the interpretability of models based on fMRI.
Pages: 1399-1435
Page count: 37
Related articles (50 in total)
  • [1] Toward a unified framework for interpreting machine-learning models in neuroimaging
    Lada Kohoutová
    Juyeon Heo
    Sungmin Cha
    Sungwoo Lee
    Taesup Moon
    Tor D. Wager
    Choong-Wan Woo
    Nature Protocols, 2020, 15 : 1399 - 1435
  • [2] Machine-OIF-Action: a unified framework for developing and interpreting machine-learning models for chemosensory research
    Gupta, Anku
    Choudhary, Mohit
    Mohanty, Sanjay Kumar
    Mittal, Aayushi
    Gupta, Krishan
    Arya, Aditya
    Kumar, Suvendu
    Katyayan, Nikhil
    Dixit, Nilesh Kumar
    Kalra, Siddhant
    Goel, Manshi
    Sahni, Megha
    Singhal, Vrinda
    Mishra, Tripti
    Sengupta, Debarka
    Ahuja, Gaurav
    BIOINFORMATICS, 2021, 37 (12) : 1769 - 1771
  • [3] Toward a Unified Framework for Interpreting the Phase Rule
    Ravi, R.
    INDUSTRIAL & ENGINEERING CHEMISTRY RESEARCH, 2012, 51 (42) : 13853 - 13861
  • [4] A machine-learning framework for peridynamic material models with physical constraints
    Xu, Xiao
    D'Elia, Marta
    Foster, John T.
    COMPUTER METHODS IN APPLIED MECHANICS AND ENGINEERING, 2021, 386
  • [5] Deep Forest as a framework for a new class of machine-learning models
    Utkin, Lev V.
    Meldo, Anna A.
    Konstantinov, Andrei V.
    NATIONAL SCIENCE REVIEW, 2019, 6 (02) : 186 - 187
  • [6] Interpreting and Stabilizing Machine-Learning Parametrizations of Convection
    Brenowitz, Noah D.
    Beucler, Tom
    Pritchard, Michael
    Bretherton, Christopher S.
    JOURNAL OF THE ATMOSPHERIC SCIENCES, 2020, 77 (12) : 4357 - 4375
  • [7] Interpreting Deep Learning Models for Multimodal Neuroimaging
    Mueller, K. R.
    Hofmann, S. M.
    2023 11TH INTERNATIONAL WINTER CONFERENCE ON BRAIN-COMPUTER INTERFACE, BCI, 2023
  • [8] Certified Machine-Learning Models
    Damiani, Ernesto
    Ardagna, Claudio A.
    SOFSEM 2020: THEORY AND PRACTICE OF COMPUTER SCIENCE, 2020, 12011 : 3 - 15
  • [9] Defining "Better Prediction" by Machine-Learning Models Toward Clinical Application
    Hamaya, Rikuta
    Sahashi, Yuki
    Kagiyama, Nobuyuki
    JACC-CARDIOVASCULAR IMAGING, 2022, 15 (03) : 550 - 550