Extraction and Automatic Grouping of Joint and Individual Sources in Multisubject fMRI Data Using Higher Order Cumulants

Cited by: 6
Authors
Pakravan, Mansooreh [1 ]
Shamsollahi, Mohammad Bagher [1 ]
Affiliations
[1] Sharif Univ Technol, Dept Elect Engn, Biomed Signal & Image Proc Lab, Tehran 113569363, Iran
Keywords
Brain signals; functional magnetic resonance imaging (fMRI); joint and individual source extraction; multi-subject data analysis; thin independent component analysis (Thin ICA); blind source separation; independent component; brain; ICA; variability; algorithms; simulation; precuneus; model; gyrus
DOI
10.1109/JBHI.2018.2840085
Chinese Library Classification
TP [automation technology, computer technology]
Discipline classification code
0812
Abstract
The joint analysis of multiple data sets to extract their interdependency information has wide applications in biomedical and health informatics. In this paper, we propose an algorithm that extracts joint and individual sources from multisubject data sets using a deflation-based procedure, referred to as joint/individual thin independent component analysis (JI-ThICA). The algorithm is based on two cost functions that utilize higher order cumulants to extract joint and individual sources. Joint sources are discriminated by fusing the signals of all subjects, whereas individual sources are extracted separately for each subject. Furthermore, the JI-ThICA algorithm estimates the number of joint sources by applying a simple and efficient strategy to determine the type of each source (joint or individual). The algorithm also automatically categorizes similar sources across data sets through an optimization process. The proposed algorithm is evaluated on simulated functional magnetic resonance imaging (fMRI) multisubject data sets, and its performance is compared with existing alternatives. We investigate clean and noisy fMRI signals and consider two source models. Our results reveal that the proposed algorithm outperforms its alternatives in terms of the mean joint signal-to-interference ratio. We also apply the proposed algorithm to a publicly available real fMRI multisubject data set acquired during naturalistic auditory experience. The extracted results are consistent with previous studies on naturalistic audio listening and with the results of a recent study that investigated this data set, demonstrating that the JI-ThICA algorithm can extract reliable and meaningful information from multisubject fMRI data.
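The deflation-based, cumulant-driven extraction described in the abstract can be illustrated with a minimal sketch. The code below is not the authors' JI-ThICA: it only shows a generic deflation scheme that extracts one source at a time by maximizing the fourth-order cumulant (kurtosis) of a whitened mixture, with Gram-Schmidt deflation against previously extracted directions. The paper's joint/individual cost functions, its joint-source counting rule, and its cross-subject grouping step are not reproduced here, and all function and variable names (whiten, extract_sources) are illustrative assumptions.

```python
# Illustrative sketch only: generic deflation ICA via fourth-order cumulant
# (kurtosis) maximization on whitened data. NOT the paper's JI-ThICA.
import numpy as np

def whiten(X):
    """Center and whiten data X (channels x samples)."""
    X = X - X.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(np.cov(X))
    return (E @ np.diag(1.0 / np.sqrt(d + 1e-12)) @ E.T) @ X

def extract_sources(X, n_sources, n_iter=200, seed=0):
    """Extract sources one by one with a fixed-point kurtosis contrast."""
    rng = np.random.default_rng(seed)
    Z = whiten(X)                          # whitened data, channels x samples
    n = Z.shape[0]
    W = np.zeros((n_sources, n))           # unmixing vectors, one per row
    for k in range(n_sources):
        w = rng.standard_normal(n)
        w /= np.linalg.norm(w)
        for _ in range(n_iter):
            y = w @ Z                      # current source estimate
            # fixed-point update for the kurtosis (4th-order cumulant) contrast
            w_new = (Z * y**3).mean(axis=1) - 3.0 * w
            # deflation: project out directions of already-extracted sources
            w_new -= W[:k].T @ (W[:k] @ w_new)
            w_new /= np.linalg.norm(w_new)
            converged = np.abs(np.abs(w_new @ w) - 1.0) < 1e-9
            w = w_new
            if converged:
                break
        W[k] = w
    return W @ Z, W                        # estimated sources, unmixing matrix

if __name__ == "__main__":
    # Toy demo: two non-Gaussian sources mixed by a random matrix.
    rng = np.random.default_rng(1)
    S = np.vstack([np.sign(rng.standard_normal(5000)),
                   rng.laplace(size=5000)])
    A = rng.standard_normal((2, 2))
    Y, W = extract_sources(A @ S, n_sources=2)
    print("estimated sources shape:", Y.shape)
```

In a multisubject setting along the lines of the abstract, one whitened data block per subject would be formed; the joint cost would fuse all subjects' signals before this kind of cumulant maximization, while the individual cost would operate on each subject separately. The sketch above covers only the single-dataset deflation step.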
Pages: 744-757
Number of pages: 14