Multi-View Multi-Label Fine-Grained Emotion Decoding From Human Brain Activity

Cited by: 8
Authors
Fu, Kaicheng [1 ,2 ]
Du, Changde [1 ]
Wang, Shengpei [1 ]
He, Huiguang [1 ,2 ,3 ]
Affiliations
[1] Chinese Acad Sci, Res Ctr Brain Inspired Intelligence, Inst Automat, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China
[2] Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing 100049, Peoples R China
[3] Chinese Acad Sci, Ctr Excellence Brain Sci & Intelligence Technol, Beijing 100190, Peoples R China
Funding
National Natural Science Foundation of China; Beijing Natural Science Foundation
Keywords
Decoding; Brain modeling; Functional magnetic resonance imaging; Predictive models; Emotion recognition; Dimensionality reduction; Pattern recognition; Fine-grained emotion decoding; multi-label learning; multi-view learning; product of experts (PoEs); variational autoencoder; REPRESENTATION; PARCELLATION; CATEGORIES;
DOI
10.1109/TNNLS.2022.3217767
CLC Classification
TP18 [Artificial intelligence theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Decoding emotional states from human brain activity plays an important role in brain-computer interfaces. Existing emotion decoding methods still have two main limitations: one is that they decode only a single, coarse-grained emotion category from a brain activity pattern, which is inconsistent with the complex emotional expression of humans; the other is that they ignore the discrepancy in emotion expression between the left and right hemispheres of the human brain. In this article, we propose a novel multi-view multi-label hybrid model for fine-grained emotion decoding (up to 80 emotion categories) that can learn expressive neural representations and predict multiple emotional states simultaneously. Specifically, the generative component of our hybrid model is parameterized by a multi-view variational autoencoder, in which we regard the brain activity of the left and right hemispheres and their difference as three distinct views and use a product-of-experts mechanism in its inference network. The discriminative component of our hybrid model is implemented by a multi-label classification network with an asymmetric focal loss. For more accurate emotion decoding, we first adopt a label-aware module for emotion-specific neural representation learning and then model the dependency among emotional states with a masked self-attention mechanism. Extensive experiments on two visually evoked emotional datasets show the superiority of our method.
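The product-of-experts fusion named in the abstract combines per-view Gaussian posteriors by precision weighting: the joint precision is the sum of the experts' precisions (plus a standard-normal prior expert), and the joint mean is the precision-weighted average of the experts' means. A minimal NumPy sketch of that fusion step, with an illustrative function name and interface not taken from the paper:

```python
import numpy as np

def product_of_experts(mus, logvars):
    """Fuse per-view diagonal-Gaussian posteriors N(mu_v, var_v)
    into a joint Gaussian via a product of experts, including a
    standard-normal prior expert N(0, I)."""
    mus = np.asarray(mus, dtype=float)          # shape (V, D)
    logvars = np.asarray(logvars, dtype=float)  # shape (V, D)
    # Prepend the prior expert: mu = 0, logvar = 0 (i.e., var = 1).
    mus = np.vstack([np.zeros((1, mus.shape[1])), mus])
    logvars = np.vstack([np.zeros((1, logvars.shape[1])), logvars])
    precisions = np.exp(-logvars)               # 1 / var per expert
    joint_var = 1.0 / precisions.sum(axis=0)    # combined variance
    joint_mu = joint_var * (mus * precisions).sum(axis=0)
    return joint_mu, joint_var
```

For example, fusing two unit-variance experts with means 1 and 3 (plus the prior at 0) yields a joint mean of 4/3 and variance 1/3, since all three experts contribute equal precision.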
Pages: 9026-9040
Page count: 15