Decoding EEG Brain Activity for Multi-Modal Natural Language Processing

Cited by: 16
Authors
Hollenstein, Nora [1 ]
Renggli, Cedric [2 ]
Glaus, Benjamin [2 ]
Barrett, Maria [3 ]
Troendle, Marius [4 ]
Langer, Nicolas [4 ]
Zhang, Ce [2 ]
Affiliations
[1] Univ Copenhagen, Dept Nord Studies & Linguist, Copenhagen, Denmark
[2] Swiss Fed Inst Technol, Dept Comp Sci, Zurich, Switzerland
[3] IT Univ Copenhagen, Dept Comp Sci, Copenhagen, Denmark
[4] Univ Zurich, Dept Psychol, Zurich, Switzerland
Source
FRONTIERS IN HUMAN NEUROSCIENCE | 2021, Vol. 15
Keywords
EEG; natural language processing; frequency bands; brain activity; machine learning; multi-modal learning; physiological data; neural network; REGRESSION-BASED ESTIMATION; COGNITIVE NEUROSCIENCE; EYE-MOVEMENTS; THETA; SPEECH; NEUROBIOLOGY; OSCILLATIONS; RESPONSES; MODELS
DOI
10.3389/fnhum.2021.659410
Chinese Library Classification
Q189 [Neuroscience]
Discipline Code
071006
Abstract
Until recently, human behavioral data from reading has mainly been of interest to researchers seeking to understand human cognition. However, these human language processing signals can also benefit machine learning-based natural language processing (NLP) tasks. Using EEG brain activity for this purpose remains largely unexplored. In this paper, we present the first large-scale study to systematically analyze the potential of EEG brain activity data for improving NLP tasks, with a special focus on which features of the signal are most beneficial. We present a multi-modal machine learning architecture that learns jointly from textual input and from EEG features. We find that filtering the EEG signals into frequency bands is more beneficial than using the broadband signal. Moreover, for a range of word embedding types, EEG data improves binary and ternary sentiment classification and outperforms multiple baselines. For more complex tasks such as relation detection, only the contextualized BERT embeddings outperform the baselines in our experiments, which calls for further research. Finally, EEG data proves particularly promising when limited training data is available.
Pages: 19
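
To make the abstract's two main ideas concrete, here is a minimal, illustrative sketch (not the authors' implementation): band-pass filtering a word-aligned EEG segment into the classical frequency bands, reducing each band to log power per channel, and fusing the resulting vector with a word embedding in a small joint classifier. The sampling rate, band edges, channel count, embedding dimension, and network shape are all assumptions chosen for illustration.

# Hypothetical sketch of frequency-band EEG features + text/EEG fusion.
# All constants below are assumptions, not values taken from the paper.
import numpy as np
from scipy.signal import butter, filtfilt
import torch
import torch.nn as nn

FS = 500  # assumed EEG sampling rate (Hz)
BANDS = {"theta": (4.0, 8.0), "alpha": (8.5, 13.0),
         "beta": (13.5, 30.0), "gamma": (30.5, 49.0)}  # assumed band edges

def band_power_features(eeg, fs=FS):
    # eeg: (n_channels, n_samples) segment aligned to one word.
    # Returns log band power per channel, concatenated over bands.
    feats = []
    for lo, hi in BANDS.values():
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        filtered = filtfilt(b, a, eeg, axis=-1)  # zero-phase band-pass
        feats.append(np.log(np.mean(filtered ** 2, axis=-1) + 1e-12))
    return np.concatenate(feats)  # shape: (n_bands * n_channels,)

class JointTextEEGClassifier(nn.Module):
    # Late fusion: concatenate a word embedding with EEG band features.
    def __init__(self, text_dim=300, eeg_dim=4 * 105, hidden=128, n_classes=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(text_dim + eeg_dim, hidden),
            nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(hidden, n_classes),  # e.g. ternary sentiment
        )

    def forward(self, text_emb, eeg_feats):
        return self.net(torch.cat([text_emb, eeg_feats], dim=-1))

# Usage with random stand-in data (assumed 105 channels, 0.4 s window):
eeg_segment = np.random.randn(105, int(0.4 * FS))
eeg_vec = torch.tensor(band_power_features(eeg_segment), dtype=torch.float32)
text_vec = torch.randn(300)  # stand-in for a pre-trained word embedding
logits = JointTextEEGClassifier()(text_vec.unsqueeze(0), eeg_vec.unsqueeze(0))

The concatenation-based fusion above is only one plausible design; what the abstract emphasizes is the feature side, namely that per-band EEG features outperform the broadband signal when fed into such a model.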