Decoding EEG Brain Activity for Multi-Modal Natural Language Processing

Cited by: 16
Authors:
Hollenstein, Nora [1]
Renggli, Cedric [2]
Glaus, Benjamin [2]
Barrett, Maria [3]
Troendle, Marius [4]
Langer, Nicolas [4]
Zhang, Ce [2]
Affiliations:
[1] Univ Copenhagen, Dept Nord Studies & Linguist, Copenhagen, Denmark
[2] Swiss Fed Inst Technol, Dept Comp Sci, Zurich, Switzerland
[3] IT Univ Copenhagen, Dept Comp Sci, Copenhagen, Denmark
[4] Univ Zurich, Dept Psychol, Zurich, Switzerland
Source:
FRONTIERS IN HUMAN NEUROSCIENCE | 2021, Vol. 15
Keywords:
EEG; natural language processing; frequency bands; brain activity; machine learning; multi-modal learning; physiological data; neural network; REGRESSION-BASED ESTIMATION; COGNITIVE NEUROSCIENCE; EYE-MOVEMENTS; THETA; SPEECH; NEUROBIOLOGY; OSCILLATIONS; RESPONSES; MODELS
DOI:
10.3389/fnhum.2021.659410
CLC classification: Q189 [Neuroscience]
Discipline code: 071006
Abstract:
Until recently, human behavioral data from reading was mainly of interest to researchers seeking to understand human cognition. However, these human language processing signals can also benefit machine learning-based natural language processing tasks. Using EEG brain activity for this purpose remains largely unexplored. In this paper, we present the first large-scale study to systematically analyze the potential of EEG brain activity data for improving natural language processing tasks, with a special focus on which features of the signal are most beneficial. We present a multi-modal machine learning architecture that learns jointly from textual input and from EEG features. We find that filtering the EEG signals into frequency bands is more beneficial than using the broadband signal. Moreover, for a range of word embedding types, EEG data improves binary and ternary sentiment classification and outperforms multiple baselines. For more complex tasks such as relation detection, only the contextualized BERT embeddings outperform the baselines in our experiments, which calls for further research. Finally, EEG data proves particularly promising when limited training data is available.
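The abstract reports that filtering EEG into frequency bands is more beneficial than using the broadband signal. The following is a minimal, hypothetical sketch (not the paper's actual pipeline) of extracting per-band spectral power from an EEG signal with NumPy; the band boundaries, sampling rate, and synthetic test signal are all assumptions for illustration:

```python
import numpy as np

# Canonical EEG frequency bands in Hz (illustrative boundaries;
# exact cutoffs vary across studies).
BANDS = {
    "theta": (4.0, 8.0),
    "alpha": (8.5, 13.0),
    "beta": (13.5, 30.0),
    "gamma": (30.5, 49.5),
}

def band_powers(signal, fs):
    """Return mean spectral power per frequency band for a 1-D signal."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    return {
        name: power[(freqs >= lo) & (freqs <= hi)].mean()
        for name, (lo, hi) in BANDS.items()
    }

# Synthetic example: a dominant 10 Hz (alpha-range) oscillation plus noise.
fs = 500  # assumed sampling rate in Hz
t = np.arange(0, 2.0, 1.0 / fs)
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(len(t))

powers = band_powers(eeg, fs)
print(max(powers, key=powers.get))  # the alpha band dominates here
```

In a multi-modal setup like the one described, such per-band feature vectors would be fed into the model alongside the word embeddings, rather than the raw broadband signal.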
Pages: 19
Related Papers (50 total):
  • [1] Multi-modal Natural Language Processing for Stock Price Prediction
    Taylor, Kevin
    Ng, Jerry
    INTELLIGENT SYSTEMS AND APPLICATIONS, VOL 4, INTELLISYS 2024, 2024, 1068 : 409 - 419
  • [2] Multi-Modal Integration of EEG-fNIRS for Characterization of Brain Activity Evoked by Preferred Music
    Qiu, Lina
    Zhong, Yongshi
    Xie, Qiuyou
    He, Zhipeng
    Wang, Xiaoyun
    Chen, Yingyue
    Zhan, Chang'an A.
    Pan, Jiahui
    FRONTIERS IN NEUROROBOTICS, 2022, 16
  • [3] Multi-modal Decoding of Reach-to-Grasping from EEG and EMG via Neural Networks
    Borra, Davide
    Fraternali, Matteo
    Ravanelli, Mirco
    Magosso, Elisa
    ARTIFICIAL NEURAL NETWORKS IN PATTERN RECOGNITION, ANNPR 2024, 2024, 15154 : 168 - 179
  • [4] Simultaneous Scalp Electroencephalography (EEG), Electromyography (EMG), and Whole-body Segmental Inertial Recording for Multi-modal Neural Decoding
    Bulea, Thomas C.
    Kilicarslan, Atilla
    Ozdemir, Recep
    Paloski, William H.
    Contreras-Vidal, Jose L.
    JOVE-JOURNAL OF VISUALIZED EXPERIMENTS, 2013, (77):
  • [5] Confused or not: decoding brain activity and recognizing confusion in reasoning learning using EEG
    Xu, Tao
    Wang, Jiabao
    Zhang, Gaotian
    Zhang, Ling
    Zhou, Yun
    JOURNAL OF NEURAL ENGINEERING, 2023, 20 (02)
  • [6] Decoding of multi-modal signals for motor imagery based on window positioning
    Meng, Yinghui
    Su, Yaru
    Li, Duan
    Nan, Jiaofen
    Xia, Yongquan
    SIGNAL IMAGE AND VIDEO PROCESSING, 2025, 19 (03)
  • [7] Neural Decoding of Multi-Modal Imagery Behavior Focusing on Temporal Complexity
    Furutani, Naoki
    Nariya, Yuta
    Takahashi, Tetsuya
    Ito, Haruka
    Yoshimura, Yuko
    Hiraishi, Hirotoshi
    Hasegawa, Chiaki
    Ikeda, Takashi
    Kikuchi, Mitsuru
    FRONTIERS IN PSYCHIATRY, 2020, 11
  • [8] Multi-modal Affect Induction for Affective Brain-Computer Interfaces
    Muhl, Christian
    van den Broek, Egon L.
    Brouwer, Anne-Marie
    Nijboer, Femke
    van Wouwe, Nelleke
    Heylen, Dirk
    AFFECTIVE COMPUTING AND INTELLIGENT INTERACTION, PT I, 2011, 6974 : 235 - +
  • [9] Multi-modal analysis of infant cry types characterization: Acoustics, body language and brain signals
    Laguna, Ana
    Pusil, Sandra
    Bazan, Angel
    Zegarra-Valdivia, Jonathan Adrian
    Paltrinieri, Anna Lucia
    Piras, Paolo
    Palomares i Perera, Claudia
    Veglia, Alexandra Pardos
    Garcia-Algar, Oscar
    Orlandi, Silvia
    COMPUTERS IN BIOLOGY AND MEDICINE, 2023, 167
  • [10] Bayesian analysis of multi-modal data and brain imaging
    Assadi, A
    Eghbalnia, H
    Backonja, M
    Wakai, R
    Rutecki, P
    Haughton, V
    MEDICAL IMAGING 2000: IMAGE PROCESSING, PTS 1 AND 2, 2000, 3979 : 1160 - 1167