EEG-based classification of natural sounds reveals specialized responses to speech and music

Cited by: 21
Authors
Zuk, Nathaniel J. [1 ,2 ]
Teoh, Emily S. [1 ,2 ,3 ]
Lalor, Edmund C. [1 ,2 ,3 ,4 ,5 ,6 ]
Affiliations
[1] Trinity Coll Dublin, Dept Elect & Elect Engn, Dublin 2, Ireland
[2] Trinity Coll Dublin, Trinity Coll Inst Neurosci, Dublin 2, Ireland
[3] Trinity Coll Dublin, Trinity Ctr Biomed Engn, Dublin 2, Ireland
[4] Univ Rochester, Dept Biomed Engn, Rochester, NY 14627 USA
[5] Univ Rochester, Med Ctr, Dept Neurosci, Rochester, NY 14627 USA
[6] Univ Rochester, Del Monte Inst Neurosci, Med Ctr, Rochester, NY 14627 USA
Keywords
EEG; Natural sounds; Biophysical model; Classification analysis; Speech; Music; HUMAN AUDITORY-CORTEX; DISTINCT CORTICAL PATHWAYS; ACOUSTIC FEATURES; NEURAL RESPONSES; REPRESENTATIONS; STATISTICS; AMPLITUDE; PATTERNS; LANGUAGE;
DOI
10.1016/j.neuroimage.2020.116558
Chinese Library Classification: Q189 [Neuroscience]
Discipline code: 071006
Abstract
Humans can easily distinguish many sounds in the environment, but speech and music are uniquely important. Previous studies, mostly using fMRI, have identified separate brain regions that respond selectively to speech and music. Yet there is little evidence that brain responses are larger and more temporally precise for human-specific sounds like speech and music than for other types of sounds, as has been found for responses to species-specific sounds in other animals. We recorded EEG as healthy adult subjects listened to various types of two-second natural sounds. By classifying each sound based on the EEG response, we found that speech, music, and impact sounds were classified better than other natural sounds. Unlike impact sounds, however, classification accuracy for speech and music dropped for synthesized sounds with identical frequency and modulation statistics derived from a subcortical model, indicating selectivity for higher-order features of these sounds. Lastly, the patterns of average power and phase consistency in the two-second EEG responses to each sound replicated the patterns of speech and music selectivity observed with classification accuracy. Together with the classification results, this suggests that the brain produces temporally individualized responses to speech and music that are stronger than its responses to other natural sounds. Beyond highlighting the importance of speech and music for the human brain, the techniques used here could provide a cost-effective, temporally precise, and efficient way to study the human brain's selectivity for speech and music in other populations.
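The core analysis described in the abstract is decoding sound identity from single-trial EEG responses and comparing cross-validated classification accuracy across sound categories. The following is a minimal sketch of that kind of analysis, not the authors' actual pipeline: the data are synthetic, the epoch dimensions and classifier choice are illustrative assumptions, and all variable names are hypothetical.

```python
# Hedged sketch: cross-validated classification of sound identity from
# EEG-like epochs. Synthetic data only; this is NOT the authors' method,
# just an illustration of the general decoding approach.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Illustrative dimensions: 4 sound categories, 30 trials each,
# 8 EEG channels, 64 time samples per two-second epoch.
n_sounds, n_trials, n_channels, n_samples = 4, 30, 8, 64

# Give each sound a distinct average evoked response (template),
# then add trial-to-trial noise around it.
templates = rng.standard_normal((n_sounds, n_channels, n_samples))
X = np.concatenate([
    templates[s] + 0.5 * rng.standard_normal((n_trials, n_channels, n_samples))
    for s in range(n_sounds)
])
y = np.repeat(np.arange(n_sounds), n_trials)

# Flatten channel x time into a feature vector per trial and estimate
# decoding accuracy with 5-fold cross-validation.
scores = cross_val_score(LinearDiscriminantAnalysis(),
                         X.reshape(len(y), -1), y, cv=5)
print(f"mean accuracy: {scores.mean():.2f} (chance = {1 / n_sounds:.2f})")
```

In an analysis of this shape, stronger and more temporally reliable evoked responses to a category (here, larger template-to-noise ratio) translate directly into higher classification accuracy for that category, which is the logic behind comparing accuracies across speech, music, and other natural sounds.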
Pages: 11