Decoding of selective attention to continuous speech from the human auditory brainstem response

Cited by: 44
Authors
Etard, Octave [1,2]
Kegler, Mikolaj [1,2]
Braiman, Chananel [3]
Forte, Antonio Elia [1,2,4]
Reichenbach, Tobias [1,2]
Affiliations
[1] Imperial Coll London, Dept Bioengn, South Kensington Campus, London SW7 2AZ, England
[2] Imperial Coll London, Ctr Neurotechnol, South Kensington Campus, London SW7 2AZ, England
[3] Weill Cornell Med Coll, Triinst Training Program Computat Biol & Med, New York, NY 10065 USA
[4] Harvard Univ, John A Paulson Sch Engn & Appl Sci, Cambridge, MA 02138 USA
Funding
UK Engineering and Physical Sciences Research Council; Wellcome Trust (UK); US National Science Foundation;
Keywords
Complex auditory brainstem response; Natural speech; Auditory attention decoding; COCKTAIL PARTY; COMPUTER-INTERFACE; EEG; NOISE; MEG;
DOI
10.1016/j.neuroimage.2019.06.029
Chinese Library Classification (CLC) number
Q189 [Neuroscience]
Discipline classification code
071006
Abstract
Humans are highly skilled at analysing complex acoustic scenes. The segregation of different acoustic streams and the formation of corresponding neural representations are mostly attributed to the auditory cortex. Decoding of selective attention from neuroimaging has therefore focussed on cortical responses to sound. However, the auditory brainstem response to speech is modulated by selective attention as well, as recently shown through measuring the brainstem's response to running speech. Although the response of the auditory brainstem has a smaller magnitude than that of the auditory cortex, it occurs at much higher frequencies and therefore has a higher information rate. Here we develop statistical models for extracting the brainstem response from multichannel scalp recordings and for analysing the attentional modulation according to the focus of attention. We demonstrate that the attentional modulation of the brainstem response to speech can be employed to decode the attentional focus of a listener from short measurements of 10 s or less in duration. The decoding remains accurate when obtained from only three EEG channels. We further show that out-of-the-box decoding employing subject-independent models, as well as decoding that is independent of the specific attended speaker, achieves similar accuracy. These results open up new avenues for investigating the neural mechanisms for selective attention in the brainstem and for developing efficient auditory brain-computer interfaces.
Pages: 1-11
Number of pages: 11
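For illustration only, the decoding approach summarised in the abstract can be sketched as a regularised linear backward model: time-lagged multichannel EEG is mapped onto the fundamental waveform of the attended speech, and attention in a short test segment is decoded by comparing the reconstruction's correlation with the fundamental waveform of each competing speaker. The Python sketch below is not the authors' published pipeline; the function names, lag range, regularisation parameter, and synthetic data are assumptions made for illustration, and in practice the fundamental waveforms would be extracted from the speech audio beforehand.

import numpy as np


def lagged_design(eeg, lags):
    """Build a time-lagged design matrix from multichannel EEG.

    eeg  : array of shape (n_samples, n_channels)
    lags : sequence of integer sample lags (positive lag = EEG sample
           occurring `lag` samples after the corresponding stimulus sample)
    Returns an array of shape (n_samples, n_channels * len(lags)).
    """
    n_samples, n_channels = eeg.shape
    X = np.zeros((n_samples, n_channels * len(lags)))
    for i, lag in enumerate(lags):
        shifted = np.roll(eeg, -lag, axis=0)
        # zero out the samples that wrapped around the array edges
        if lag > 0:
            shifted[-lag:, :] = 0.0
        elif lag < 0:
            shifted[:-lag, :] = 0.0
        X[:, i * n_channels:(i + 1) * n_channels] = shifted
    return X


def fit_backward_model(eeg_train, fundamental_train, lags, alpha=1.0):
    """Ridge regression mapping lagged EEG onto the attended speaker's
    fundamental waveform (a backward / reconstruction model)."""
    X = lagged_design(eeg_train, lags)
    gram = X.T @ X
    weights = np.linalg.solve(gram + alpha * np.eye(gram.shape[0]),
                              X.T @ fundamental_train)
    return weights


def decode_attention(eeg_segment, fund_a, fund_b, weights, lags):
    """Decode which of two competing speakers is attended in a short EEG
    segment: reconstruct the fundamental waveform from the EEG and compare
    its correlation with each speaker's fundamental waveform."""
    reconstruction = lagged_design(eeg_segment, lags) @ weights
    r_a = np.corrcoef(reconstruction, fund_a)[0, 1]
    r_b = np.corrcoef(reconstruction, fund_b)[0, 1]
    return "A" if r_a > r_b else "B"


if __name__ == "__main__":
    fs = 1000                   # assumed sampling rate (Hz)
    lags = range(0, 15)         # assumed 0-14 ms response latencies
    rng = np.random.default_rng(0)

    # Synthetic stand-ins for real data: three EEG channels containing a
    # weak, delayed copy of the attended (A) fundamental waveform in noise.
    fund_a = rng.standard_normal(60 * fs)
    fund_b = rng.standard_normal(60 * fs)
    eeg = 0.05 * np.column_stack([np.roll(fund_a, 7)] * 3) \
          + rng.standard_normal((60 * fs, 3))

    w = fit_backward_model(eeg[:50 * fs], fund_a[:50 * fs], lags, alpha=10.0)
    segment = slice(50 * fs, 60 * fs)   # a 10 s test segment
    print(decode_attention(eeg[segment], fund_a[segment], fund_b[segment],
                           w, lags))

On held-out synthetic data the script prints the attended speaker; with real recordings the same two-step structure (fit a decoder on training data, then compare correlations on short segments) would be evaluated with cross-validation, which this sketch omits.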