Decoding of selective attention to continuous speech from the human auditory brainstem response

Cited: 44
Authors
Etard, Octave [1 ,2 ]
Kegler, Mikolaj [1 ,2 ]
Braiman, Chananel [3 ]
Forte, Antonio Elia [1 ,2 ,4 ]
Reichenbach, Tobias [1 ,2 ]
Affiliations
[1] Imperial Coll London, Dept Bioengn, South Kensington Campus, London SW7 2AZ, England
[2] Imperial Coll London, Ctr Neurotechnol, South Kensington Campus, London SW7 2AZ, England
[3] Weill Cornell Med Coll, Triinst Training Program Computat Biol & Med, New York, NY 10065 USA
[4] Harvard Univ, John A Paulson Sch Engn & Appl Sci, Cambridge, MA 02138 USA
Funding
Wellcome Trust (UK); Engineering and Physical Sciences Research Council (UK); National Science Foundation (USA);
Keywords
Complex auditory brainstem response; Natural speech; Auditory attention decoding; COCKTAIL PARTY; COMPUTER-INTERFACE; EEG; NOISE; MEG;
DOI
10.1016/j.neuroimage.2019.06.029
Chinese Library Classification
Q189 [Neuroscience];
Subject Classification Code
071006;
Abstract
Humans are highly skilled at analysing complex acoustic scenes. The segregation of different acoustic streams and the formation of corresponding neural representations is mostly attributed to the auditory cortex. Decoding of selective attention from neuroimaging has therefore focussed on cortical responses to sound. However, the auditory brainstem response to speech is modulated by selective attention as well, as recently shown through measuring the brainstem's response to running speech. Although the response of the auditory brainstem has a smaller magnitude than that of the auditory cortex, it occurs at much higher frequencies and therefore has a higher information rate. Here we develop statistical models for extracting the brainstem response from multichannel scalp recordings and for analysing the attentional modulation according to the focus of attention. We demonstrate that the attentional modulation of the brainstem response to speech can be employed to decode the attentional focus of a listener from short measurements of 10 s or less in duration. The decoding remains accurate when obtained from three EEG channels only. We further show that out-of-the-box decoding that employs subject-independent models, as well as decoding that is independent of the specific attended speaker, can achieve similar accuracy. These results open up new avenues for investigating the neural mechanisms for selective attention in the brainstem and for developing efficient auditory brain-computer interfaces.
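The decoding pipeline the abstract describes can be illustrated with a minimal sketch: a regularised linear backward model reconstructs a speech feature from time-lagged multichannel EEG, and the attended speaker is then chosen as the one whose feature correlates best with the reconstruction. This is not the authors' exact implementation; the function names, the ridge regulariser, and the use of a generic "speech feature" (rather than the paper's specific brainstem-related waveform) are illustrative assumptions.

```python
import numpy as np

def lagged(eeg, lags):
    """Design matrix: stack time-lagged copies of each EEG channel."""
    n, c = eeg.shape
    X = np.zeros((n, c * len(lags)))
    for j, lag in enumerate(lags):
        shifted = np.roll(eeg, lag, axis=0)
        if lag > 0:
            shifted[:lag] = 0       # zero out samples wrapped from the end
        elif lag < 0:
            shifted[lag:] = 0
        X[:, j * c:(j + 1) * c] = shifted
    return X

def fit_decoder(eeg, feature, lags, reg=1e3):
    """Ridge-regularised least squares mapping EEG -> speech feature."""
    X = lagged(eeg, lags)
    return np.linalg.solve(X.T @ X + reg * np.eye(X.shape[1]), X.T @ feature)

def decode_attention(eeg, feat_a, feat_b, w, lags):
    """Attribute attention to the speaker whose speech feature
    correlates best with the reconstruction from the EEG."""
    rec = lagged(eeg, lags) @ w
    r_a = np.corrcoef(rec, feat_a)[0, 1]
    r_b = np.corrcoef(rec, feat_b)[0, 1]
    return "A" if r_a > r_b else "B"
```

On synthetic data where the EEG channels carry a noisy copy of speaker A's feature, fitting the decoder on a training segment and applying it to a short test segment mimics the paper's decoding from measurements of a few seconds.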
Pages: 1-11 (11 pages)
Related Papers
50 records total
  • [1] The human auditory brainstem response to running speech reveals a subcortical mechanism for selective attention
    Forte, Antonio Elia
    Etard, Octave
    Reichenbach, Tobias
    ELIFE, 2017, 6
  • [2] Effect of selective attention on auditory brainstem response
    Kumar, Sathish
    Nayak, Srikanth
    Muthu, Arivudai Nambi Pitchai
    HEARING BALANCE AND COMMUNICATION, 2023, 21 (02) : 139 - 147
  • [3] Computational modeling of the auditory brainstem response to continuous speech
    Saiz-Alia, Marina
    Reichenbach, Tobias
    JOURNAL OF NEURAL ENGINEERING, 2020, 17 (03)
  • [4] Auditory Brainstem Responses to Continuous Natural Speech in Human Listeners
    Maddox, Ross K.
    Lee, Adrian K. C.
    ENEURO, 2018, 5 (01)
  • [5] Emotion and the auditory brainstem response to speech
    Wang, Jade Q.
    Nicol, Trent
    Skoe, Erika
    Sams, Mikko
    Kraus, Nina
    NEUROSCIENCE LETTERS, 2010, 469 (03) : 319 - 323
  • [6] Speech Evoked Auditory Brainstem Response in Stuttering
    Tahaei, Ali Akbar
    Ashayeri, Hassan
    Pourbakht, Akram
    Kamali, Mohammad
    SCIENTIFICA, 2014, 2014
  • [7] Exposing distinct subcortical components of the auditory brainstem response evoked by continuous naturalistic speech
    Polonenko, Melissa J.
    Maddox, Ross K.
    ELIFE, 2021, 10 : 1 - 67
  • [8] The Auditory-Brainstem Response to Continuous, Non-repetitive Speech Is Modulated by the Speech Envelope and Reflects Speech Processing
    Reichenbach, Chagit S.
    Braiman, Chananel
    Schiff, Nicholas D.
    Hudspeth, A. J.
    Reichenbach, Tobias
    FRONTIERS IN COMPUTATIONAL NEUROSCIENCE, 2016, 10
  • [9] Characteristics of Speech Auditory Brainstem Response in Preschool Children With Attention-Deficit/Hyperactivity Disorder
    Sun, Yuying
    Zhou, Jia
    Zhu, Huiqin
    Liu, Panting
    Lin, Huanxi
    Xiao, Zhenglu
    Yu, Xinyue
    Qian, Jun
    Tong, Meiling
    Chi, Xia
    Hong, Qin
    JOURNAL OF SPEECH LANGUAGE AND HEARING RESEARCH, 2024, 67 (09): : 3163 - 3177
  • [10] Neural speech tracking and auditory attention decoding in everyday life
    Straetmans, Lisa
    Adiloglu, Kamil
    Debener, Stefan
    FRONTIERS IN HUMAN NEUROSCIENCE, 2024, 18