
Decoding of selective attention to continuous speech from the human auditory brainstem response.

Affiliations

Department of Bioengineering and Centre for Neurotechnology, Imperial College London, South Kensington Campus, SW7 2AZ, London, UK.

Tri-Institutional Training Program in Computational Biology and Medicine, Weill Cornell Medical College, New York, NY, 10065, USA.

Publication Information

Neuroimage. 2019 Oct 15;200:1-11. doi: 10.1016/j.neuroimage.2019.06.029. Epub 2019 Jun 15.

Abstract

Humans are highly skilled at analysing complex acoustic scenes. The segregation of different acoustic streams and the formation of corresponding neural representations is mostly attributed to the auditory cortex. Decoding of selective attention from neuroimaging has therefore focussed on cortical responses to sound. However, the auditory brainstem response to speech is modulated by selective attention as well, as recently shown through measuring the brainstem's response to running speech. Although the response of the auditory brainstem has a smaller magnitude than that of the auditory cortex, it occurs at much higher frequencies and therefore has a higher information rate. Here we develop statistical models for extracting the brainstem response from multi-channel scalp recordings and for analysing the attentional modulation according to the focus of attention. We demonstrate that the attentional modulation of the brainstem response to speech can be employed to decode the attentional focus of a listener from short measurements of 10 s or less in duration. The decoding remains accurate when obtained from three EEG channels only. We further show how out-of-the-box decoding that employs subject-independent models, as well as decoding that is independent of the specific attended speaker is capable of achieving similar accuracy. These results open up new avenues for investigating the neural mechanisms for selective attention in the brainstem and for developing efficient auditory brain-computer interfaces.
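The abstract describes decoding a listener's attentional focus by comparing the measured neural response against the features of each competing speech stream. As a minimal illustration of this idea, the sketch below assigns attention to whichever of two candidate speech features correlates more strongly with an EEG segment. This is a simplified toy model, not the authors' actual method: the paper fits statistical models to multi-channel scalp recordings of the brainstem response, whereas here a single synthetic channel and plain Pearson correlation stand in for that pipeline, and all signal names are invented for the example.

```python
import numpy as np

def decode_attention(eeg, feat_a, feat_b):
    """Assign attention to whichever speech feature correlates more
    strongly (Pearson correlation) with the EEG segment."""
    def corr(x, y):
        x = x - x.mean()
        y = y - y.mean()
        return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))
    return "A" if corr(eeg, feat_a) > corr(eeg, feat_b) else "B"

# Synthetic demo: the "EEG" contains a weak copy of speaker A's feature.
rng = np.random.default_rng(0)
n = 1000                            # samples in a short decoding window
feat_a = rng.standard_normal(n)     # stand-in for speaker A's speech feature
feat_b = rng.standard_normal(n)     # stand-in for speaker B's speech feature
eeg = 0.3 * feat_a + rng.standard_normal(n)  # attended response + noise

print(decode_attention(eeg, feat_a, feat_b))
```

With the fixed seed above, the decoder identifies speaker A as attended, since the correlation with `feat_a` is well above the chance-level correlation with `feat_b`. The paper's contribution is making this kind of decision reliable from windows of 10 s or less, and from as few as three EEG channels, by exploiting the high-frequency brainstem response rather than slower cortical signals.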

