

Timing of brain entrainment to the speech envelope during speaking, listening and self-listening.

Affiliations

MRC Cognition and Brain Sciences Unit, University of Cambridge, UK; Department of Language Studies, University of Toronto Scarborough, Canada; Department of Psychology, University of Toronto Scarborough, Canada.

MRC Cognition and Brain Sciences Unit, University of Cambridge, UK.

Publication

Cognition. 2022 Jul;224:105051. doi: 10.1016/j.cognition.2022.105051. Epub 2022 Feb 24.

Abstract

This study investigates the dynamics of speech envelope tracking during speech production, listening and self-listening. We use a paradigm in which participants listen to natural speech (Listening), produce natural speech (Speech Production), and listen to the playback of their own speech (Self-Listening), all while their neural activity is recorded with EEG. After time-locking EEG data collection and auditory recording and playback, we used a Gaussian copula mutual information measure to estimate the relationship between information content in the EEG and auditory signals. In the 2-10 Hz frequency range, we identified different latencies for maximal speech envelope tracking during speech production and speech perception. Maximal speech tracking takes place approximately 110 ms after auditory presentation during perception and 25 ms before vocalisation during speech production. These results describe a specific timeline for speech tracking in speakers and listeners, in line with the idea of a speech chain and, hence, with delays in communication.

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a2c3/9112165/6686741ef324/gr1.jpg

Similar articles

3
θ-Band Cortical Tracking of the Speech Envelope Shows the Linear Phase Property.
eNeuro. 2021 Aug 25;8(4). doi: 10.1523/ENEURO.0058-21.2021. Print 2021 Jul-Aug.
4
Listening effort during speech perception enhances auditory and lexical processing for non-native listeners and accents.
Cognition. 2018 Oct;179:163-170. doi: 10.1016/j.cognition.2018.06.001. Epub 2018 Jun 26.
5
Neocortical activity tracks the hierarchical linguistic structures of self-produced speech during reading aloud.
Neuroimage. 2020 Aug 1;216:116788. doi: 10.1016/j.neuroimage.2020.116788. Epub 2020 Apr 26.
6
The Right Temporoparietal Junction Supports Speech Tracking During Selective Listening: Evidence from Concurrent EEG-fMRI.
J Neurosci. 2017 Nov 22;37(47):11505-11516. doi: 10.1523/JNEUROSCI.1007-17.2017. Epub 2017 Oct 23.
8
Beyond linear neural envelope tracking: a mutual information approach.
J Neural Eng. 2023 Mar 9;20(2). doi: 10.1088/1741-2552/acbe1d.

Cited by

1
Neural tracking of natural speech: an effective marker for post-stroke aphasia.
Brain Commun. 2025 Mar 10;7(2):fcaf095. doi: 10.1093/braincomms/fcaf095. eCollection 2025.
2
Opposing neural processing modes alternate rhythmically during sustained auditory attention.
Commun Biol. 2024 Sep 12;7(1):1125. doi: 10.1038/s42003-024-06834-x.
3
Speech-induced suppression during natural dialogues.
Commun Biol. 2024 Mar 8;7(1):291. doi: 10.1038/s42003-024-05945-9.

References

1
Comparison of undirected frequency-domain connectivity measures for cerebro-peripheral analysis.
Neuroimage. 2021 Dec 15;245:118660. doi: 10.1016/j.neuroimage.2021.118660. Epub 2021 Oct 29.
2
Joint recording of EEG and audio signals in hyperscanning and pseudo-hyperscanning experiments.
MethodsX. 2021 Apr 20;8:101347. doi: 10.1016/j.mex.2021.101347. eCollection 2021.
3
Speech-Induced Suppression for Delayed Auditory Feedback in Adults Who Do and Do Not Stutter.
Front Hum Neurosci. 2020 Apr 24;14:150. doi: 10.3389/fnhum.2020.00150. eCollection 2020.
4
Speech rhythms and their neural foundations.
Nat Rev Neurosci. 2020 Jun;21(6):322-334. doi: 10.1038/s41583-020-0304-4. Epub 2020 May 6.
5
The interplay of top-down focal attention and the cortical tracking of speech.
Sci Rep. 2020 Apr 24;10(1):6922. doi: 10.1038/s41598-020-63587-3.
6
Can EEG and MEG detect signals from the human cerebellum?
Neuroimage. 2020 Jul 15;215:116817. doi: 10.1016/j.neuroimage.2020.116817. Epub 2020 Apr 8.
7
Semantic Context Enhances the Early Auditory Encoding of Natural Speech.
J Neurosci. 2019 Sep 18;39(38):7564-7575. doi: 10.1523/JNEUROSCI.0584-19.2019. Epub 2019 Aug 1.
8
Auditory-Articulatory Neural Alignment between Listener and Speaker during Verbal Communication.
Cereb Cortex. 2020 Mar 14;30(3):942-951. doi: 10.1093/cercor/bhz138.
9
Spontaneous synchronization to speech reveals neural mechanisms facilitating language learning.
Nat Neurosci. 2019 Apr;22(4):627-632. doi: 10.1038/s41593-019-0353-z. Epub 2019 Mar 4.
10
PsychoPy2: Experiments in behavior made easy.
Behav Res Methods. 2019 Feb;51(1):195-203. doi: 10.3758/s13428-018-01193-y.
