
On how the brain decodes vocal cues about speaker confidence.

Author Information

Jiang Xiaoming, Pell Marc D

Affiliation

School of Communication Sciences and Disorders and Center for Research on Brain, Language and Music, McGill University, Montréal, Canada.

Publication Information

Cortex. 2015 May;66:9-34. doi: 10.1016/j.cortex.2015.02.002. Epub 2015 Feb 21.

Abstract

In speech communication, listeners must accurately decode vocal cues that refer to the speaker's mental state, such as their confidence or 'feeling of knowing'. However, the time course and neural mechanisms associated with online inferences about speaker confidence are unclear. Here, we used event-related potentials (ERPs) to examine the temporal neural dynamics underlying a listener's ability to infer speaker confidence from vocal cues during speech processing. We recorded listeners' real-time brain responses while they evaluated statements in which the speaker's tone of voice conveyed one of three levels of confidence (confident, close-to-confident, unconfident) or was spoken in a neutral manner. Neural responses time-locked to event onset showed that the perceived level of speaker confidence could be differentiated at distinct time points during speech processing: unconfident expressions elicited a weaker P2 than all other expressions of confidence (or neutral-intending utterances), whereas close-to-confident expressions elicited a reduced negative response in the 330-500 msec and 550-740 msec time windows. Neutral-intending expressions, which were also perceived as relatively confident, elicited a later and larger sustained positivity than all other expressions in the 980-1270 msec window for this task. These findings provide the first evidence of how quickly the brain responds to vocal cues signifying the extent of a speaker's confidence during online speech comprehension: first, a rough dissociation between unconfident and confident voices occurs as early as 200 msec after speech onset; at a later stage, the exact level of speaker confidence (i.e., close-to-confident, very confident) is evaluated via an inferential system to determine the speaker's meaning under the current task settings. These findings extend three-stage models of how vocal emotion cues are processed in speech comprehension (e.g., Schirmer & Kotz, 2006) by revealing how a speaker's mental state (i.e., feeling of knowing) is simultaneously inferred from vocal expressions.

