Cognitive and Affective Neuroscience Unit, University of Zurich, 8050 Zurich, Switzerland; Neuroscience Center Zurich, University of Zurich and ETH Zurich, 8057 Zurich, Switzerland.
Cognitive and Affective Neuroscience Unit, University of Zurich, 8050 Zurich, Switzerland.
Prog Neurobiol. 2022 Jul;214:102278. doi: 10.1016/j.pneurobio.2022.102278. Epub 2022 May 2.
Affect signaling in human communication involves cortico-limbic brain systems for decoding affect information, such as that expressed in vocal intonations during affective speech. Both the affecto-acoustic speech profile of speakers and the cortico-limbic affect recognition network of listeners were previously identified using non-social and non-adaptive research protocols. However, these protocols neglected the inherently socio-dyadic nature of affective communication, thus underestimating the real-time adaptive dynamics of affective speech that maximize listeners' neural effects and affect recognition. To approximate this socio-adaptive and neural context of affective communication, we used an innovative real-time neuroimaging setup that linked speakers' live affective speech production with listeners' limbic brain signals, which served as a proxy for affect recognition. We show that affective speech communication in a live adaptive setting is acoustically more distinctive, adaptive, and individualized, and capitalizes more efficiently on neural affect decoding mechanisms in limbic and associated networks, than non-adaptive affective speech communication. Only live affective speech produced in adaptation to listeners' limbic signals was closely linked to their emotion recognition, as quantified by correlations between speakers' acoustics and listeners' emotion ratings. Furthermore, while live and adaptive aggressive speaking directly modulated limbic activity in listeners, joyful speaking modulated limbic activity in connection with the ventral striatum, which is involved, among other functions, in the processing of pleasure. Thus, evolved neural mechanisms for affect decoding seem largely optimized for interactive and individually adaptive communicative contexts.