
Unity Assumption in Audiovisual Emotion Perception

Author Information

Sou Ka Lon, Say Ashley, Xu Hong

Affiliations

Psychology, School of Social Sciences, Nanyang Technological University, Singapore, Singapore.

Humanities, Arts and Social Sciences, Singapore University of Technology and Design, Singapore, Singapore.

Publication Information

Front Neurosci. 2022 Mar 4;16:782318. doi: 10.3389/fnins.2022.782318. eCollection 2022.

Abstract

We experience various sensory stimuli every day. How are they integrated, and what mechanisms underlie this integration? The "unity assumption" proposes that a perceiver's belief in the unity of individual unisensory signals modulates the degree of multisensory integration. However, this has yet to be verified or quantified in the context of semantic emotion integration. In the present study, we investigated subjects' judgments of the intensities and degrees of similarity of faces and voices expressing two emotions (angry and happy). We found that more similar stimulus intensities were associated with a stronger likelihood of the face and voice being integrated. More interestingly, multisensory integration in emotion perception followed a Gaussian distribution as a function of the emotion-intensity difference between the face and voice, with an optimal cut-off at about a 2.50-point difference on a 7-point Likert scale. This provides a quantitative estimate of the multisensory integration function in audiovisual semantic emotion perception with regard to stimulus intensity. Moreover, to investigate variation in multisensory integration across the population, we examined the effects of participants' personality and autistic traits. Here, we found no correlation between autistic traits and unisensory processing in a nonclinical population. Our findings shed light on the current understanding of multisensory integration mechanisms.
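The Gaussian relationship described above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' model: the Gaussian width `sigma` is a hypothetical parameter (the abstract reports only the ~2.50-point optimal cut-off on a 7-point Likert scale), and `integration_likelihood` simply assumes a unit-amplitude Gaussian centered at zero intensity difference.

```python
import math

# Optimal cut-off reported in the abstract (7-point Likert scale).
CUTOFF = 2.50

def integration_likelihood(delta, sigma=1.5):
    """Gaussian-shaped likelihood that a face and voice are perceptually
    integrated, as a function of their emotion-intensity difference `delta`
    (in Likert points). `sigma` is a hypothetical width parameter chosen
    for illustration only."""
    return math.exp(-delta**2 / (2 * sigma**2))

def predicted_integrated(delta):
    """Classify a face-voice pair as 'integrated' when the absolute
    intensity difference falls within the reported cut-off."""
    return abs(delta) <= CUTOFF
```

Under this sketch, a pair with identical intensities (`delta = 0`) has the maximal integration likelihood, and pairs differing by more than 2.50 points are classified as not integrated.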


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0f0a/8931414/ffb1eb768ebe/fnins-16-782318-g001.jpg
