
The recognition of facial expressions of emotion in deaf and hearing individuals.

Author information

Rodger Helen, Lao Junpeng, Stoll Chloé, Richoz Anne-Raphaëlle, Pascalis Olivier, Dye Matthew, Caldara Roberto

Affiliations

Department of Psychology, University of Fribourg, Fribourg, Switzerland.

Laboratoire de Psychologie et de Neurocognition (CNRS-UMR5105), Université Grenoble-Alpes, France.

Publication information

Heliyon. 2021 May 15;7(5):e07018. doi: 10.1016/j.heliyon.2021.e07018. eCollection 2021 May.

Abstract

During real-life interactions, facial expressions of emotion are perceived dynamically with multimodal sensory information. In the absence of auditory sensory channel inputs, it is unclear how facial expressions are recognised and internally represented by deaf individuals. Few studies have investigated facial expression recognition in deaf signers using dynamic stimuli, and none have included all six basic facial expressions of emotion (anger, disgust, fear, happiness, sadness, and surprise) with stimuli fully controlled for their low-level visual properties, leaving unresolved the question of whether a dynamic advantage exists for deaf observers. We hypothesised, in line with the , that the absence of auditory sensory information might have forced the visual system to better process visual (unimodal) signals, and predicted that this greater sensitivity to visual stimuli would result in better recognition performance for dynamic compared to static stimuli, and for deaf signers compared to hearing non-signers in the dynamic condition. To this end, we performed a series of psychophysical studies with deaf signers with early-onset severe-to-profound deafness (dB loss >70) and hearing controls to estimate their ability to recognise the six basic facial expressions of emotion. Using static, dynamic, and shuffled (randomly permuted video frames of an expression) stimuli, we found that deaf observers showed categorisation profiles and confusions across expressions similar to those of hearing controls (e.g., confusing surprise with fear). In contrast to our hypothesis, we found no recognition advantage for dynamic compared to static facial expressions for deaf observers. This observation shows that the decoding of dynamic facial expression emotional signals is not superior even in the deaf visual system, suggesting the existence of optimal signals in static facial expressions of emotion at the apex.
Deaf individuals match hearing individuals in the recognition of facial expressions of emotion.


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e442/8141778/b86822862168/gr1.jpg
