
The role of motion and intensity in deaf children's recognition of real human facial expressions of emotion.

Author information

Jones Anna C, Gutierrez Roberto, Ludlow Amanda K

Affiliations

a Deafness, Cognition and Language Research Centre, University College London, London, UK.

b Department of Psychology, University of Hertfordshire, Hatfield, UK.

Publication information

Cogn Emot. 2018 Feb;32(1):102-115. doi: 10.1080/02699931.2017.1289894. Epub 2017 Feb 14.

Abstract

There is substantial evidence to suggest that deafness is associated with delays in emotion understanding, which has been attributed to delays in language acquisition and opportunities to converse. However, studies addressing the ability to recognise facial expressions of emotion have produced equivocal findings. The two experiments presented here attempt to clarify emotion recognition in deaf children by considering two aspects: the role of motion and the role of intensity in deaf children's emotion recognition. In Study 1, 26 deaf children were compared to 26 age-matched hearing controls on a computerised facial emotion recognition task involving static and dynamic expressions of 6 emotions. Eighteen of the deaf and 18 age-matched hearing controls additionally took part in Study 2, involving the presentation of the same 6 emotions at varying intensities. Study 1 showed that deaf children's emotion recognition was better in the dynamic rather than static condition, whereas the hearing children showed no difference in performance between the two conditions. In Study 2, the deaf children performed no differently from the hearing controls, showing improved recognition rates with increasing rates of intensity. With the exception of disgust, no differences in individual emotions were found. These findings highlight the importance of using ecologically valid stimuli to assess emotion recognition.
