
Sensing emotion in voices: Negativity bias and gender differences in a validation study of the Oxford Vocal ('OxVoc') sounds database.

Author Information

Young Katherine S, Parsons Christine E, LeBeau Richard T, Tabak Benjamin A, Sewart Amy R, Stein Alan, Kringelbach Morten L, Craske Michelle G

Affiliations

Department of Psychology, University of California, Los Angeles.

Department of Psychiatry, University of Oxford.

Publication Information

Psychol Assess. 2017 Aug;29(8):967-977. doi: 10.1037/pas0000382. Epub 2016 Sep 22.

Abstract

Emotional expressions are an essential element of human interactions. Recent work has increasingly recognized that emotional vocalizations can color and shape interactions between individuals. Here we present data on the psychometric properties of a recently developed database of authentic nonlinguistic emotional vocalizations from human adults and infants (the Oxford Vocal 'OxVoc' Sounds Database; Parsons, Young, Craske, Stein, & Kringelbach, 2014). In a large sample (n = 562), we demonstrate that adults can reliably categorize these sounds (as 'positive,' 'negative,' or 'sounds with no emotion'), and rate valence in these sounds consistently over time. In an extended sample (n = 945, including the initial n = 562), we also investigated a number of individual difference factors in relation to valence ratings of these vocalizations. Results demonstrated small but significant associations of (a) symptoms of depression and anxiety with more negative ratings of adult neutral vocalizations (R² = .011 and R² = .008, respectively) and (b) gender differences in perceived valence such that female listeners rated adult neutral vocalizations more positively and infant cry vocalizations more negatively than male listeners (R² = .021, R² = .010, respectively). Of note, we did not find evidence of negativity bias among other affective vocalizations or gender differences in perceived valence of adult laughter, adult cries, infant laughter, or infant neutral vocalizations. Together, these findings largely converge with factors previously shown to impact processing of emotional facial expressions, suggesting a modality-independent impact of depression, anxiety, and listener gender, particularly among vocalizations with more ambiguous valence. (PsycINFO Database Record)
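The R² values above are effect sizes from regressions of valence ratings on each individual difference factor; an R² of .011, for instance, corresponds to roughly 1% of variance in ratings explained. As a rough illustration of that scale only, the sketch below simulates data and fits an ordinary least squares regression in Python with statsmodels. The variable names, simulated effect size, and sample are assumptions for illustration, not the study's analysis code or data.

```python
# Hypothetical sketch (not the study's analysis code): shows how a small effect
# size like the R^2 = .011 reported above can arise from a simple regression of
# valence ratings on an individual difference predictor.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 945  # size of the extended sample reported in the abstract

# Simulated, standardized depression-symptom scores and valence ratings of
# adult neutral vocalizations (arbitrary scale, weak negative association).
depression = rng.normal(size=n)
valence = -0.1 * depression + rng.normal(size=n)

X = sm.add_constant(depression)       # intercept + predictor
model = sm.OLS(valence, X).fit()
print(f"R^2 = {model.rsquared:.3f}")  # approx .01, i.e. ~1% variance explained
```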

Figure 1a: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9a3b/5546386/57724bf22cfa/pas_29_8_967_fig1a.jpg
