The Jena Audiovisual Stimuli of Morphed Emotional Pseudospeech (JAVMEPS): A database for emotional auditory-only, visual-only, and congruent and incongruent audiovisual voice and dynamic face stimuli with varying voice intensities.

Affiliations

Department for General Psychology and Cognitive Neuroscience, Institute of Psychology, Friedrich Schiller University Jena, Am Steiger 3, 07743, Jena, Germany.

Voice Research Unit, Institute of Psychology, Friedrich Schiller University Jena, Leutragraben 1, 07743, Jena, Germany.

Publication Information

Behav Res Methods. 2024 Aug;56(5):5103-5115. doi: 10.3758/s13428-023-02249-4. Epub 2023 Oct 11.

Abstract

We describe JAVMEPS, an audiovisual (AV) database of emotional voice and dynamic face stimuli, with voices varying in emotional intensity. JAVMEPS includes 2256 stimulus files comprising (A) recordings of 12 speakers, speaking four bisyllabic pseudowords with six naturalistically induced basic emotions plus neutral, in auditory-only, visual-only, and congruent AV conditions. It furthermore comprises (B) caricatures (140%), original voices (100%), and anti-caricatures (60%) for happy, fearful, angry, sad, disgusted, and surprised voices for eight speakers and two pseudowords. Crucially, JAVMEPS contains (C) precisely time-synchronized congruent and incongruent AV (and corresponding auditory-only) stimuli with two emotions (anger, surprise), (C1) with original intensity (ten speakers, four pseudowords) and (C2) with graded AV congruence (implemented via five voice morph levels, from caricatures to anti-caricatures; eight speakers, two pseudowords). We collected classification data for Stimulus Set A from 22 normal-hearing listeners and four cochlear implant (CI) users, for two pseudowords, in auditory-only, visual-only, and AV conditions. Normal-hearing listeners showed good classification performance (M = .59 to .92), with classification rates in the auditory-only condition ≥ .38 correct (surprise: .67, anger: .51). Despite compromised vocal emotion perception, CI users performed above the chance level of .14 for auditory-only stimuli, with best rates for surprise (.31) and anger (.30). We anticipate that JAVMEPS will become a useful open resource for research into auditory emotion perception, especially when adaptive testing or calibration of task difficulty is desirable. With its time-synchronized congruent and incongruent stimuli, JAVMEPS can also help fill a gap in research on the dynamic audiovisual integration of emotion perception, via behavioral or neurophysiological recordings.
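The factorial structure of Stimulus Set A and the seven-alternative chance level follow directly from the abstract and can be made concrete in a few lines of code. Below is a minimal Python sketch of that taxonomy; the Stimulus class, speaker indices, and pseudoword placeholders are illustrative assumptions and do not reflect the database's actual file layout or naming scheme.

```python
from dataclasses import dataclass
from itertools import product

# Emotion categories and presentation conditions as described in the abstract.
EMOTIONS = ["happy", "fearful", "angry", "sad", "disgusted", "surprised", "neutral"]
MODALITIES = ["auditory-only", "visual-only", "audiovisual"]
# Voice morph levels from Set B: anti-caricature, original, caricature (% intensity).
MORPH_LEVELS = [60, 100, 140]

@dataclass(frozen=True)
class Stimulus:
    speaker: int            # hypothetical index, e.g., 1..12 for Set A
    pseudoword: str         # placeholder; the four pseudowords are not named here
    emotion: str
    modality: str
    morph_level: int = 100  # Set A uses original intensity only

# Set A design: 12 speakers x 4 pseudowords x 7 emotions x 3 conditions.
set_a = [
    Stimulus(s, w, e, m)
    for s, w, e, m in product(
        range(1, 13), ["word1", "word2", "word3", "word4"], EMOTIONS, MODALITIES
    )
]
# Implied count if each design cell maps to one file: 1008 of the 2256 total,
# with the remainder coming from Sets B, C1, and C2.
assert len(set_a) == 12 * 4 * 7 * 3

# Chance level for seven-alternative forced-choice emotion classification,
# matching the .14 chance level cited for CI users.
chance = 1 / len(EMOTIONS)
print(f"chance level: {chance:.2f}")
```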

Fig. 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/99c3/11289065/65da1a052590/13428_2023_2249_Fig1_HTML.jpg
