

The Reading Everyday Emotion Database (REED): a set of audio-visual recordings of emotions in music and language.

Authors

Ong Jia Hoong, Leung Florence Yik Nam, Liu Fang

Affiliations

School of Psychology and Clinical Language Sciences, University of Reading, Harry Pitt Building, Earley Gate, Reading, RG6 6AL UK.

Department of Psychology, School of Social Sciences, Nottingham Trent University, Nottingham, UK.

Publication

Lang Resour Eval. 2025;59(1):27-49. doi: 10.1007/s10579-023-09698-5. Epub 2023 Nov 20.

DOI: 10.1007/s10579-023-09698-5
PMID: 40109557
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11913894/
Abstract

Most audio-visual (AV) emotion databases consist of clips that do not reflect real-life emotion processing (e.g., professional actors in a bright, studio-like environment), contain only spoken clips, and none have sung clips that express complex emotions. Here, we introduce a new AV database, the Reading Everyday Emotion Database (REED), which directly addresses those gaps. We recorded the faces of everyday adults with a diverse range of acting experience expressing 13 emotions (neutral; the six basic emotions: angry, disgusted, fearful, happy, sad, surprised; and six complex emotions: embarrassed, hopeful, jealous, proud, sarcastic, stressed) in two auditory domains (spoken and sung), using everyday recording devices such as laptops and mobile phones. The recordings were validated by an independent group of raters. We found that intensity ratings of the recordings were positively associated with recognition accuracy, and that the basic emotions, as well as the Neutral and Sarcastic emotions, were recognised more accurately than the other complex emotions. Emotion recognition accuracy also differed by utterance. Exploratory analysis revealed that recordings of those with drama experience were better recognised than those without. Overall, this database will benefit those who need AV clips with natural variations in both emotion expressions and recording environment.

Supplementary Information

The online version contains supplementary material available at 10.1007/s10579-023-09698-5.


Figures
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8461/11913894/402291e407cc/10579_2023_9698_Fig1_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8461/11913894/ff4b817ae52d/10579_2023_9698_Fig2_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8461/11913894/12cd5173715b/10579_2023_9698_Fig3_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8461/11913894/943fd5d0a960/10579_2023_9698_Fig4_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8461/11913894/dd3537b6f908/10579_2023_9698_Fig5_HTML.jpg

Similar articles

1
The Reading Everyday Emotion Database (REED): a set of audio-visual recordings of emotions in music and language.
Lang Resour Eval. 2025;59(1):27-49. doi: 10.1007/s10579-023-09698-5. Epub 2023 Nov 20.
2
The Complex Emotion Expression Database: A validated stimulus set of trained actors.
PLoS One. 2020 Feb 3;15(2):e0228248. doi: 10.1371/journal.pone.0228248. eCollection 2020.
3
The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS): A dynamic, multimodal set of facial and vocal expressions in North American English.
PLoS One. 2018 May 16;13(5):e0196391. doi: 10.1371/journal.pone.0196391. eCollection 2018.
4
The Mandarin Chinese auditory emotions stimulus database: A validated corpus of monosyllabic Chinese characters.
Behav Res Methods. 2025 Feb 3;57(3):89. doi: 10.3758/s13428-025-02607-4.
5
The Jena Audiovisual Stimuli of Morphed Emotional Pseudospeech (JAVMEPS): A database for emotional auditory-only, visual-only, and congruent and incongruent audiovisual voice and dynamic face stimuli with varying voice intensities.
Behav Res Methods. 2024 Aug;56(5):5103-5115. doi: 10.3758/s13428-023-02249-4. Epub 2023 Oct 11.
6
Evidence for shared deficits in identifying emotions from faces and from voices in autism spectrum disorders and specific language impairment.
Int J Lang Commun Disord. 2015 Jul;50(4):452-66. doi: 10.1111/1460-6984.12146. Epub 2015 Jan 14.
7
CREMA-D: Crowd-sourced Emotional Multimodal Actors Dataset.
IEEE Trans Affect Comput. 2014 Oct-Dec;5(4):377-390. doi: 10.1109/TAFFC.2014.2336244.
8
BanglaSER: A speech emotion recognition dataset for the Bangla language.
Data Brief. 2022 Mar 22;42:108091. doi: 10.1016/j.dib.2022.108091. eCollection 2022 Jun.
9
A Cantonese Audio-Visual Emotional Speech (CAVES) dataset.
Behav Res Methods. 2024 Aug;56(5):5264-5278. doi: 10.3758/s13428-023-02270-7. Epub 2023 Nov 28.
10
Detection of Emotion of Speech for RAVDESS Audio Using Hybrid Convolution Neural Network.
J Healthc Eng. 2022 Feb 27;2022:8472947. doi: 10.1155/2022/8472947. eCollection 2022.
