Impact of intended emotion intensity on cue utilization and decoding accuracy in vocal expression of emotion.

Authors

Juslin P N, Laukka P

Affiliation

Department of Psychology, Uppsala University, Sweden.

Publication

Emotion. 2001 Dec;1(4):381-412. doi: 10.1037/1528-3542.1.4.381.

DOI: 10.1037/1528-3542.1.4.381
PMID: 12901399
Abstract

Actors vocally portrayed happiness, sadness, anger, fear, and disgust with weak and strong emotion intensity while reading brief verbal phrases aloud. The portrayals were recorded and analyzed according to 20 acoustic cues. Listeners decoded each portrayal by using forced-choice or quantitative ratings. The results showed that (a) portrayals with strong emotion intensity yielded higher decoding accuracy than portrayals with weak intensity, (b) listeners were able to decode the intensity of portrayals, (c) portrayals of the same emotion with different intensity yielded different patterns of acoustic cues, and (d) certain acoustic cues (e.g., fundamental frequency, high-frequency energy) were highly predictive of listeners' ratings of emotion intensity. It is argued that lack of control for emotion intensity may account for some of the inconsistencies in cue utilization reported in the literature.


Similar Articles

1. Impact of intended emotion intensity on cue utilization and decoding accuracy in vocal expression of emotion.
   Emotion. 2001 Dec;1(4):381-412. doi: 10.1037/1528-3542.1.4.381.
2. "Worth a thousand words": absolute and relative decoding of nonlinguistic affect vocalizations.
   Emotion. 2009 Jun;9(3):293-305. doi: 10.1037/a0015178.
3. Acoustic profiles in vocal emotion expression.
   J Pers Soc Psychol. 1996 Mar;70(3):614-36. doi: 10.1037//0022-3514.70.3.614.
4. Intelligibility of emotional speech in younger and older adults.
   Ear Hear. 2014 Nov-Dec;35(6):695-707. doi: 10.1097/AUD.0000000000000082.
5. The importance of vocal affect to bimodal processing of emotion: implications for individuals with traumatic brain injury.
   J Commun Disord. 2009 Jan-Feb;42(1):1-17. doi: 10.1016/j.jcomdis.2008.06.001. Epub 2008 Jul 9.
6. Beyond arousal: valence and potency/control cues in the vocal expression of emotion.
   J Acoust Soc Am. 2010 Sep;128(3):1322-36. doi: 10.1121/1.3466853.
7. The contribution of phonation type to the perception of vocal emotions in German: an articulatory synthesis study.
   J Acoust Soc Am. 2015 Mar;137(3):1503-12. doi: 10.1121/1.4906836.
8. When voices get emotional: a corpus of nonverbal vocalizations for research on emotion processing.
   Behav Res Methods. 2013 Dec;45(4):1234-45. doi: 10.3758/s13428-013-0324-3.
9. Getting the cue: sensory contributions to auditory emotion recognition impairments in schizophrenia.
   Schizophr Bull. 2010 May;36(3):545-56. doi: 10.1093/schbul/sbn115. Epub 2008 Sep 12.
10. Multimodal and Spectral Degradation Effects on Speech and Emotion Recognition in Adult Listeners.
    Trends Hear. 2018 Jan-Dec;22:2331216518804966. doi: 10.1177/2331216518804966.

Cited By

1. An individual-specific understanding of how synchrony becomes curative: study protocol.
   BMC Psychiatry. 2025 Jun 6;25(1):587. doi: 10.1186/s12888-025-06539-3.
2. The Mandarin Chinese auditory emotions stimulus database: A validated corpus of monosyllabic Chinese characters.
   Behav Res Methods. 2025 Feb 3;57(3):89. doi: 10.3758/s13428-025-02607-4.
3. Instrumental music training relates to intensity assessment but not emotional prosody recognition in Mandarin.
   PLoS One. 2024 Aug 30;19(8):e0309432. doi: 10.1371/journal.pone.0309432. eCollection 2024.
4. The role of the age and gender, and the complexity of the syntactic unit in the perception of affective emotions in voice.
   Codas. 2024 Jul 19;36(5):e20240009. doi: 10.1590/2317-1782/20242024009en. eCollection 2024.
5. Predicting pragmatic functions of Chinese echo questions using prosody: evidence from acoustic analysis and data modeling.
   Front Psychol. 2024 Feb 27;15:1322482. doi: 10.3389/fpsyg.2024.1322482. eCollection 2024.
6. The Sound of Emotional Prosody: Nearly 3 Decades of Research and Future Directions.
   Perspect Psychol Sci. 2024 Jan 17:17456916231217722. doi: 10.1177/17456916231217722.
7. The Jena Audiovisual Stimuli of Morphed Emotional Pseudospeech (JAVMEPS): A database for emotional auditory-only, visual-only, and congruent and incongruent audiovisual voice and dynamic face stimuli with varying voice intensities.
   Behav Res Methods. 2024 Aug;56(5):5103-5115. doi: 10.3758/s13428-023-02249-4. Epub 2023 Oct 11.
8. AffectMachine-Classical: a novel system for generating affective classical music.
   Front Psychol. 2023 Jun 6;14:1158172. doi: 10.3389/fpsyg.2023.1158172. eCollection 2023.
9. Body exposure and vocal analysis: validation of fundamental frequency as a correlate of emotional arousal and valence.
   Front Psychiatry. 2023 May 24;14:1087548. doi: 10.3389/fpsyt.2023.1087548. eCollection 2023.
10. Cortical haemodynamic responses predict individual ability to recognise vocal emotions with uninformative pitch cues but do not distinguish different emotions.
    Hum Brain Mapp. 2023 Jun 15;44(9):3684-3705. doi: 10.1002/hbm.26305. Epub 2023 May 10.