Suppr 超能文献



Cell-phone vs microphone recordings: Judging emotion in the voice.

Author Information

Green Joshua J, Eigsti Inge-Marie

Affiliation

Department of Psychological Sciences, University of Connecticut, 406 Babbidge Road, Unit 1020, Storrs, Connecticut 06269, USA.

Publication Information

J Acoust Soc Am. 2017 Sep;142(3):1261. doi: 10.1121/1.5000482.

DOI: 10.1121/1.5000482
PMID: 28964104
Abstract

Emotional states can be conveyed by vocal cues such as pitch and intensity. Despite the ubiquity of cellular telephones, there is limited information on how vocal emotional states are perceived during cell-phone transmissions. Emotional utterances (neutral, happy, angry) were elicited from two female talkers and simultaneously recorded via microphone and cell-phone. Ten-step continua (neutral to happy, neutral to angry) were generated using the STRAIGHT algorithm. Analyses compared reaction time (RT) and emotion judgment as a function of recording type (microphone vs cell-phone). Logistic regression revealed no judgment differences between recording types, though there were interactions with emotion type. Multi-level model analyses indicated that RT data were best fit by a quadratic model, with slower RT at the middle of each continuum, suggesting greater ambiguity, and slower RT for cell-phone stimuli across blocks. While preliminary, results suggest that critical acoustic cues to emotion are largely retained in cell-phone transmissions, though with effects of recording source on RT, and support the methodological utility of collecting speech samples by phone.
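The quadratic RT pattern described above (slowest responses at the ambiguous midpoint of each continuum) can be illustrated with a minimal sketch. The reaction times below are made up for illustration, and plain polynomial least squares stands in for the paper's multi-level models:

```python
# Hypothetical sketch (not the authors' code): fit a quadratic to reaction
# times across a 10-step emotion continuum. An inverted-U profile, with the
# slowest RTs mid-continuum, yields a negative quadratic coefficient.

def fit_poly(xs, ys, degree):
    """Ordinary least squares for a polynomial of the given degree,
    solved via the normal equations with Gaussian elimination."""
    n = degree + 1
    # Normal equations for the Vandermonde design matrix: A c = b.
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    # Forward elimination with partial pivoting.
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[pivot] = A[pivot], A[col]
        b[col], b[pivot] = b[pivot], b[col]
        for row in range(col + 1, n):
            f = A[row][col] / A[col][col]
            for k in range(col, n):
                A[row][k] -= f * A[col][k]
            b[row] -= f * b[col]
    # Back substitution.
    coefs = [0.0] * n
    for row in reversed(range(n)):
        resid = b[row] - sum(A[row][k] * coefs[k] for k in range(row + 1, n))
        coefs[row] = resid / A[row][row]
    return coefs  # [c0, c1, c2] for c0 + c1*x + c2*x**2

# Synthetic RTs (ms): slowest near the middle of a neutral-to-happy continuum.
steps = list(range(1, 11))
rts = [620, 650, 700, 760, 810, 805, 750, 690, 645, 615]

c0, c1, c2 = fit_poly(steps, rts, 2)
print(c2 < 0)  # True: negative curvature, i.e. the inverted-U RT profile
```

A fuller reanalysis would add random effects per talker and listener (e.g. a mixed model), which this closed-form fit deliberately omits for brevity.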


Similar Articles

1. Cell-phone vs microphone recordings: Judging emotion in the voice.
   J Acoust Soc Am. 2017 Sep;142(3):1261. doi: 10.1121/1.5000482.
2. Can you hear what I feel? A validated prosodic set of angry, happy, and neutral Italian pseudowords.
   Behav Res Methods. 2016 Mar;48(1):259-71. doi: 10.3758/s13428-015-0570-7.
3. [The mutual interference of facial and vocal information in Chinese and Japanese people's perception of emotions].
   Shinrigaku Kenkyu. 2017 Apr;88(1):1-10. doi: 10.4992/jjpsy.88.15032.
4. Are 6-month-old human infants able to transfer emotional information (happy or angry) from voices to faces? An eye-tracking study.
   PLoS One. 2018 Apr 11;13(4):e0194579. doi: 10.1371/journal.pone.0194579. eCollection 2018.
5. (Not) hearing happiness: Predicting fluctuations in happy mood from acoustic cues using machine learning.
   Emotion. 2020 Jun;20(4):642-658. doi: 10.1037/emo0000571. Epub 2019 Feb 11.
6. The expression and recognition of emotions in the voice across five nations: A lens model analysis based on acoustic features.
   J Pers Soc Psychol. 2016 Nov;111(5):686-705. doi: 10.1037/pspi0000066. Epub 2016 Aug 18.
7. Isolating N400 as neural marker of vocal anger processing in 6-11-year old children.
   Dev Cogn Neurosci. 2012 Apr;2(2):268-76. doi: 10.1016/j.dcn.2011.11.007. Epub 2011 Dec 7.
8. The voice of emotion: an fMRI study of neural responses to angry and happy vocal expressions.
   Soc Cogn Affect Neurosci. 2006 Dec;1(3):242-9. doi: 10.1093/scan/nsl027.
9. Intelligibility of emotional speech in younger and older adults.
   Ear Hear. 2014 Nov-Dec;35(6):695-707. doi: 10.1097/AUD.0000000000000082.
10. Effects of cue modality and emotional category on recognition of nonverbal emotional signals in schizophrenia.
    BMC Psychiatry. 2016 Jul 7;16:218. doi: 10.1186/s12888-016-0913-7.