(Not) hearing happiness: Predicting fluctuations in happy mood from acoustic cues using machine learning.

Affiliations

Department of Psychology.

Department of People Management and Organisation, ESADE Business School.

Publication information

Emotion. 2020 Jun;20(4):642-658. doi: 10.1037/emo0000571. Epub 2019 Feb 11.

Abstract

Recent popular claims surrounding virtual assistants suggest that computers will soon be able to hear our emotions. Supporting this possibility, promising work has harnessed big data and emergent technologies to automatically predict stable levels of one specific emotion, happiness, at the community (e.g., counties) and trait (i.e., people) levels. Furthermore, research in affective science has shown that nonverbal vocal bursts (e.g., sighs, gasps) and specific acoustic features (e.g., pitch, energy) can differentiate between distinct emotions (e.g., anger, happiness) and that machine-learning algorithms can detect these differences. Yet, to our knowledge, no work has tested whether computers can automatically detect normal, everyday, within-person fluctuations in one emotional state from acoustic analysis. To address this issue in the context of happy mood, across 3 studies (total N = 20,197), we asked participants to repeatedly report their state happy mood and to provide audio recordings (including both direct speech and ambient sounds) from which we extracted acoustic features. Using three different machine learning algorithms (neural networks, random forests, and support vector machines) and two sets of acoustic features, we found that acoustic features yielded minimal predictive insight into happy mood above chance. Neither multilevel modeling analyses nor human coders provided additional insight into state happy mood. These findings suggest that it is not yet possible to automatically assess fluctuations in one emotional state (i.e., happy mood) from acoustic analysis, pointing to a critical future direction for affective scientists interested in acoustic analysis of emotion and automated emotion detection. (PsycInfo Database Record (c) 2020 APA, all rights reserved).
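The abstract refers to extracting acoustic features such as pitch and energy from audio recordings. As a minimal illustration only (not the authors' actual pipeline, which used dedicated acoustic feature sets and three machine-learning algorithms), the sketch below computes two classic low-level acoustic features, frame-wise RMS energy and zero-crossing rate, in plain Python; the function name, frame length, and synthetic test tone are assumptions for the example.

```python
import math

def frame_features(samples, frame_len=400):
    """Split a waveform into non-overlapping frames and compute two
    simple acoustic features per frame:
      - RMS energy (loudness proxy)
      - zero-crossing rate (rough correlate of pitch/noisiness)
    """
    features = []
    for start in range(0, len(samples) - frame_len + 1, frame_len):
        frame = samples[start:start + frame_len]
        # Root-mean-square amplitude of the frame.
        rms = math.sqrt(sum(s * s for s in frame) / frame_len)
        # Fraction of adjacent sample pairs whose signs differ.
        zcr = sum(
            1 for a, b in zip(frame, frame[1:]) if (a < 0) != (b < 0)
        ) / (frame_len - 1)
        features.append((rms, zcr))
    return features

# A 440 Hz sine tone sampled at 8 kHz: constant energy, steady crossings.
tone = [math.sin(2 * math.pi * 440 * t / 8000) for t in range(1600)]
feats = frame_features(tone, frame_len=400)
```

In a full pipeline, per-frame vectors like these (or richer sets with pitch, spectral, and voice-quality descriptors) would be aggregated per recording and fed to a classifier or regressor; the study's null result concerns exactly this mapping from such features to self-reported state happy mood.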
