Tianjin Key Laboratory of Cognitive Computing and Application, College of Intelligence and Computing, Tianjin University, Tianjin 300050, China; Graduate School of Advanced Science and Technology, Japan Advanced Institute of Science and Technology, Ishikawa 923-1292, Japan.
Tianjin Key Laboratory of Cognitive Computing and Application, College of Intelligence and Computing, Tianjin University, Tianjin 300050, China; Graduate School of Advanced Science and Technology, Japan Advanced Institute of Science and Technology, Ishikawa 923-1292, Japan; Pengcheng Laboratory, Shenzhen 518055, China.
Neural Netw. 2021 Aug;140:261-273. doi: 10.1016/j.neunet.2021.03.027. Epub 2021 Mar 25.
Continuous dimensional emotion recognition from speech helps robots or virtual agents capture the temporal dynamics of a speaker's emotional state in natural human-robot interactions. Temporal modulation cues obtained directly from the time-domain model of auditory perception reflect these dynamics better than acoustic features that are usually processed in the frequency domain. Extracting features that capture the temporal dynamics of emotion from such modulation cues is challenging, however, because of the complexity and diversity of the auditory perception model. A recent neuroscientific study suggests that the human brain derives multi-resolution representations through temporal modulation analysis. This study investigates multi-resolution representations of an auditory perception model and proposes a novel feature, the multi-resolution modulation-filtered cochleagram (MMCG), for predicting the valence and arousal values of emotional primitives. The MMCG is constructed by combining four modulation-filtered cochleagrams at different resolutions to capture a range of temporal and contextual modulation information. In addition, to model the multi-temporal dependencies of the MMCG, we designed a parallel long short-term memory (LSTM) architecture. Extensive experiments on the RECOLA and SEWA datasets demonstrate that the MMCG achieves the best recognition performance among all evaluated features on both datasets. The results also show that the parallel LSTM builds multi-temporal dependencies from MMCG features and outperforms a plain LSTM on both valence and arousal prediction.
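As a rough illustration of the feature construction described above, the Python sketch below low-pass filters each cochlear channel of a precomputed cochleagram at several modulation cutoffs and stacks the results into a multi-resolution tensor. The Butterworth filter choice, the filter order, and the cutoff frequencies (2, 4, 8, 16 Hz) are assumptions made for illustration; the paper's actual modulation filterbank and resolution settings may differ.

    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    def modulation_filtered_cochleagram(cochleagram, frame_rate, cutoff_hz):
        # Low-pass filter every cochlear channel's envelope at one assumed
        # modulation cutoff; cochleagram has shape (n_channels, n_frames).
        sos = butter(2, cutoff_hz, btype="low", fs=frame_rate, output="sos")
        return sosfiltfilt(sos, cochleagram, axis=1)

    def mmcg(cochleagram, frame_rate, cutoffs=(2.0, 4.0, 8.0, 16.0)):
        # Stack one modulation-filtered cochleagram per assumed cutoff into
        # a (n_resolutions, n_channels, n_frames) multi-resolution feature.
        return np.stack([modulation_filtered_cochleagram(cochleagram, frame_rate, c)
                         for c in cutoffs])

Each successive cutoff retains faster modulation content, so the stacked levels trade temporal detail against longer-range context, which is the multi-resolution idea the abstract describes.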
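The parallel LSTM can likewise be sketched under assumed dimensions: one LSTM branch per MMCG resolution, with per-frame branch outputs concatenated and mapped to a single valence or arousal value. The branch count, hidden sizes, and fusion scheme below are illustrative defaults, not the paper's exact configuration.

    import torch
    import torch.nn as nn

    class ParallelLSTM(nn.Module):
        # One LSTM branch per MMCG resolution; per-frame branch outputs are
        # concatenated and projected to a single emotion value per frame.
        # All layer sizes here are assumptions for illustration.
        def __init__(self, n_channels=64, hidden_size=64, n_branches=4):
            super().__init__()
            self.branches = nn.ModuleList(
                nn.LSTM(n_channels, hidden_size, batch_first=True)
                for _ in range(n_branches))
            self.head = nn.Linear(hidden_size * n_branches, 1)

        def forward(self, x):
            # x: (batch, n_branches, n_frames, n_channels), e.g. the MMCG
            # tensor from the previous sketch with a batch axis added and
            # channels moved last.
            outs = [lstm(x[:, i])[0] for i, lstm in enumerate(self.branches)]
            return self.head(torch.cat(outs, dim=-1)).squeeze(-1)

    # Usage: per-frame predictions for a batch of 2 utterances, 100 frames.
    preds = ParallelLSTM()(torch.randn(2, 4, 100, 64))  # shape (2, 100)

Giving each resolution its own recurrent branch lets slow- and fast-modulation levels keep separate temporal memories before fusion, which is one plausible reading of how the parallel design models multi-temporal dependencies better than a single plain LSTM.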