Mutual Correlation Attentive Factors in Dyadic Fusion Networks for Speech Emotion Recognition.

Author Information

Yue Gu, Xinyu Lyu, Weijia Sun, Weitian Li, Shuhong Chen, Xinyu Li, Ivan Marsic

Affiliations

Rutgers University.

Amazon Inc., Rutgers University.

Publication Information

Proc ACM Int Conf Multimed. 2019 Oct;2019:157-166. doi: 10.1145/3343031.3351039.

Abstract

Emotion recognition in dyadic communication is challenging because: 1. Extracting informative modality-specific representations requires disparate feature extractor designs due to the heterogeneous input data formats. 2. Effectively and efficiently fusing unimodal features and learning associations between dyadic utterances is critical to model generalization in real-world scenarios. 3. Disagreeing annotations prevent previous approaches from precisely predicting emotions in context. To address the above issues, we propose an efficient dyadic fusion network that relies only on an attention mechanism to select representative vectors, fuse modality-specific features, and learn the sequence information. Our approach has three distinct characteristics: 1. Instead of using a recurrent neural network to extract temporal associations as in most previous research, we introduce multiple sub-view attention layers to compute the relevant dependencies among sequential utterances; this significantly improves model efficiency. 2. To improve fusion performance, we design a learnable mutual correlation factor inside each attention layer to compute associations across different modalities. 3. To overcome the label disagreement issue, we embed the labels from all annotators into a k-dimensional vector and transform the categorical problem into a regression problem; this method provides more accurate annotation information and fully uses the entire dataset. We evaluate the proposed model on two published multimodal emotion recognition datasets: IEMOCAP and MELD. Our model significantly outperforms previous state-of-the-art work by 3.8%-7.5% in accuracy while using a more efficient model.
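The abstract only sketches the fusion mechanism and the label-embedding idea. As a rough illustration (not the authors' released code), the minimal PyTorch sketch below shows one plausible reading: a cross-modal attention layer whose keys are rescaled by a learnable per-dimension mutual correlation factor, plus a helper that embeds all annotators' categorical votes into a k-dimensional target vector for regression. The module name, tensor shapes, and the exact form of the gating are assumptions made for illustration.

```python
# Minimal sketch, NOT the authors' implementation: module names, dimensions,
# and the exact gating form are assumptions made for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MutualCorrelationAttention(nn.Module):
    """Cross-modal attention with a learnable mutual correlation factor (assumed form)."""

    def __init__(self, dim):
        super().__init__()
        self.q_proj = nn.Linear(dim, dim)
        self.k_proj = nn.Linear(dim, dim)
        self.v_proj = nn.Linear(dim, dim)
        # One learnable weight per feature dimension, shared across utterances;
        # it rescales cross-modal keys before attention scores are computed.
        self.mutual_corr = nn.Parameter(torch.ones(dim))

    def forward(self, x_a, x_b):
        # x_a, x_b: (batch, num_utterances, dim) features from two modalities.
        q = self.q_proj(x_a)
        k = self.k_proj(x_b) * self.mutual_corr  # correlation-weighted keys
        v = self.v_proj(x_b)
        scores = q @ k.transpose(-2, -1) / (q.size(-1) ** 0.5)
        attn = F.softmax(scores, dim=-1)
        return attn @ v  # modality-a utterances attending over modality-b


def annotator_labels_to_target(labels, num_classes):
    """Embed all annotators' categorical votes into a k-dimensional regression target."""
    counts = torch.bincount(torch.tensor(labels), minlength=num_classes).float()
    return counts / counts.sum()


if __name__ == "__main__":
    layer = MutualCorrelationAttention(dim=128)
    audio = torch.randn(4, 10, 128)  # 4 dialogues, 10 utterances each
    text = torch.randn(4, 10, 128)
    print(layer(audio, text).shape)  # torch.Size([4, 10, 128])
    # Three annotators voted classes 0, 0, 2 out of 4 emotion classes.
    print(annotator_labels_to_target([0, 0, 2], num_classes=4))  # ~[0.67, 0.00, 0.33, 0.00]
```

In this reading, the correlation factor lets training up-weight feature dimensions where the two modalities agree, while the soft label target keeps annotator disagreement in the supervision signal instead of discarding minority votes.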


Similar Articles

Multimodal transformer augmented fusion for speech emotion recognition.
Front Neurorobot. 2023 May 22;17:1181598. doi: 10.3389/fnbot.2023.1181598. eCollection 2023.
