Sun Congshan, Li Haifeng, Ma Lin
Faculty of Computing, Harbin Institute of Technology, Harbin, China.
Front Psychol. 2023 Jan 9;13:1075624. doi: 10.3389/fpsyg.2022.1075624. eCollection 2022.
Speech emotion recognition (SER) is key to human-computer emotion interaction. However, the nonlinear characteristics of speech emotion are variable, complex, and subtly changing, so accurately recognizing emotions from speech remains a challenge. Empirical mode decomposition (EMD), an effective decomposition method for nonlinear, non-stationary signals, has been successfully used to analyze emotional speech signals. However, the mode mixing problem of EMD degrades the performance of EMD-based SER methods. Various improved EMD methods have been proposed to alleviate mode mixing, but they still suffer from mode mixing, residual noise, and long computation times, and their main parameters cannot be set adaptively. To overcome these problems, we propose a novel SER framework, named IMEMD-CRNN, that combines an improved masking signal-based EMD (IMEMD) with a convolutional recurrent neural network (CRNN). First, IMEMD is proposed to decompose the speech signal. IMEMD is a novel disturbance-assisted EMD method that determines the parameters of the masking signals according to the nature of the signal. Second, we extract 43-dimensional time-frequency features that characterize emotion from the intrinsic mode functions (IMFs) obtained by IMEMD. Finally, we input these features into a CRNN to recognize emotions. In the CRNN, 2D convolutional neural network (CNN) layers capture nonlinear local temporal and frequency information of the emotional speech, and bidirectional gated recurrent unit (BiGRU) layers further learn temporal context information. Experiments on the publicly available TESS and Emo-DB datasets demonstrate the effectiveness of the proposed IMEMD-CRNN framework. The TESS dataset consists of 2,800 utterances covering seven emotions, recorded by two native English speakers.
The Emo-DB dataset consists of 535 utterances covering seven emotions, recorded by ten native German speakers. The proposed IMEMD-CRNN framework achieves state-of-the-art overall accuracies of 100% on the TESS dataset and 93.54% on the Emo-DB dataset over seven emotions. IMEMD alleviates mode mixing and yields IMFs with less residual noise and clearer physical meaning, at significantly improved computational efficiency. Our IMEMD-CRNN framework significantly improves the performance of emotion recognition.
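The masking-signal idea behind disturbance-assisted EMD can be illustrated with a minimal numpy/scipy sketch. This is not the paper's IMEMD: the masking frequency, amplitude, and number of sifting passes are fixed by hand here, whereas IMEMD sets them adaptively from the signal. A masking tone m is added to and subtracted from the signal, the first IMF is sifted out of each version, and averaging the two results cancels m while keeping the sifting locked onto the high-frequency mode, which is how masking mitigates mode mixing:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def sift_once(x):
    """One EMD sifting pass: subtract the mean of the upper and
    lower cubic-spline envelopes from the signal."""
    n = np.arange(len(x))
    # interior local maxima and minima
    maxi = np.where((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:]))[0] + 1
    mini = np.where((x[1:-1] < x[:-2]) & (x[1:-1] < x[2:]))[0] + 1
    if len(maxi) < 2 or len(mini) < 2:
        return x  # too few extrema to form envelopes
    # include the endpoints so the splines span the whole signal
    maxi = np.r_[0, maxi, len(x) - 1]
    mini = np.r_[0, mini, len(x) - 1]
    upper = CubicSpline(maxi, x[maxi])(n)
    lower = CubicSpline(mini, x[mini])(n)
    return x - (upper + lower) / 2.0

def masked_imf(x, f_mask, fs, amp=1.0, n_sift=8):
    """Masking-signal EMD (toy version): sift the first IMF out of
    x + m and of x - m, then average so the mask m cancels."""
    t = np.arange(len(x)) / fs
    m = amp * np.sin(2 * np.pi * f_mask * t)
    h_plus, h_minus = x + m, x - m
    for _ in range(n_sift):
        h_plus = sift_once(h_plus)
        h_minus = sift_once(h_minus)
    return (h_plus + h_minus) / 2.0

# two-tone toy signal: a 120 Hz mode riding on a 10 Hz mode
fs = 1000
t = np.arange(2000) / fs
sig = np.sin(2 * np.pi * 120 * t) + np.sin(2 * np.pi * 10 * t)
imf1 = masked_imf(sig, f_mask=120.0, fs=fs)  # first IMF, same length as sig
```

The remaining modes would be obtained by subtracting the extracted IMF and repeating; IMEMD additionally chooses `f_mask` and `amp` adaptively rather than requiring them as inputs.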
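The CRNN data flow described above (2D CNN layers over the time-frequency feature map, BiGRU layers over the resulting frame sequence, then a softmax over seven emotions) can be sketched as a toy numpy forward pass. All weights are random and the layer sizes (one 3x3 kernel, hidden size 16) are illustrative assumptions, not the paper's configuration; only the shapes and the order of operations follow the description:

```python
import numpy as np

def conv2d(x, w):
    """Valid 2-D cross-correlation of one feature map with one kernel,
    followed by ReLU; captures local time-frequency patterns."""
    kh, kw = w.shape
    H, W = x.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w)
    return np.maximum(out, 0.0)

def gru_step(h, x, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU cell update (update gate z, reset gate r, candidate h~)."""
    z = 1.0 / (1.0 + np.exp(-(Wz @ x + Uz @ h)))
    r = 1.0 / (1.0 + np.exp(-(Wr @ x + Ur @ h)))
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))
    return (1.0 - z) * h + z * h_tilde

def crnn_forward(feat, n_hidden=16, n_classes=7, seed=0):
    """Toy CRNN forward pass: conv layer -> BiGRU -> softmax.
    feat is a (features x frames) time-frequency map."""
    rng = np.random.default_rng(seed)
    rand = lambda shape: rng.standard_normal(shape) * 0.1
    fmap = conv2d(feat, rand((3, 3)))      # local time-frequency features
    d = fmap.shape[0]

    def run(frames):
        # one GRU direction with its own randomly initialized weights
        Wz, Uz = rand((n_hidden, d)), rand((n_hidden, n_hidden))
        Wr, Ur = rand((n_hidden, d)), rand((n_hidden, n_hidden))
        Wh, Uh = rand((n_hidden, d)), rand((n_hidden, n_hidden))
        h = np.zeros(n_hidden)
        for x in frames:
            h = gru_step(h, x, Wz, Uz, Wr, Ur, Wh, Uh)
        return h

    frames = list(fmap.T)                  # one feature vector per frame
    h_cat = np.concatenate([run(frames), run(frames[::-1])])  # BiGRU
    logits = rand((n_classes, 2 * n_hidden)) @ h_cat
    e = np.exp(logits - logits.max())
    return e / e.sum()                     # emotion probabilities

# a random 43-dim feature map over 50 frames stands in for IMF features
feat = np.random.default_rng(1).standard_normal((43, 50))
probs = crnn_forward(feat)
```

In the paper's framework the 43 feature dimensions come from the IMFs produced by IMEMD; here a random map merely exercises the shapes.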