School of Music, Baotou Teachers' College, Inner Mongolia University of Science and Technology, Baotou, Inner Mongolia 014030, China.
Comput Intell Neurosci. 2022 Jun 21;2022:3920663. doi: 10.1155/2022/3920663. eCollection 2022.
To improve the accuracy of music emotion recognition and classification, this study combines an explicit sparse attention network with deep learning and proposes an effective emotion recognition and classification method for complex music data sets. First, the method preprocesses the sample data set using fine-grained segmentation and related techniques, providing a high-quality input sample set for the classification model. An explicit sparse attention network is then introduced into the deep learning network to reduce the influence of irrelevant information on the recognition results and to strengthen emotion classification on the music data set. Simulation experiments are conducted on a real network data set. The results show that the proposed method achieves a recognition accuracy of 0.71 for happy emotions and 0.688 for sad emotions, demonstrating good music emotion recognition and classification ability.
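The abstract does not specify how the explicit sparse attention is computed, but the term usually refers to keeping only the top-k attention scores per query and masking the rest before the softmax, so that irrelevant positions receive exactly zero weight. A minimal NumPy sketch of that mechanism (the function name, shapes, and the choice of k are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def explicit_sparse_attention(Q, K, V, k=2):
    """Scaled dot-product attention keeping only the top-k scores per query.

    Scores below each row's k-th largest value are masked to -inf before
    the softmax, so low-relevance positions get zero attention weight.
    """
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                # (n_queries, n_keys)
    # Per-row threshold: the k-th largest score.
    kth = np.sort(scores, axis=-1)[:, -k][:, None]
    masked = np.where(scores >= kth, scores, -np.inf)
    weights = softmax(masked, axis=-1)           # rows sum to 1, <= k nonzero
    return weights @ V, weights

# Usage: 3 query frames attending over 5 key/value frames of dimension 4.
rng = np.random.default_rng(0)
Q = rng.standard_normal((3, 4))
K = rng.standard_normal((5, 4))
V = rng.standard_normal((5, 4))
out, w = explicit_sparse_attention(Q, K, V, k=2)
```

In a dense attention layer every position receives some weight; the hard top-k cutoff here is what the abstract describes as suppressing the influence of irrelevant information on the recognition result.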