School of Marxism, School of Music and Dance, Henan Normal University, Xinxiang, Henan 453007, China.
Faculty of Education, Henan Normal University, Xinxiang, Henan 453007, China.
Comput Intell Neurosci. 2022 Aug 17;2022:5764148. doi: 10.1155/2022/5764148. eCollection 2022.
This work aims to classify and integrate music genres and emotions to improve the quality of music education. It proposes a web image education resource retrieval method based on a semantic network and interactive image filtering for the music education environment. Feature sequences are extracted from the music source data and fed into a model combining Long Short-Term Memory (LSTM) and an Attention Mechanism (AM), which predicts the emotion category of each piece of music. Emotion recognition accuracy increases further when the LSTM-AM model is improved into the BiGRU-AM model. The more distinct two emotion categories are, the easier their emotion-bearing feature sequences are to separate, and the higher the recognition accuracy. The classification accuracy for the excited, relieved, relaxed, and sad emotions reaches 76.5%, 71.3%, 80.8%, and 73.4%, respectively. The proposed interactive filtering method based on a Convolutional Recurrent Neural Network can effectively classify and integrate music resources to improve the quality of music education.
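As a rough illustration of the BiGRU-AM architecture the abstract describes, the sketch below builds a bidirectional GRU encoder with additive attention pooling over time, classifying a music feature sequence into the four emotion categories above. All layer sizes, the 40-dimensional frame features, and the class ordering are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: BiGRU + attention emotion classifier (PyTorch).
# Layer sizes and the 40-dim per-frame features are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class BiGRUAttention(nn.Module):
    def __init__(self, feat_dim=40, hidden=128, num_classes=4):
        super().__init__()
        # Bidirectional GRU encodes the per-frame audio feature sequence.
        self.gru = nn.GRU(feat_dim, hidden, batch_first=True,
                          bidirectional=True)
        # A single learned query scores each time step (additive attention).
        self.attn = nn.Linear(2 * hidden, 1)
        self.classifier = nn.Linear(2 * hidden, num_classes)

    def forward(self, x):
        # x: (batch, time, feat_dim), e.g. spectral frames of a music clip.
        h, _ = self.gru(x)                     # (batch, time, 2*hidden)
        scores = self.attn(h).squeeze(-1)      # (batch, time)
        weights = F.softmax(scores, dim=1)     # attention over time steps
        context = (weights.unsqueeze(-1) * h).sum(dim=1)  # weighted sum
        return self.classifier(context)        # logits over 4 emotions


if __name__ == "__main__":
    model = BiGRUAttention()
    clips = torch.randn(8, 200, 40)  # 8 clips, 200 frames, 40-dim features
    logits = model(clips)            # assumed order: excited, relieved,
    print(logits.shape)              # relaxed, sad -> torch.Size([8, 4])
```

Attention pooling here replaces the common last-hidden-state readout, letting the classifier weight emotionally salient frames; this matches the LSTM-AM/BiGRU-AM idea in the abstract, though the exact attention form used by the authors is not specified.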