
Music Emotion Classification Method Based on Deep Learning and Explicit Sparse Attention Network.

Affiliations

School of Music, Baotou Teachers' College, Inner Mongolia University of Science and Technology, Baotou, Inner Mongolia 014030, China.

Publication Information

Comput Intell Neurosci. 2022 Jun 21;2022:3920663. doi: 10.1155/2022/3920663. eCollection 2022.

DOI: 10.1155/2022/3920663
PMID: 35774442
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9239758/
Abstract

In order to improve the accuracy of music emotion recognition and classification, this study combines an explicit sparse attention network with deep learning and proposes an effective emotion recognition and classification method for complex music data sets. First, the method preprocesses the sample data set using fine-grained segmentation and related techniques, providing a high-quality input sample set for the classification model. An explicit sparse attention network is then introduced into the deep learning network to reduce the influence of irrelevant information on the recognition results and to improve emotion classification on the music sample data set. The simulation experiments are based on a real-world data set. The results show that the proposed method achieves a recognition accuracy of 0.71 for happy emotions and 0.688 for sad emotions, demonstrating good music emotion recognition and classification ability.
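The paper's own implementation is not reproduced here. As an illustration of the general "explicit sparse attention" idea the abstract relies on — each query attends only to its top-k highest-scoring keys, and all other scores are masked out before the softmax so irrelevant positions receive exactly zero weight — a minimal NumPy sketch might look like the following. The function name, shapes, and choice of k are illustrative assumptions, not the authors' code:

```python
import numpy as np

def sparse_attention(Q, K, V, k=2):
    """Hedged sketch of explicit (top-k) sparse attention.

    Q: (n_q, d) queries, K: (n_k, d) keys, V: (n_k, d_v) values.
    Only the k largest scaled dot-product scores per query survive;
    the rest are masked to -inf so softmax assigns them zero weight.
    """
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                 # (n_q, n_k) scaled scores
    # k-th largest score in each row, used as the keep/discard threshold
    kth = np.sort(scores, axis=-1)[:, -k][:, None]
    masked = np.where(scores >= kth, scores, -np.inf)
    # numerically stable softmax over the surviving scores
    weights = np.exp(masked - masked.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights
```

The masking step is what distinguishes this from ordinary soft attention: a dense softmax gives every position a small but nonzero weight, whereas the explicit top-k mask zeroes out low-scoring (presumably irrelevant) positions entirely, which is the mechanism the abstract credits with reducing the influence of irrelevant information.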


Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/93b1/9239758/f767e91e8262/CIN2022-3920663.001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/93b1/9239758/5e25032ab268/CIN2022-3920663.002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/93b1/9239758/4b8827e09283/CIN2022-3920663.003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/93b1/9239758/b017b2096494/CIN2022-3920663.004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/93b1/9239758/81434bd7a9d3/CIN2022-3920663.005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/93b1/9239758/0e701bc39239/CIN2022-3920663.006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/93b1/9239758/0029a7d96cd6/CIN2022-3920663.007.jpg

Similar Articles

1
Music Emotion Classification Method Based on Deep Learning and Explicit Sparse Attention Network.
Comput Intell Neurosci. 2022 Jun 21;2022:3920663. doi: 10.1155/2022/3920663. eCollection 2022.
2
Music Emotion Classification Method Based on Deep Learning and Improved Attention Mechanism.
Comput Intell Neurosci. 2022 Jun 20;2022:5181899. doi: 10.1155/2022/5181899. eCollection 2022.
3
The Influence of Music on Facial Emotion Recognition in Children with Autism Spectrum Disorder and Neurotypical Children.
J Music Ther. 2017 Mar 1;54(1):55-79. doi: 10.1093/jmt/thw017.
4
Research on the Filtering and Classification Method of Interactive Music Education Resources Based on Neural Network.
Comput Intell Neurosci. 2022 Aug 17;2022:5764148. doi: 10.1155/2022/5764148. eCollection 2022.
5
The time course of emotion recognition in speech and music.
J Acoust Soc Am. 2019 May;145(5):3058. doi: 10.1121/1.5108601.
6
Music emotion recognition based on temporal convolutional attention network using EEG.
Front Hum Neurosci. 2024 Mar 28;18:1324897. doi: 10.3389/fnhum.2024.1324897. eCollection 2024.
7
Music to my ears: Age-related decline in musical and facial emotion recognition.
Psychol Aging. 2017 Dec;32(8):698-709. doi: 10.1037/pag0000203.
8
EEG Emotion Recognition Applied to the Effect Analysis of Music on Emotion Changes in Psychological Healthcare.
Int J Environ Res Public Health. 2022 Dec 26;20(1):378. doi: 10.3390/ijerph20010378.
9
A Comparison Study of Deep Learning Methodologies for Music Emotion Recognition.
Sensors (Basel). 2024 Mar 29;24(7):2201. doi: 10.3390/s24072201.
10
Algorithm Composition and Emotion Recognition Based on Machine Learning.
Comput Intell Neurosci. 2022 Jun 6;2022:1092383. doi: 10.1155/2022/1092383. eCollection 2022.

Cited By

1
Retracted: Music Emotion Classification Method Based on Deep Learning and Explicit Sparse Attention Network.
Comput Intell Neurosci. 2023 Aug 2;2023:9858032. doi: 10.1155/2023/9858032. eCollection 2023.

References

1
Does surgical smoke matter?
J Minim Invasive Surg. 2021 Mar 15;24(1):1-4. doi: 10.7602/jmis.2021.24.1.1.
2
Multi-Modal Song Mood Detection with Deep Learning.
Sensors (Basel). 2022 Jan 29;22(3):1065. doi: 10.3390/s22031065.
3
A Novel Music Emotion Recognition Model Using Neural Network Technology.
Front Psychol. 2021 Sep 28;12:760060. doi: 10.3389/fpsyg.2021.760060. eCollection 2021.