


A Lightweight Deep Learning-Based Approach for Jazz Music Generation in MIDI Format.

Affiliations

Department of Computer Science and Engineering, Mahamaya Polytechnic of Information Technology (Govt.), Hathras, Uttar Pradesh 204102, India.

Department of Computer Science & Engineering, Sunder Deep Engineering College, Ghaziabad 201002, Uttar Pradesh, India.

Publication Info

Comput Intell Neurosci. 2022 Aug 5;2022:2140895. doi: 10.1155/2022/2140895. eCollection 2022.

DOI:10.1155/2022/2140895
PMID:36035841
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9410918/
Abstract

Estimating the difficulty level of a piece of music is a meaningful part of musical learning; a learner cannot progress without a precise estimate. The problem is not trivial, being complicated by the subjectivity of the content and the scarcity of data. In this paper, a lightweight model is proposed that generates original music content using deep learning and can generate music conditioned on a specific genre. The paper discusses a lightweight deep learning-based approach for jazz music generation in MIDI format. In this work, the chosen genre is jazz, and the selected songs are classic numbers composed by various artists. All songs are in MIDI format, and individual files may differ in pace or tone; it is therefore prudent to ensure that the chosen dataset is free of such differences and resembles the desired final output. A model is trained to take part of a music file as input and produce its continuation, and the generated result should be similar to the dataset given as input. Moreover, the proposed model can also generate music with a particular instrument.
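The abstract describes training a model on MIDI note sequences so that, given part of a piece, it produces a continuation. As a minimal illustrative stand-in for the paper's deep model, the sketch below fits a single softmax layer (a "neural bigram" next-pitch predictor) over MIDI pitch numbers in plain NumPy; the lick, vocabulary, and hyperparameters are invented for this example and are not taken from the paper.

```python
import numpy as np

# Toy "jazz" lick as MIDI pitch numbers; repeating it yields a tiny
# training song with a deterministic next-note structure.
LICK = [60, 63, 65, 66, 67, 70]
notes = LICK * 50
vocab = sorted(set(notes))
idx = {p: i for i, p in enumerate(vocab)}
V = len(vocab)

# (current note -> next note) training pairs.
X = np.array([idx[p] for p in notes[:-1]])
Y = np.array([idx[p] for p in notes[1:]])

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(V, V))  # logits for next pitch = W[current]

# Plain gradient descent on the cross-entropy of the next-note softmax.
for _ in range(300):
    logits = W[X]
    logits -= logits.max(axis=1, keepdims=True)
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    grad = probs
    grad[np.arange(len(Y)), Y] -= 1.0          # d(loss)/d(logits)
    gW = np.zeros_like(W)
    np.add.at(gW, X, grad / len(X))            # accumulate per-row gradients
    W -= 1.0 * gW

def continue_melody(seed_pitch, steps):
    """Greedily continue the melody from one seed note."""
    out, cur = [], idx[seed_pitch]
    for _ in range(steps):
        cur = int(np.argmax(W[cur]))
        out.append(vocab[cur])
    return out

print(continue_melody(60, 6))  # → [63, 65, 66, 67, 70, 60]
```

In place of the single layer above, the paper's approach would use a deeper sequence model and a richer MIDI event encoding (durations, velocities, instruments); the training objective, predicting the continuation of an excerpt, is the same.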


Figures
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2e22/9410918/3ff5fad18b40/CIN2022-2140895.001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2e22/9410918/257b39b66640/CIN2022-2140895.002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2e22/9410918/fe2fb435fc68/CIN2022-2140895.003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2e22/9410918/629addb4687d/CIN2022-2140895.004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2e22/9410918/20a11e96cc5f/CIN2022-2140895.005.jpg

Similar Articles

1
A Lightweight Deep Learning-Based Approach for Jazz Music Generation in MIDI Format.
Comput Intell Neurosci. 2022 Aug 5;2022:2140895. doi: 10.1155/2022/2140895. eCollection 2022.
2
A transformers-based approach for fine and coarse-grained classification and generation of MIDI songs and soundtracks.
PeerJ Comput Sci. 2023 Jun 19;9:e1410. doi: 10.7717/peerj-cs.1410. eCollection 2023.
3
Creativity and personality in classical, jazz and folk musicians.
Pers Individ Dif. 2014 Jun;63(100):117-121. doi: 10.1016/j.paid.2014.01.064.
4
The Classification of Music and Art Genres under the Visual Threshold of Deep Learning.
Comput Intell Neurosci. 2022 May 18;2022:4439738. doi: 10.1155/2022/4439738. eCollection 2022.
5
The sound of music: differentiating musicians using a fast, musical multi-feature mismatch negativity paradigm.
Neuropsychologia. 2012 Jun;50(7):1432-43. doi: 10.1016/j.neuropsychologia.2012.02.028. Epub 2012 Mar 6.
6
Construction of Intelligent Recognition and Learning Education Platform of National Music Genre Under Deep Learning.
Front Psychol. 2022 May 26;13:843427. doi: 10.3389/fpsyg.2022.843427. eCollection 2022.
7
Musical preferences and learning outcome of medical students in cadaver dissection laboratory: A Nigerian survey.
Ann Anat. 2016 Nov;208:228-233. doi: 10.1016/j.aanat.2016.07.010. Epub 2016 Aug 6.
8
Cover versions as an impact indicator in popular music: A quantitative network analysis.
PLoS One. 2021 Apr 19;16(4):e0250212. doi: 10.1371/journal.pone.0250212. eCollection 2021.
9
Multi-Modal Song Mood Detection with Deep Learning.
Sensors (Basel). 2022 Jan 29;22(3):1065. doi: 10.3390/s22031065.
10
Auditory Profiles of Classical, Jazz, and Rock Musicians: Genre-Specific Sensitivity to Musical Sound Features.
Front Psychol. 2016 Jan 7;6:1900. doi: 10.3389/fpsyg.2015.01900. eCollection 2015.

Cited By

1
A transformers-based approach for fine and coarse-grained classification and generation of MIDI songs and soundtracks.
PeerJ Comput Sci. 2023 Jun 19;9:e1410. doi: 10.7717/peerj-cs.1410. eCollection 2023.

References

1
On the Adaptability of Recurrent Neural Networks for Real-Time Jazz Improvisation Accompaniment.
Front Artif Intell. 2021 Feb 12;3:508727. doi: 10.3389/frai.2020.508727. eCollection 2020.