


Understanding Musical Predictions With an Embodied Interface for Musical Machine Learning.

Author Information

Charles Patrick Martin, Kyrre Glette, Tønnes Frostad Nygaard, Jim Torresen

Affiliations

Research School of Computer Science, Australian National University, Canberra, ACT, Australia.

Department of Informatics, University of Oslo, Oslo, Norway.

Publication Information

Front Artif Intell. 2020 Mar 3;3:6. doi: 10.3389/frai.2020.00006. eCollection 2020.

DOI: 10.3389/frai.2020.00006
PMID: 33733126
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC7861300/
Abstract

Machine-learning models of music often exist outside the worlds of musical performance practice and abstracted from the physical gestures of musicians. In this work, we consider how a recurrent neural network (RNN) model of simple music gestures may be integrated into a physical instrument so that predictions are sonically and physically entwined with the performer's actions. We introduce EMPI, an embodied musical prediction interface that simplifies musical interaction and prediction to just one dimension of continuous input and output. The predictive model is a mixture density RNN trained to estimate the performer's next physical input action and the time at which this will occur. Predictions are represented sonically through synthesized audio, and physically with a motorized output indicator. We use EMPI to investigate how performers understand and exploit different predictive models to make music through a controlled study of performances with different models and levels of physical feedback. We show that while performers often favor a model trained on human-sourced data, they find different musical affordances in models trained on synthetic, and even random, data. Physical representation of predictions seemed to affect the length of performances. This work contributes new understandings of how musicians use generative ML models in real-time performance backed up by experimental evidence. We argue that a constrained musical interface can expose the affordances of embodied predictive interactions.
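The abstract describes a mixture density RNN that outputs, at each step, mixture parameters for the performer's next one-dimensional input position and the time until that event. As a rough sketch of the sampling step only (this is not the authors' implementation; the function names, parameter values, and the clamp/log-time choices are illustrative assumptions), a prediction can be drawn from the mixture like this:

```python
import math
import random

def sample_mixture(weights, means, stds, rng):
    """Sample one value from a 1-D Gaussian mixture, as a mixture
    density network's output head is typically sampled."""
    # Pick a mixture component in proportion to its weight,
    # then draw from that component's Gaussian.
    k = rng.choices(range(len(weights)), weights=weights)[0]
    return rng.gauss(means[k], stds[k])

def predict_next_event(pos_params, time_params, rng):
    """Turn mixture parameters into a (position, time-delta) prediction.

    pos_params / time_params are (weights, means, stds) tuples, standing
    in for what the RNN would emit after seeing the performer's input.
    """
    # The interface has one continuous input dimension, so clamp to [0, 1].
    pos = min(1.0, max(0.0, sample_mixture(*pos_params, rng)))
    # Modelling time in log-space keeps the sampled inter-event time positive.
    dt = math.exp(sample_mixture(*time_params, rng))
    return pos, dt

# Hypothetical mixture parameters for one prediction step.
rng = random.Random(0)
pos, dt = predict_next_event(
    pos_params=([0.7, 0.3], [0.2, 0.8], [0.05, 0.05]),
    time_params=([1.0], [-1.0], [0.3]),
    rng=rng,
)
```

In the system the abstract describes, a sampled `(pos, dt)` pair would then be rendered both sonically (synthesized audio) and physically (the motorized indicator) after the predicted delay.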


Figures (PMC full-text images):

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/de50/7861300/b9647a5bb67e/frai-03-00006-g0001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/de50/7861300/0207464de4f7/frai-03-00006-g0002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/de50/7861300/4e628ff34157/frai-03-00006-g0003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/de50/7861300/ef0b24517bc4/frai-03-00006-g0004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/de50/7861300/a886e9c85b2a/frai-03-00006-g0005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/de50/7861300/59ef2d520f6a/frai-03-00006-g0006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/de50/7861300/541aa32767a8/frai-03-00006-g0007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/de50/7861300/b7d90e6c84fe/frai-03-00006-g0008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/de50/7861300/9272099b4dde/frai-03-00006-g0009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/de50/7861300/e343bbf079ba/frai-03-00006-g0010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/de50/7861300/13479ef1da46/frai-03-00006-g0011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/de50/7861300/718ffdd46ed2/frai-03-00006-g0012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/de50/7861300/8ac8ebc6488e/frai-03-00006-g0013.jpg

Similar Articles

1. Understanding Musical Predictions With an Embodied Interface for Musical Machine Learning.
Front Artif Intell. 2020 Mar 3;3:6. doi: 10.3389/frai.2020.00006. eCollection 2020.
2. Collaborative Musical Creativity: How Ensembles Coordinate Spontaneity.
Front Psychol. 2018 Jul 24;9:1285. doi: 10.3389/fpsyg.2018.01285. eCollection 2018.
3. Knowing too little or too much: the effects of familiarity with a co-performer's part on interpersonal coordination in musical ensembles.
Front Psychol. 2013 Jun 25;4:368. doi: 10.3389/fpsyg.2013.00368. eCollection 2013.
4. Temporal Coordination in Piano Duet Networked Music Performance (NMP): Interactions Between Acoustic Transmission Latency and Musical Role Asymmetries.
Front Psychol. 2021 Sep 24;12:707090. doi: 10.3389/fpsyg.2021.707090. eCollection 2021.
5. New Interfaces and Approaches to Machine Learning When Classifying Gestures within Music.
Entropy (Basel). 2020 Dec 7;22(12):1384. doi: 10.3390/e22121384.
6. Exploring the Multi-Layered Affordances of Composing and Performing Interactive Music with Responsive Technologies.
Front Psychol. 2017 Sep 29;8:1701. doi: 10.3389/fpsyg.2017.01701. eCollection 2017.
7. Creativity in Generative Musical Networks: Evidence From Two Case Studies.
Front Robot AI. 2021 Aug 2;8:680586. doi: 10.3389/frobt.2021.680586. eCollection 2021.
8. Cross-modal interactions in the perception of musical performance.
Cognition. 2006 Aug;101(1):80-113. doi: 10.1016/j.cognition.2005.09.003. Epub 2005 Nov 9.
9. Action and familiarity effects on self and other expert musicians' Laban effort-shape analyses of expressive bodily behaviors in instrumental music performance: a case study approach.
Front Psychol. 2014 Oct 29;5:1201. doi: 10.3389/fpsyg.2014.01201. eCollection 2014.
10. Classical creativity: A functional magnetic resonance imaging (fMRI) investigation of pianist and improviser Gabriela Montero.
Neuroimage. 2020 Apr 1;209:116496. doi: 10.1016/j.neuroimage.2019.116496. Epub 2019 Dec 30.