
Application of multimodal perception scenario construction based on IoT technology in university music teaching.

Author information

Gao Yuexia

Affiliation

Music College, Hubei Normal University, Huangshi, Hubei, China.

Publication information

PeerJ Comput Sci. 2023 Oct 25;9:e1602. doi: 10.7717/peerj-cs.1602. eCollection 2023.

Abstract

In the contemporary landscape of diversified talent cultivation, enhancing education through intelligent means and expediting talent development are paramount pursuits. Within the domain of instrumental music education, beyond merely listening to student performances, it becomes imperative to assess their movements, thus furnishing additional insights to fuel their subsequent growth. This article introduces a novel multimodal information fusion evaluation approach, combining sound information and movement data to address the challenge of evaluating students' learning status in college music instruction. The proposed framework leverages Internet of Things (IoT) technology, using strategically positioned microphones and cameras within the local area network to accomplish data acquisition. Sound features are extracted using Mel-scale frequency cepstral coefficients (MFCC), while the OpenPose framework and convolutional neural networks (CNN) are harnessed to extract action features during students' performances. Subsequently, feature-level fusion is achieved through a CNN, culminating in the evaluation of students' academic efficacy by a fully connected network (FCN) and an activation function. Compared against in-class evaluations by the teacher, this approach achieves an accuracy of 95.7% across the three evaluation categories of Excellent, Good, and Failed. This breakthrough offers novel insights for the future of music teaching and interactive class evaluations while broadening the applications of multimodal information fusion methods.
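The abstract names MFCC as the sound-feature extractor but gives no extraction settings. As a point of reference, the standard MFCC pipeline (framing + windowing, power spectrum, triangular mel filterbank, log, DCT-II) can be sketched in plain NumPy; every parameter value below (sample rate, FFT size, hop, filter and coefficient counts) is an illustrative default, not taken from the paper:

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(signal, sr=16000, n_fft=512, hop=256, n_mels=26, n_coef=13):
    """Return an (n_coef, n_frames) MFCC matrix for a mono signal."""
    # 1) Frame the signal and apply a Hamming window
    n_frames = 1 + (len(signal) - n_fft) // hop
    window = np.hamming(n_fft)
    frames = np.stack([signal[i * hop : i * hop + n_fft] * window
                       for i in range(n_frames)])
    # 2) Power spectrum of each frame
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # 3) Triangular mel filterbank: filter centers equally spaced on the mel scale
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        fbank[m - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)  # rising edge
        fbank[m - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)  # falling edge
    # 4) Log mel energies, then DCT-II to decorrelate; keep the first n_coef
    logmel = np.log(power @ fbank.T + 1e-10)
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_coef), 2 * n + 1) / (2 * n_mels))
    return (logmel @ dct.T).T

# Example: one second of a 440 Hz tone
sr = 16000
t = np.arange(sr) / sr
feats = mfcc(np.sin(2 * np.pi * 440 * t), sr)
print(feats.shape)  # (n_coef, n_frames) -> (13, 61)
```

In practice a library implementation (e.g. `librosa.feature.mfcc`) would be used; the sketch only makes the transformation behind the paper's "sound feature extraction" step concrete.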

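The final stage of the described pipeline, feature-level fusion followed by a fully connected network and an activation function over three grade categories, can be illustrated with a minimal NumPy sketch. The feature dimensions, random weights, and single linear layer below are illustrative stand-ins, not the paper's actual CNN/FCN architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    """Activation function mapping logits to class probabilities."""
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical pooled feature vectors (sizes are illustrative assumptions):
audio_feat = rng.standard_normal(128)  # e.g. pooled CNN output over MFCC features
pose_feat = rng.standard_normal(128)   # e.g. pooled CNN output over OpenPose keypoints

# Feature-level fusion: concatenate the two modality vectors
fused = np.concatenate([audio_feat, pose_feat])  # shape (256,)

# Fully connected layer + softmax over the three evaluation grades
W = rng.standard_normal((3, 256)) * 0.01  # untrained placeholder weights
b = np.zeros(3)
probs = softmax(W @ fused + b)

labels = ["Excellent", "Good", "Failed"]
print(labels[int(np.argmax(probs))], probs.round(3))
```

In the paper the fusion is performed by a CNN and the classifier is trained; this sketch only shows the data flow from two modality feature vectors to a three-way grade prediction.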

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0d78/10703108/a5dfa6696168/peerj-cs-09-1602-g001.jpg
