Han Yupeng, Han Lin, Zeng Chun, Zhao Wei
Jiangxi University of Finance and Economics, Nanchang, China.
Nanchang Jiaotong Institute, Nanchang, China.
Sci Rep. 2025 Apr 9;15(1):12200. doi: 10.1038/s41598-025-97003-5.
Music education in higher education institutions has traditionally followed a one-size-fits-all teaching model, which limits student interaction and hinders personalized learning. This approach does not align with the expectations of modern students, who seek a more engaging and effective learning experience. With the growing integration of Virtual Reality (VR) technology in education, its immersive and interactive features offer new possibilities for enhancing music instruction in colleges and universities. To explore these possibilities, this study proposes an Intelligent Interactive Music Teaching (IIMT) model that combines VR technology with Deep Convolutional Generative Adversarial Networks (DCGAN) and the Deep Deterministic Policy Gradient (DDPG) algorithm. The study utilizes publicly available music teaching videos and virtual environment interaction data. After applying data cleaning, noise reduction, and normalization techniques, the processed data is used to construct training and validation datasets. Experimental results indicate that the IIMT model generates images and audio with detail richness and clarity scores ranging from 0.7 to 1.0. The optimized system maintains a response time between 85 and 115 milliseconds and an average frame rate of 55 to 65 frames per second, ensuring smooth interaction. In a "vocal training" scenario, the IIMT model achieves an efficiency score of 0.96 and a task completion rate of 98.77%, demonstrating its effectiveness in improving instructional quality and enhancing students' learning experiences. These findings suggest that the IIMT model can serve as a valuable tool for educators and institutions seeking to modernize music education through interactive and intelligent teaching methodologies.
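The preprocessing pipeline summarized in the abstract (data cleaning, noise reduction, normalization) could be sketched along the following lines. This is a minimal illustration only: the abstract does not specify the exact techniques, so the moving-average smoothing and min-max normalization used here are assumptions, and the function names are hypothetical.

```python
import numpy as np

def denoise(x, k=3):
    # Moving-average smoothing as a stand-in for the paper's
    # unspecified noise-reduction step (assumed technique).
    kernel = np.ones(k) / k
    return np.convolve(x, kernel, mode="same")

def normalize(x):
    # Min-max scaling to [0, 1]; one common normalization
    # choice for preparing data before training (assumed here).
    lo, hi = x.min(), x.max()
    if hi == lo:
        return np.zeros_like(x)
    return (x - lo) / (hi - lo)

# Example: a raw interaction signal is smoothed, then scaled,
# before being split into training and validation sets.
raw = np.array([0.2, 5.0, 0.4, 0.6, 4.8, 0.8], dtype=float)
clean = normalize(denoise(raw))
```

After such preprocessing, the resulting arrays would be partitioned into the training and validation datasets mentioned in the study.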