Wang Fei, Fu Shuai, Abza Francis
School of Educational Science, Jilin Normal College of Engineering Technology, Jilin, 130052, Jilin, China.
Changchun Humanities and Sciences College, Changchun, 130117, Jilin, China.
Heliyon. 2024 Jul 4;10(14):e34067. doi: 10.1016/j.heliyon.2024.e34067. eCollection 2024 Jul 30.
This paper introduces a new approach for classifying music genres. The proposed approach transforms an audio signal into a unified representation, a sound spectrum, from which texture features are extracted using an enhanced Rigdelet Neural Network (RNN). The RNN is optimized with an improved partial reinforcement effect optimizer (IPREO), which effectively avoids local optima and enhances the RNN's generalization capability. The GTZAN dataset was used in experiments to assess the effectiveness of the proposed RNN/IPREO model for music genre classification. The results show an accuracy of 92% when a combination of spectral centroid, Mel-spectrogram, and Mel-frequency cepstral coefficient (MFCC) features is used. This performance significantly outperformed K-Means (58%) and Support Vector Machines (up to 68%). The RNN/IPREO model also outperformed several deep learning architectures, including feed-forward Neural Networks (65%), RNNs (84%), CNNs (88%), DNNs (86%), VGG-16 (91%), and ResNet-50 (90%). Notably, the RNN/IPREO model matched, and sometimes surpassed, the scores of well-known deep models such as VGG-16, ResNet-50, and RNN-LSTM. This highlights the strength of its hybrid CNN-bidirectional-RNN design, combined with the IPREO parameter optimization algorithm, for extracting intricate and sequential auditory features.
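As a minimal illustration of one of the features named above, the sketch below computes a spectral centroid (the magnitude-weighted mean frequency of a signal's spectrum) with plain NumPy. The sample rate and the synthetic 440 Hz test tone are assumptions standing in for a GTZAN excerpt; the paper's actual feature-extraction pipeline is not specified in the abstract.

```python
import numpy as np

def spectral_centroid(signal: np.ndarray, sr: int) -> float:
    """Magnitude-weighted mean frequency (Hz) of the signal's spectrum."""
    mag = np.abs(np.fft.rfft(signal))                 # one-sided magnitude spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)  # bin center frequencies in Hz
    return float((freqs * mag).sum() / mag.sum())

# Synthetic 1-second 440 Hz tone as a stand-in for real audio.
sr = 22050
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440.0 * t)

print(round(spectral_centroid(tone, sr), 1))  # ~440.0 for a pure tone
```

For a pure tone the centroid sits at the tone's frequency; for real music it rises with the proportion of high-frequency energy, which is why it is a common "brightness" descriptor in genre classification.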