Lim Kian Ming, Lee Chin Poo, Lee Zhi Yang, Alqahtani Ali
Faculty of Information Science and Technology, Multimedia University, Melaka 75450, Malaysia.
DZH International Sdn. Bhd., Kuala Lumpur 55100, Malaysia.
Sensors (Basel). 2023 Nov 10;23(22):9084. doi: 10.3390/s23229084.
Recent successes in deep learning have inspired researchers to apply deep neural networks to Acoustic Event Classification (AEC). While deep learning methods can train effective AEC models, their high complexity makes them susceptible to overfitting. In this paper, we introduce EnViTSA, an approach that tackles key challenges in AEC. EnViTSA combines an ensemble of Vision Transformers with SpecAugment, a spectrogram-level data augmentation technique, to significantly enhance AEC performance. Raw acoustic signals are transformed into log Mel-spectrograms using the Short-Time Fourier Transform, yielding a fixed-size spectrogram representation. To address data scarcity and overfitting, we employ SpecAugment to generate additional training samples through time masking and frequency masking. The core of EnViTSA lies in its ensemble of pre-trained Vision Transformers, harnessing the strengths of the Vision Transformer architecture. This ensemble approach not only reduces inductive biases but also effectively mitigates overfitting. We evaluate the EnViTSA method on three benchmark datasets: ESC-10, ESC-50, and UrbanSound8K. The experimental results underscore the efficacy of our approach, which achieves accuracies of 93.50%, 85.85%, and 83.20% on ESC-10, ESC-50, and UrbanSound8K, respectively. EnViTSA represents a substantial advance in AEC, demonstrating the potential of Vision Transformers and SpecAugment in the acoustic domain.
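The following is a minimal sketch, in Python with torchaudio and timm, of the pipeline the abstract describes: waveform to log Mel-spectrogram via the STFT, SpecAugment time and frequency masking, and averaged predictions from an ensemble of pre-trained Vision Transformers. All hyperparameters (FFT size, hop length, number of Mel bands, mask widths) and the choice of ViT variants are illustrative assumptions; the abstract does not specify them, and averaging softmax probabilities is only one common ensemble rule.

```python
# Sketch of an EnViTSA-style pipeline: log Mel-spectrogram extraction,
# SpecAugment masking, and an ensemble of pre-trained Vision Transformers.
# Hyperparameters and model variants below are assumptions, not the paper's.
import torch
import torchaudio
import timm

SAMPLE_RATE = 22050  # assumed resampling rate for ESC/UrbanSound8K clips
NUM_CLASSES = 50     # e.g. ESC-50

# 1. Raw waveform -> log Mel-spectrogram via STFT.
to_mel = torchaudio.transforms.MelSpectrogram(
    sample_rate=SAMPLE_RATE, n_fft=1024, hop_length=512, n_mels=128)
to_db = torchaudio.transforms.AmplitudeToDB(top_db=80)

# 2. SpecAugment: random frequency and time masking on the spectrogram.
freq_mask = torchaudio.transforms.FrequencyMasking(freq_mask_param=24)
time_mask = torchaudio.transforms.TimeMasking(time_mask_param=48)

def make_input(waveform: torch.Tensor, train: bool = True) -> torch.Tensor:
    """Convert a mono waveform (1, samples) into a (1, 1, 224, 224) ViT input."""
    spec = to_db(to_mel(waveform))        # (1, n_mels, frames)
    if train:                             # augment during training only
        spec = time_mask(freq_mask(spec))
    spec = spec.unsqueeze(0)              # add batch dimension
    return torch.nn.functional.interpolate(  # resize to the ViT input size
        spec, size=(224, 224), mode="bilinear", align_corners=False)

# 3. Ensemble of pre-trained Vision Transformers (variants are assumptions);
# in_chans=1 adapts the patch embedding to single-channel spectrograms.
ensemble = [
    timm.create_model(name, pretrained=True,
                      num_classes=NUM_CLASSES, in_chans=1).eval()
    for name in ("vit_base_patch16_224", "vit_small_patch16_224")
]

@torch.no_grad()
def predict(waveform: torch.Tensor) -> torch.Tensor:
    """Average softmax probabilities over the ensemble members."""
    x = make_input(waveform, train=False)
    probs = torch.stack([m(x).softmax(dim=-1) for m in ensemble])
    return probs.mean(dim=0)              # (1, NUM_CLASSES)
```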