Liu Miao, Wang Jing, Wang Fei, Xiang Fei, Chen Jingdong
IEEE Trans Neural Netw Learn Syst. 2025 Jan;36(1):174-187. doi: 10.1109/TNNLS.2023.3321076. Epub 2025 Jan 7.
Traditionally, speech quality evaluation relies on subjective assessments or intrusive methods that require reference signals or additional equipment. In recent years, however, non-intrusive speech quality assessment has emerged as a promising alternative, attracting considerable attention from researchers and industry professionals. This article presents a deep learning-based method that exploits large-scale intrusive simulated data to improve the accuracy and generalization of non-intrusive methods. The major contributions of this article are as follows. First, it presents a data simulation method that generates degraded speech signals and labels their speech quality with the perceptual objective listening quality assessment (POLQA). The generated data are shown to be useful for pretraining the deep learning models. Second, it proposes applying an adversarial speaker classifier to reduce the impact of speaker-dependent information on speech quality evaluation. Third, an autoencoder-based deep learning scheme is proposed, following the principles of representation learning and adversarial training (AT), which transfers the knowledge learned from a large amount of simulated speech data labeled by POLQA. With the help of discriminative representations extracted from the autoencoder, the prediction model can be trained well on a relatively small amount of speech data labeled through subjective listening tests. Fourth, an end-to-end speech quality evaluation neural network is developed that takes magnitude and phase spectral features as its inputs. This phase-aware model is more accurate than a model using only the magnitude spectral features. A large number of experiments are carried out on three datasets: one simulated, with labels obtained using POLQA, and two recorded, with labels obtained using subjective listening tests.
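The phase-aware input described in the fourth contribution can be illustrated with a minimal numpy sketch. This is a hypothetical front end, not the paper's exact feature pipeline: it frames a signal, applies a Hann window, and stacks the magnitude and phase spectrograms as two input channels for a quality-prediction network. The frame length, hop size, and function name are illustrative assumptions.

```python
import numpy as np

def phase_aware_features(signal, frame_len=512, hop=256):
    """Hypothetical front end: return magnitude and phase spectrograms
    stacked as two channels, shape (2, n_frames, n_bins)."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    spec = np.fft.rfft(frames, axis=-1)   # one-sided spectrum per frame
    magnitude = np.abs(spec)
    phase = np.angle(spec)                # radians in [-pi, pi]
    return np.stack([magnitude, phase])   # two-channel network input

rng = np.random.default_rng(0)
feats = phase_aware_features(rng.standard_normal(16000))  # ~1 s at 16 kHz
```

A magnitude-only baseline would keep just the first channel; the paper's finding is that feeding both channels to the network improves prediction accuracy.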
The results show that the presented phase-aware method improves the performance of the baseline model, and the proposed model with latent representations extracted from the adversarial autoencoder (AAE) outperforms the state-of-the-art objective quality assessment methods, reducing the root mean square error (RMSE) by 10.5% and 12.2% on the Beijing Institute of Technology (BIT) dataset and the Tencent Corpus, respectively. The code and supplementary materials are available at https://github.com/liushenme/AAE-SQA.
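One common way to realize an adversarial speaker classifier of the kind described in the second contribution is a gradient-reversal layer: the classifier tries to identify the speaker from the latent representation, while the reversed gradient pushes the encoder toward speaker-invariant features. The toy numpy class below sketches only that mechanism; it is an assumption for illustration, not the paper's implementation (which may use a different adversarial scheme).

```python
import numpy as np

class GradientReversal:
    """Toy sketch of a gradient-reversal layer: identity in the forward
    pass, gradient scaled by -lam in the backward pass, so the encoder
    is trained to remove speaker-dependent information."""
    def __init__(self, lam=1.0):
        self.lam = lam

    def forward(self, x):
        return x                      # activations pass through unchanged

    def backward(self, grad_output):
        return -self.lam * grad_output  # reverse and scale the gradient

grl = GradientReversal(lam=0.5)
latent = np.array([1.0, -2.0])        # encoder output (latent features)
grad = np.array([0.3, 0.4])           # speaker-classifier gradient
fwd = grl.forward(latent)             # fed to the speaker classifier as-is
bwd = grl.backward(grad)              # encoder sees the reversed gradient
```

In a full training loop, the speaker classifier minimizes its classification loss while the encoder, receiving the reversed gradient, maximizes it, yielding quality-relevant but speaker-invariant representations.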