Lee Geon Woo, Kim Hong Kook
AI Graduate School, Gwangju Institute of Science and Technology, Gwangju 61005, Republic of Korea.
School of Electrical Engineering and Computer Science, Gwangju Institute of Science and Technology, Gwangju 61005, Republic of Korea.
Sensors (Basel). 2024 Apr 17;24(8):2573. doi: 10.3390/s24082573.
This paper proposes a joint training approach for a pipeline comprising speech enhancement (SE) and automatic speech recognition (ASR) models, in which an acoustic tokenizer is included to transfer linguistic information from the ASR model to the SE model. The acoustic tokenizer takes the outputs of the ASR encoder and provides pseudo-labels through K-means clustering. To transfer this linguistic information, represented by the pseudo-labels, from the acoustic tokenizer to the SE model, a cluster-based pairwise contrastive (CBPC) loss function is proposed. This self-supervised contrastive loss is combined with an information noise contrastive estimation (infoNCE) loss function; the combined loss prevents the SE model from overfitting to outlier samples and captures the pronunciation variability among samples sharing the same pseudo-label. The effectiveness of the proposed CBPC loss function is evaluated on a noisy LibriSpeech dataset by measuring both speech quality scores and the word error rate (WER). The experimental results reveal that the proposed joint training approach with the CBPC loss function achieves a lower WER than conventional joint training approaches. In addition, the speech quality scores of the SE model trained with the proposed approach are higher than those of a standalone SE model and of SE models trained with conventional joint training approaches. An ablation study investigating the effects of different combinations of loss functions on the speech quality scores and WER shows that the proposed CBPC loss combined with infoNCE contributes to a reduced WER and an increase in most of the speech quality scores.
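The two building blocks named in the abstract can be illustrated in isolation: K-means over encoder outputs yields pseudo-labels, and a contrastive loss then treats same-label pairs as positives and different-label pairs as negatives. The following is a minimal NumPy sketch of that idea; the function names, farthest-point initialization, and temperature value are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def kmeans_pseudo_labels(feats, k, iters=20, seed=0):
    """Assign a pseudo-label to each frame-level feature via K-means.

    feats: (N, D) array, e.g. ASR-encoder outputs. Returns (N,) int labels.
    Uses a simple farthest-point initialization (an assumption for stability).
    """
    rng = np.random.default_rng(seed)
    centers = [feats[rng.integers(len(feats))]]
    for _ in range(k - 1):
        d = np.min([((feats - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(feats[int(d.argmax())])
    centers = np.array(centers)
    for _ in range(iters):
        d = ((feats[:, None] - centers[None]) ** 2).sum(-1)  # (N, k) sq. dists
        labels = d.argmin(1)
        for c in range(k):
            if (labels == c).any():
                centers[c] = feats[labels == c].mean(0)
    return labels

def cluster_contrastive_loss(z, labels, tau=0.1):
    """Cluster-based pairwise contrastive loss (sketch).

    Embeddings with the same pseudo-label are positives for each other;
    all remaining pairs act as negatives, infoNCE-style.
    """
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # cosine similarity
    sim = np.exp(z @ z.T / tau)
    np.fill_diagonal(sim, 0.0)                        # exclude self-pairs
    pos = labels[:, None] == labels[None, :]
    np.fill_diagonal(pos, False)
    losses = [-np.log(sim[i][pos[i]].sum() / sim[i].sum())
              for i in range(len(z)) if pos[i].any()]
    return float(np.mean(losses))
```

In a joint SE/ASR pipeline, `z` would come from the SE branch while the pseudo-labels come from the frozen-ASR-encoder clustering, so minimizing the loss pulls enhanced-speech embeddings toward the linguistic structure discovered by the tokenizer.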