School of Information Science and Technology, University of Science and Technology of China, Hefei 230022, China.
iFLYTEK Research, iFLYTEK Co., Ltd., Hefei 230088, China.
Sensors (Basel). 2020 Nov 27;20(23):6784. doi: 10.3390/s20236784.
Automatic speaker verification provides a flexible and effective way to perform biometric authentication. Previous deep learning-based methods have demonstrated promising results, but several problems still lack satisfactory solutions. In prior work on speaker-discriminative neural networks, the representation of the target speaker is treated as fixed when compared against utterances from different speakers, and the joint information between enrollment and evaluation utterances is ignored. In this paper, we propose to combine CNN-based feature learning with a bidirectional attention mechanism to achieve better performance with only one enrollment utterance. The evaluation-enrollment joint information is exploited to provide interactive features through bidirectional attention. In addition, we introduce an individual cost function to identify the phonetic contents, which helps to compute the attention scores more specifically. These interactive features are complementary to the constant ones, which are extracted from individual speakers separately and do not vary with the evaluation utterances. The proposed method achieved a competitive equal error rate of 6.26% on the internal "DAN DAN NI HAO" benchmark dataset with 1250 utterances and outperformed various baseline methods, including the traditional i-vector/PLDA, d-vector, self-attention, and sequence-to-sequence attention models.
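To make the core idea concrete, the following is a minimal sketch of how a bidirectional attention step between enrollment and evaluation frame features could look. This is an illustrative NumPy implementation assuming scaled dot-product scoring, not the exact architecture from the paper; the function and variable names (`bidirectional_attention`, `enroll`, `evaluate`) are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def bidirectional_attention(enroll, evaluate):
    """Illustrative evaluation-enrollment joint attention (assumption, not the paper's exact model).

    enroll:   (Te, d) frame-level features of the enrollment utterance
    evaluate: (Tv, d) frame-level features of the evaluation utterance
    Returns interactive features for both directions.
    """
    d = enroll.shape[1]
    # Shared similarity matrix between every enrollment/evaluation frame pair.
    scores = enroll @ evaluate.T / np.sqrt(d)        # (Te, Tv)
    # enroll -> evaluate: each enrollment frame attends over evaluation frames.
    enroll_ctx = softmax(scores, axis=1) @ evaluate  # (Te, d)
    # evaluate -> enroll: each evaluation frame attends over enrollment frames.
    eval_ctx = softmax(scores.T, axis=1) @ enroll    # (Tv, d)
    return enroll_ctx, eval_ctx
```

Because both attended outputs depend on the specific evaluation utterance, they are "interactive" in the paper's sense, in contrast to a fixed speaker embedding extracted from the enrollment utterance alone.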