
Bidirectional Attention for Text-Dependent Speaker Verification.

Affiliations

School of Information Science and Technology, University of Science and Technology of China, Hefei 230022, China.

iFLYTEK Research, iFLYTEK Co., Ltd., Hefei 230088, China.

Publication Information

Sensors (Basel). 2020 Nov 27;20(23):6784. doi: 10.3390/s20236784.

Abstract

Automatic speaker verification provides a flexible and effective way for biometric authentication. Previous deep learning-based methods have demonstrated promising results, but a few problems still require better solutions. In prior work on speaker-discriminative neural networks, the representation of the target speaker is treated as fixed when compared against utterances from different speakers, and the joint information between enrollment and evaluation utterances is ignored. In this paper, we propose to combine CNN-based feature learning with a bidirectional attention mechanism to achieve better performance with only one enrollment utterance. The evaluation-enrollment joint information is exploited to provide interactive features through bidirectional attention. In addition, we introduce an individual cost function to identify the phonetic contents, which contributes to calculating the attention score more specifically. These interactive features are complementary to the constant ones, which are extracted from individual speakers separately and do not vary with the evaluation utterances. The proposed method achieved a competitive equal error rate of 6.26% on the internal "DAN DAN NI HAO" benchmark dataset with 1250 utterances and outperformed various baseline methods, including the traditional i-vector/PLDA, d-vector, self-attention, and sequence-to-sequence attention models.
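The bidirectional attention described above can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; the shapes, scaled dot-product scoring, and function names are assumptions chosen for illustration. Each enrollment frame attends over the evaluation utterance and vice versa, yielding "interactive" features that depend jointly on both utterances, unlike a fixed enrollment embedding.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def bidirectional_attention(enroll, evalu):
    """Cross-attend enrollment and evaluation frame-level features.

    enroll: (Te, d) features of the enrollment utterance
    evalu:  (Tv, d) features of the evaluation utterance
    Returns interactive features for both directions.
    """
    d = enroll.shape[1]
    # Frame-by-frame similarity matrix, shape (Te, Tv).
    scores = enroll @ evalu.T / np.sqrt(d)
    # Enrollment -> evaluation: each enrollment frame attends
    # over all evaluation frames (softmax across columns).
    ctx_enroll = softmax(scores, axis=1) @ evalu      # (Te, d)
    # Evaluation -> enrollment: each evaluation frame attends
    # over all enrollment frames (softmax across rows).
    ctx_eval = softmax(scores, axis=0).T @ enroll     # (Tv, d)
    return ctx_enroll, ctx_eval

# Toy example with random features standing in for CNN outputs.
rng = np.random.default_rng(0)
E = rng.standard_normal((50, 64))   # 50 enrollment frames
V = rng.standard_normal((80, 64))   # 80 evaluation frames
ce, cv = bidirectional_attention(E, V)
print(ce.shape, cv.shape)  # (50, 64) (80, 64)
```

In the paper's setting, these interactive features would be combined with the constant (evaluation-independent) speaker features before scoring; the combination strategy is not shown here.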


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2b82/7730222/2915e11c0ba2/sensors-20-06784-g001.jpg
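The reported 6.26% equal error rate (EER) is the standard operating point where the false-accept and false-reject rates coincide. A minimal sketch of computing EER from trial scores follows; the threshold sweep and the toy scores are illustrative, not the paper's evaluation protocol.

```python
import numpy as np

def equal_error_rate(scores, labels):
    """EER: the error rate at the threshold where the false-accept
    rate (impostor accepted) equals the false-reject rate (target
    rejected).

    scores: similarity scores, higher = more likely same speaker
    labels: 1 for target (same-speaker) trials, 0 for impostor trials
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    thresholds = np.sort(np.unique(scores))
    # False-accept rate: fraction of impostor scores at/above threshold.
    far = np.array([(scores[labels == 0] >= t).mean() for t in thresholds])
    # False-reject rate: fraction of target scores below threshold.
    frr = np.array([(scores[labels == 1] < t).mean() for t in thresholds])
    # Take the threshold where the two error rates are closest.
    i = np.argmin(np.abs(far - frr))
    return (far[i] + frr[i]) / 2

# Toy trial list: four target trials and three impostor trials.
eer = equal_error_rate([0.9, 0.8, 0.7, 0.2, 0.6, 0.3, 0.1],
                       [1,   1,   1,   1,   0,   0,   0])
print(eer)
```

Production toolkits interpolate the FAR/FRR curves rather than picking the nearest discrete threshold, which matters when trial counts are small.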

Similar Articles

Few-shot short utterance speaker verification using meta-learning.
PeerJ Comput Sci. 2023 Apr 21;9:e1276. doi: 10.7717/peerj-cs.1276. eCollection 2023.

Learning speaker-specific characteristics with a deep neural architecture.
IEEE Trans Neural Netw. 2011 Nov;22(11):1744-56. doi: 10.1109/TNN.2011.2167240. Epub 2011 Sep 26.

Partially supervised speaker clustering.
IEEE Trans Pattern Anal Mach Intell. 2012 May;34(5):959-71. doi: 10.1109/TPAMI.2011.174.
