An Interactive Framework of Cross-Lingual NLU for In-Vehicle Dialogue.

Affiliation

School of Artificial Intelligence and Big Data, Hefei University, Hefei 230061, China.

Publication

Sensors (Basel). 2023 Oct 16;23(20):8501. doi: 10.3390/s23208501.

Abstract

As globalization accelerates, the linguistic diversity and semantic complexity of in-vehicle communication are increasing. To meet the needs of speakers of different languages, this paper proposes an interactive attention-based contrastive learning framework (IABCL) for in-vehicle dialogue, aiming to effectively enhance cross-lingual natural language understanding (NLU). The framework addresses the challenges of cross-lingual interaction in in-vehicle dialogue systems and provides an effective solution. IABCL combines contrastive learning with an attention mechanism. First, contrastive learning is applied at the encoder stage: positive and negative samples let the model learn different linguistic expressions of similar meanings, which improves its cross-lingual learning ability. Second, an attention mechanism is applied at the decoder stage: by letting slots and intents attend to each other, the model learns the relationship between the two, improving NLU within languages of the same language family. In addition, this paper constructs a multilingual in-vehicle dialogue (MIvD) dataset for experimental evaluation, demonstrating the effectiveness and accuracy of the IABCL framework in cross-lingual dialogue. Compared with the latest model, IABCL improves by 2.42% on intent detection, 1.43% on slot filling, and 2.67% overall.
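The encoder-stage contrastive objective described in the abstract can be sketched as an InfoNCE-style loss: an anchor utterance embedding is pulled toward a positive sample (the same meaning expressed in another language) and pushed away from negative samples (unrelated utterances). The sketch below is a minimal NumPy illustration of that general idea, not the paper's actual implementation; the function name, temperature value, and toy embeddings are all assumptions for illustration.

```python
import numpy as np

def info_nce_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style contrastive loss over one anchor.

    anchor:    embedding of the utterance in the source language
    positive:  embedding of the same meaning in another language
    negatives: list of embeddings of unrelated utterances
    """
    def cos(a, b):
        # cosine similarity between two vectors
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # similarity of the anchor to the positive (index 0) and each negative
    logits = np.array([cos(anchor, positive)] +
                      [cos(anchor, n) for n in negatives]) / temperature
    logits -= logits.max()                    # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])                  # cross-entropy on the positive

# A well-aligned positive yields a much smaller loss than a misaligned one.
loss_good = info_nce_loss(np.array([1.0, 0.0]),
                          np.array([1.0, 0.0]),
                          [np.array([0.0, 1.0])])
loss_bad = info_nce_loss(np.array([1.0, 0.0]),
                         np.array([0.0, 1.0]),
                         [np.array([1.0, 0.0])])
```

Minimizing this loss drives embeddings of translation-equivalent utterances together while separating unrelated ones, which is the mechanism the abstract credits for the model's cross-lingual learning ability.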


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/50ae/10611118/ea1f94882861/sensors-23-08501-g001.jpg
