Suppr 超能文献




An Interactive Framework of Cross-Lingual NLU for In-Vehicle Dialogue.

Affiliations

School of Artificial Intelligence and Big Data, Hefei University, Hefei 230061, China.

Publication Info

Sensors (Basel). 2023 Oct 16;23(20):8501. doi: 10.3390/s23208501.

DOI: 10.3390/s23208501
PMID: 37896594
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10611118/
Abstract

As globalization accelerates, the linguistic diversity and semantic complexity of in-vehicle communication are increasing. To meet the needs of speakers of different languages, this paper proposes an interactive attention-based contrastive learning framework (IABCL) for in-vehicle dialogue, aiming to effectively enhance cross-lingual natural language understanding (NLU). The framework addresses the challenges of cross-lingual interaction in in-vehicle dialogue systems and provides an effective solution. IABCL is based on contrastive learning and an attention mechanism. First, contrastive learning is applied in the encoder stage: positive and negative samples allow the model to learn different linguistic expressions of similar meanings, with the main goal of improving the model's cross-lingual learning ability. Second, an attention mechanism is applied in the decoder stage: by letting slots and intents attend to each other, the model learns the relationship between the two, improving natural language understanding within languages of the same language family. In addition, this paper constructs a multilingual in-vehicle dialogue (MIvD) dataset for experimental evaluation to demonstrate the effectiveness and accuracy of the IABCL framework in cross-lingual dialogue. Compared with the latest model, IABCL improves by 2.42% on intent, 1.43% on slot, and 2.67% overall.
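The encoder-stage contrastive learning described above (positive samples expressing the same meaning in another language, negatives expressing unrelated meanings) can be sketched with a generic InfoNCE-style loss. This is a minimal pure-Python illustration, not the paper's actual implementation; the toy embedding values and the temperature are hypothetical.

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style contrastive loss: low when the positive (same meaning,
    different language) is much closer to the anchor than the negatives."""
    sims = [cosine(anchor, positive)] + [cosine(anchor, n) for n in negatives]
    logits = [s / temperature for s in sims]
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(l - m) for l in logits]
    return -math.log(exps[0] / sum(exps))

# Toy sentence embeddings (hypothetical): the positive stands in for the
# anchor utterance expressed in another language.
anchor = [0.9, 0.1, 0.0]
positive = [0.8, 0.2, 0.1]
negatives = [[0.0, 1.0, 0.0], [0.1, 0.0, 1.0]]  # unrelated utterances

loss = info_nce(anchor, positive, negatives)
```

Minimizing this loss pulls cross-lingual paraphrases together in the shared embedding space while pushing unrelated utterances apart, which is the intuition behind the cross-lingual ability the abstract attributes to the contrastive stage.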


Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/50ae/10611118/ea1f94882861/sensors-23-08501-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/50ae/10611118/bc68750a9dce/sensors-23-08501-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/50ae/10611118/f999ac5d4fad/sensors-23-08501-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/50ae/10611118/ff407cc6521c/sensors-23-08501-g004.jpg

Similar articles

1. An Interactive Framework of Cross-Lingual NLU for In-Vehicle Dialogue.
Sensors (Basel). 2023 Oct 16;23(20):8501. doi: 10.3390/s23208501.
2. HCL: Hybrid and Cooperative Contrastive Learning for Cross-Lingual Spoken Language Understanding.
IEEE Trans Pattern Anal Mach Intell. 2024 Dec;46(12):8094-8105. doi: 10.1109/TPAMI.2024.3402746. Epub 2024 Nov 6.
3. Co-Guiding for Multi-Intent Spoken Language Understanding.
IEEE Trans Pattern Anal Mach Intell. 2024 May;46(5):2965-2980. doi: 10.1109/TPAMI.2023.3336709. Epub 2024 Apr 3.
4. A Study on the Impacts of Slot Types and Training Data on Joint Natural Language Understanding in a Spanish Medication Management Assistant Scenario.
Sensors (Basel). 2022 Mar 18;22(6):2364. doi: 10.3390/s22062364.
5. Unsupervised cross-lingual model transfer for named entity recognition with contextualized word representations.
PLoS One. 2021 Sep 21;16(9):e0257230. doi: 10.1371/journal.pone.0257230. eCollection 2021.
6. Multi-level multilingual semantic alignment for zero-shot cross-lingual transfer learning.
Neural Netw. 2024 May;173:106217. doi: 10.1016/j.neunet.2024.106217. Epub 2024 Feb 27.
7. On cross-lingual retrieval with multilingual text encoders.
Inf Retr Boston. 2022;25(2):149-183. doi: 10.1007/s10791-022-09406-x. Epub 2022 Mar 7.
8. What makes a language easy to learn? A preregistered study on how systematic structure and community size affect language learnability.
Cognition. 2021 May;210:104620. doi: 10.1016/j.cognition.2021.104620. Epub 2021 Feb 8.
9. Interactive Dual Attention Network for Text Sentiment Classification.
Comput Intell Neurosci. 2020 Nov 3;2020:8858717. doi: 10.1155/2020/8858717. eCollection 2020.
10. Context-Fused Guidance for Image Captioning Using Sequence-Level Training.
Comput Intell Neurosci. 2022 Jan 5;2022:9743123. doi: 10.1155/2022/9743123. eCollection 2022.