

FewJoint: few-shot learning for joint dialogue understanding.

Authors

Hou Yutai, Wang Xinghao, Chen Cheng, Li Bohan, Che Wanxiang, Chen Zhigang

Affiliations

Research Center for Social Computing and Information Retrieval, Harbin Institute of Technology, Harbin, China.

State Key Laboratory of Cognitive Intelligence, Hefei, China.

Publication

Int J Mach Learn Cybern. 2022;13(11):3409-3423. doi: 10.1007/s13042-022-01604-9. Epub 2022 Jul 18.

DOI: 10.1007/s13042-022-01604-9
PMID: 35874622
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9294856/
Abstract

Few-shot learning (FSL) is one of the key future steps in machine learning and has attracted a lot of attention. In this paper, we focus on the FSL problem of dialogue understanding, which contains two closely related tasks: intent detection and slot filling. Dialogue understanding has been proven to benefit greatly from jointly learning the two sub-tasks. However, such joint learning becomes challenging in few-shot scenarios: on the one hand, the sparsity of samples greatly magnifies the difficulty of modeling the connection between the two tasks; on the other hand, how to jointly learn multiple tasks in the few-shot setting is still under-investigated. In response, we introduce FewJoint, the first FSL benchmark for joint dialogue understanding. FewJoint provides a new corpus with 59 different dialogue domains from a real industrial API and a code platform to ease FSL experiment set-up, which are expected to advance research in this field. Further, we find that insufficient performance in the few-shot setting often leads to noisy sharing between the two sub-tasks and disturbs joint learning. To tackle this, we guide slot filling with explicit intent information and propose a novel trust gating mechanism that blocks low-confidence intent information to ensure high-quality sharing. Besides, we introduce a Reptile-based meta-learning strategy to achieve better generalization in unseen few-shot domains. In our experiments, the proposed method brings significant improvements on two datasets and achieves new state-of-the-art performance.
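The trust gating idea in the abstract — blocking low-confidence intent information before it is shared with slot filling — can be illustrated with a minimal sketch. This is not the authors' implementation; the confidence measure (max softmax probability), the threshold value, and the use of the probability vector itself as the shared feature are all illustrative assumptions:

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def trust_gate(intent_logits, threshold=0.5):
    """Gate intent information by confidence: if the maximum softmax
    probability is below `threshold`, the intent feature is zeroed out
    (blocked) instead of being shared with the slot-filling branch."""
    probs = softmax(intent_logits)
    confidence = probs.max(axis=-1, keepdims=True)
    gate = (confidence >= threshold).astype(probs.dtype)  # 1 = trust, 0 = block
    return gate * probs  # gated intent feature passed to slot filling
```

A confident prediction (one logit much larger than the rest) passes through the gate unchanged, while a near-uniform, low-confidence prediction is suppressed to zeros, so slot filling is not disturbed by unreliable intent signals.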

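The Reptile-based meta-learning strategy mentioned in the abstract follows a simple first-order recipe: adapt a copy of the parameters to a sampled task with a few SGD steps, then move the meta-parameters toward the adapted copy. The sketch below shows that recipe under stated assumptions — the gradient function, learning rates, and step counts are placeholders, not the paper's settings:

```python
import numpy as np

def reptile_step(theta, task_grad_fn, inner_lr=0.01, inner_steps=5, meta_lr=0.1):
    """One Reptile meta-update.

    theta        -- current meta-parameters (np.ndarray)
    task_grad_fn -- returns the loss gradient on the sampled task
    """
    phi = theta.copy()
    for _ in range(inner_steps):
        phi = phi - inner_lr * task_grad_fn(phi)  # inner-loop SGD on the task
    # Interpolate meta-parameters toward the task-adapted parameters.
    return theta + meta_lr * (phi - theta)
```

Because the meta-update only interpolates between parameter vectors, no second-order gradients are needed, which is what makes Reptile attractive for adapting to unseen few-shot domains.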

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8648/9294856/8adf2767eeb0/13042_2022_1604_Fig3_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8648/9294856/34928173fc3e/13042_2022_1604_Fig1_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8648/9294856/39013d8265fb/13042_2022_1604_Fig2_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8648/9294856/3b716b8cf16c/13042_2022_1604_Figa_HTML.jpg

Similar Articles

1. FewJoint: few-shot learning for joint dialogue understanding.
Int J Mach Learn Cybern. 2022;13(11):3409-3423. doi: 10.1007/s13042-022-01604-9. Epub 2022 Jul 18.
2. Multi-Learner Based Deep Meta-Learning for Few-Shot Medical Image Classification.
IEEE J Biomed Health Inform. 2023 Jan;27(1):17-28. doi: 10.1109/JBHI.2022.3215147. Epub 2023 Jan 5.
3. Meta-Prototypical Learning for Domain-Agnostic Few-Shot Recognition.
IEEE Trans Neural Netw Learn Syst. 2022 Nov;33(11):6990-6996. doi: 10.1109/TNNLS.2021.3083650. Epub 2022 Oct 27.
4. A comparison of few-shot and traditional named entity recognition models for medical text.
Proc (IEEE Int Conf Healthc Inform). 2022 Jun;2022:84-89. doi: 10.1109/ichi54592.2022.00024. Epub 2022 Sep 8.
5. Few-shot learning based on deep learning: A survey.
Math Biosci Eng. 2024 Jan;21(1):679-711. doi: 10.3934/mbe.2024029. Epub 2022 Dec 19.
6. A Multitask Latent Feature Augmentation Method for Few-Shot Learning.
IEEE Trans Neural Netw Learn Syst. 2024 May;35(5):6976-6990. doi: 10.1109/TNNLS.2022.3213576. Epub 2024 May 2.
7. Few-Shot Learning With a Strong Teacher.
IEEE Trans Pattern Anal Mach Intell. 2024 Mar;46(3):1425-1440. doi: 10.1109/TPAMI.2022.3160362. Epub 2024 Feb 6.
8. How to Trust Unlabeled Data? Instance Credibility Inference for Few-Shot Learning.
IEEE Trans Pattern Anal Mach Intell. 2022 Oct;44(10):6240-6253. doi: 10.1109/TPAMI.2021.3086140. Epub 2022 Sep 14.
9. Generalized Meta-FDMixup: Cross-Domain Few-Shot Learning Guided by Labeled Target Data.
IEEE Trans Image Process. 2022;31:7078-7090. doi: 10.1109/TIP.2022.3219237. Epub 2022 Nov 14.
10. A few-shot disease diagnosis decision making model based on meta-learning for general practice.
Artif Intell Med. 2024 Jan;147:102718. doi: 10.1016/j.artmed.2023.102718. Epub 2023 Nov 17.