CHEER: Rich Model Helps Poor Model via Knowledge Infusion.

Author Information

Cao Xiao, Trong Nghia Hoang, Shenda Hong, Tengfei Ma, Jimeng Sun

Affiliations

Analytics Center of Excellence, IQVIA, Cambridge, MA, 02139.

MIT-IBM Watson AI Lab, Cambridge, MA, 02142.

Publication Information

IEEE Trans Knowl Data Eng. 2022 Feb;34(2):531-543. doi: 10.1109/tkde.2020.2989405. Epub 2020 Apr 22.

Abstract

There is a growing interest in applying deep learning (DL) to healthcare, driven by the availability of data with multiple feature channels in data-rich environments (e.g., intensive care units). However, in many other practical situations, we can only access data with far fewer feature channels in data-poor environments (e.g., at home), which often results in predictive models with poor performance. How can we boost the performance of models learned in such data-poor environments by leveraging knowledge extracted from existing models trained in a related data-rich environment? To address this question, we develop a knowledge infusion framework named CHEER that can succinctly summarize the rich model's knowledge into transferable representations, which can be incorporated into the poor model to improve its performance. The infused model is analyzed theoretically and evaluated empirically on several datasets. Our empirical results showed that CHEER outperformed baselines by 5.60% to 46.80% in terms of the macro-F1 score on multiple physiological datasets.
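
The abstract's core mechanism (summarize a rich model's knowledge into a transferable representation and infuse it into a poor model) can be made concrete with a small teacher-student sketch. The code below is an illustrative assumption, not CHEER's published algorithm: the encoder architecture, the MSE representation-matching loss, the alpha weight, and names like infusion_step are all hypothetical.

    # Illustrative teacher-student sketch (assumed, not CHEER's actual method):
    # a "rich" model trained on many feature channels guides a "poor" model
    # that only observes a subset of those channels.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    RICH_CHANNELS, POOR_CHANNELS, HIDDEN, N_CLASSES = 16, 4, 64, 5

    class Encoder(nn.Module):
        """Maps (batch, channels, time) signals to a fixed-size representation."""
        def __init__(self, channels):
            super().__init__()
            self.conv = nn.Conv1d(channels, HIDDEN, kernel_size=7, padding=3)
            self.head = nn.Linear(HIDDEN, N_CLASSES)

        def forward(self, x):
            h = F.relu(self.conv(x)).mean(dim=-1)  # global average pool over time
            return h, self.head(h)

    teacher = Encoder(RICH_CHANNELS)  # assumed pretrained in the data-rich setting
    student = Encoder(POOR_CHANNELS)
    optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
    alpha = 0.5  # weight of the representation-matching (infusion) term

    def infusion_step(x_rich, x_poor, y):
        """One step: task loss on poor-channel data, plus a penalty pulling
        the student's representation toward the frozen teacher's."""
        with torch.no_grad():
            h_teacher, _ = teacher(x_rich)  # the "transferable representation"
        h_student, logits = student(x_poor)
        loss = F.cross_entropy(logits, y) + alpha * F.mse_loss(h_student, h_teacher)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

    # Toy usage: the poor view is a channel subset of the rich view.
    x_rich = torch.randn(8, RICH_CHANNELS, 128)
    x_poor = x_rich[:, :POOR_CHANNELS, :]
    y = torch.randint(0, N_CLASSES, (8,))
    print(infusion_step(x_rich, x_poor, y))

The design point this illustrates is the paired-view setup: during training, the same example is available in both a rich view (many channels, e.g., ICU sensors) and a poor view (a channel subset, e.g., home devices), so the student can be pulled toward the frozen teacher's representation while still optimizing its own task loss.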

Similar Articles

1. CHEER: Rich Model Helps Poor Model via Knowledge Infusion.
   IEEE Trans Knowl Data Eng. 2022 Feb;34(2):531-543. doi: 10.1109/tkde.2020.2989405. Epub 2020 Apr 22.
6. Knowledge-Routed Visual Question Reasoning: Challenges for Deep Representation Embedding.
   IEEE Trans Neural Netw Learn Syst. 2022 Jul;33(7):2758-2767. doi: 10.1109/TNNLS.2020.3045034. Epub 2022 Jul 6.
