

Dual-Teacher++: Exploiting Intra-Domain and Inter-Domain Knowledge With Reliable Transfer for Cardiac Segmentation.

Publication Information

IEEE Trans Med Imaging. 2021 Oct;40(10):2771-2782. doi: 10.1109/TMI.2020.3038828. Epub 2021 Sep 30.

Abstract

Annotation scarcity is a long-standing problem in medical image analysis. To make efficient use of limited annotations, semi-supervised learning additionally exploits abundant unlabeled data, while domain adaptation leverages well-established cross-modality data. In this paper, we explore the feasibility of concurrently leveraging both unlabeled data and cross-modality data for annotation-efficient cardiac segmentation. To this end, we propose a semi-supervised domain adaptation framework, namely Dual-Teacher++. Besides directly learning from limited labeled target-domain data (e.g., CT) via a student model, as adopted in previous literature, we design novel dual teacher models: an inter-domain teacher model that explores cross-modality priors from the source domain (e.g., MR), and an intra-domain teacher model that investigates the knowledge within the unlabeled target domain. In this way, the dual teacher models transfer the acquired inter- and intra-domain knowledge to the student model for further integration and exploitation. Moreover, to encourage reliable dual-domain knowledge transfer, we enhance inter-domain knowledge transfer on samples with higher similarity to the target domain after appearance alignment, and strengthen intra-domain knowledge transfer on unlabeled target data with higher prediction confidence. The student model can thereby obtain reliable dual-domain knowledge and yield improved performance on target-domain data. We extensively evaluated our method on the MM-WHS 2017 challenge dataset. The experiments demonstrate the superiority of our framework over other semi-supervised learning and domain adaptation methods. Moreover, performance gains are achieved in both adaptation directions, i.e., from MR to CT and from CT to MR. Our code will be available at https://github.com/kli-lalala/Dual-Teacher-.
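
The abstract outlines the core training scheme: a student model learns from the few labeled target-domain images, while an inter-domain teacher (applied to appearance-aligned source data) and an intra-domain teacher (applied to unlabeled target data) each transfer knowledge, weighted by appearance similarity and prediction confidence respectively. Below is a minimal PyTorch-style sketch of such a combined objective; all names (student, inter_teacher, intra_teacher, sim_weight, the loss weights) are illustrative assumptions and are not taken from the authors' released code.

# Hypothetical sketch of the dual-teacher objective described in the abstract.
# Model and variable names are assumptions for illustration only.
import torch
import torch.nn.functional as F

def dual_teacher_step(student, inter_teacher, intra_teacher,
                      labeled_tgt, labels, translated_src, unlabeled_tgt,
                      sim_weight, lambda_inter=1.0, lambda_intra=1.0):
    """One training step combining supervised, inter-domain, and intra-domain terms.

    sim_weight: per-sample weight in [0, 1] reflecting how similar each
    appearance-aligned source image is to the target domain (assumption).
    """
    # 1) Supervised loss on the few labeled target-domain images (e.g., CT).
    sup_loss = F.cross_entropy(student(labeled_tgt), labels)

    # 2) Inter-domain transfer: follow the inter-domain teacher's predictions on
    #    appearance-aligned source images, emphasizing samples that look more
    #    like the target domain.
    with torch.no_grad():
        inter_pseudo = inter_teacher(translated_src).argmax(dim=1)
    inter_per_sample = F.cross_entropy(student(translated_src), inter_pseudo,
                                       reduction='none').mean(dim=(1, 2))
    inter_loss = (sim_weight * inter_per_sample).mean()

    # 3) Intra-domain transfer: consistency with the intra-domain teacher on
    #    unlabeled target images, down-weighting low-confidence pixels.
    with torch.no_grad():
        intra_prob = intra_teacher(unlabeled_tgt).softmax(dim=1)
        conf, intra_pseudo = intra_prob.max(dim=1)  # per-pixel confidence and label
    intra_loss = (conf * F.cross_entropy(student(unlabeled_tgt), intra_pseudo,
                                         reduction='none')).mean()

    return sup_loss + lambda_inter * inter_loss + lambda_intra * intra_loss

def ema_update(teacher, student, alpha=0.99):
    """Update a teacher as an exponential moving average of the student (a common
    choice for the intra-domain teacher in mean-teacher-style frameworks)."""
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.data.mul_(alpha).add_(s_param.data, alpha=1 - alpha)

In practice the returned loss would be backpropagated through the student only, with the teachers either frozen (inter-domain) or updated via the EMA rule (intra-domain); the exact weighting and alignment procedure used by the paper may differ from this sketch.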

