
MHD-Net: Memory-Aware Hetero-Modal Distillation Network for Thymic Epithelial Tumor Typing With Missing Pathology Modality.

Publication Information

IEEE J Biomed Health Inform. 2024 May;28(5):3003-3014. doi: 10.1109/JBHI.2024.3376462. Epub 2024 May 6.

Abstract

Fusing multi-modal radiology and pathology data with complementary information can improve the accuracy of tumor typing. However, collecting pathology data is difficult: it is costly and sometimes only obtainable after surgery, which limits the application of multi-modal methods in diagnosis. To address this problem, we propose to learn comprehensively from multi-modal radiology-pathology data during training while using only uni-modal radiology data at test time. Concretely, we propose a Memory-aware Hetero-modal Distillation Network (MHD-Net), which distills well-learned multi-modal knowledge from the teacher to the student with the assistance of memory. In the teacher, to tackle the challenge of hetero-modal feature fusion, we propose a novel spatial-differentiated hetero-modal fusion module (SHFM) that models spatial-specific tumor information correlations across modalities. As only radiology data is accessible to the student, we store pathology features in the proposed contrast-boosted typing memory module (CTMM), which performs type-wise memory updating and stage-wise contrastive memory boosting to ensure the effectiveness and generalization of memory items. In the student, to improve cross-modal distillation, we propose a multi-stage memory-aware distillation (MMD) scheme that reads memory-aware pathology features from the CTMM to remedy missing modal-specific information. Furthermore, we construct a Radiology-Pathology Thymic Epithelial Tumor (RPTET) dataset containing paired CT and WSI images with annotations. Experiments on the RPTET and CPTAC-LUAD datasets demonstrate that MHD-Net significantly improves tumor typing and outperforms existing multi-modal methods in missing-modality situations.
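The core idea of the abstract — storing pathology features in a type-wise memory during training and reading them back with radiology-only queries at test time — can be illustrated with a minimal sketch. This is a hypothetical simplification, not the paper's CTMM: the class name, momentum update, and cosine-similarity soft read are illustrative assumptions standing in for the full type-wise updating and contrastive boosting described above.

```python
import numpy as np

class TypingMemory:
    """Hypothetical sketch of a type-wise memory bank: one prototype
    vector per tumor type, updated with momentum from pathology features
    during training, and read by similarity at test time when only
    radiology features are available."""

    def __init__(self, num_types, dim, momentum=0.9, seed=0):
        rng = np.random.default_rng(seed)
        self.items = rng.standard_normal((num_types, dim))
        self.momentum = momentum

    def update(self, pathology_feat, type_id):
        # Type-wise momentum update: blend the stored prototype with a
        # new pathology feature of the same tumor type.
        self.items[type_id] = (self.momentum * self.items[type_id]
                               + (1 - self.momentum) * pathology_feat)

    def read(self, radiology_feat, temperature=1.0):
        # Soft read: weight the stored prototypes by cosine similarity
        # to the radiology query, producing a "memory-aware" pathology
        # feature the student can use in place of the missing modality.
        q = radiology_feat / np.linalg.norm(radiology_feat)
        keys = self.items / np.linalg.norm(self.items, axis=1, keepdims=True)
        sim = keys @ q / temperature
        weights = np.exp(sim - sim.max())
        weights /= weights.sum()
        return weights @ self.items

# Training phase: pathology features populate the memory per type.
memory = TypingMemory(num_types=3, dim=8)
memory.update(np.ones(8), type_id=0)

# Test phase: a radiology feature alone recalls a pathology-like feature.
recalled = memory.read(np.ones(8))
print(recalled.shape)  # (8,)
```

The soft read returns a convex combination of type prototypes, so the recalled feature stays in the span of stored pathology features rather than being an arbitrary vector.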

