Lyu Chaoyi, Zhao Lu, Xie Yuan, Zhao Wangyuan, Zhou Yufu, Ting Hua Nong, Zhang Puming, Zhao Jun
The School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, People's Republic of China.
The School of Biomedical Engineering, University Malaya, Kuala Lumpur, Malaysia.
Biomed Phys Eng Express. 2026 Feb 3;12(1). doi: 10.1088/2057-1976/ae3b46.
The rapid development of deep learning-based computational pathology and genomics has demonstrated the significant promise of effectively integrating whole slide images (WSIs) and genomic data for cancer survival prediction. However, the substantial heterogeneity between pathological and genomic features makes it challenging to explore complex cross-modal relationships and to construct comprehensive patient representations. To address this, we propose the Information Compression-based Multimodal Confidence-guided Fusion Network (iMCN). The framework is built around two key modules. First, the Adaptive Pathology Information Compression (APIC) module employs learnable information centers to dynamically cluster image regions, removing redundant information while preserving discriminative survival-related patterns. Second, the Confidence-guided Multimodal Fusion (CMF) module uses a learned sub-network to estimate the confidence of each modality's representation, enabling dynamic weighted fusion that prioritizes the most reliable features in each case. Evaluated on the TCGA-LUAD and TCGA-BRCA cohorts, iMCN achieved average concordance index (C-index) values of 0.691 and 0.740, respectively, outperforming existing state-of-the-art methods by an absolute improvement of 1.65%. Qualitatively, the model generates interpretable heatmaps that localize regions of high association between specific morphological structures (e.g., tumor cell nests) and functional genomic pathways (e.g., oncogenesis), offering biological insight into genomic-pathologic linkages. Moreover, correlation analysis reveals that tissue heterogeneity influences the optimal retention rate differently across cancer types, with higher-heterogeneity tumors (e.g., LUAD) benefiting more from aggressive information compression. In conclusion, iMCN significantly advances multimodal survival analysis by introducing a principled framework for information compression and confidence-based fusion.
Beyond its predictive performance, the model's ability to elucidate the interplay between tissue morphology and molecular biology enhances its value as a tool for translational cancer research.
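The two mechanisms described in the abstract can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the center count, the soft-assignment clustering used to stand in for APIC's learnable information centers, and the placeholder confidence sub-network (`conf_net`) are all illustrative assumptions; in the actual model these components are learned end-to-end.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def compress_patches(patches, centers, tau=1.0):
    """Soft-cluster N patch embeddings onto K centers (APIC-style sketch).

    Returns K compressed tokens: each center's assignment-weighted mean
    of the patches, discarding per-patch redundancy while keeping a
    small set of aggregate pathology representations.
    """
    # negative squared distance -> similarity logits, shape (N, K)
    d2 = ((patches[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    assign = softmax(-d2 / tau, axis=1)
    # normalize assignments per center, then aggregate patches
    weights = assign / (assign.sum(0, keepdims=True) + 1e-8)
    return weights.T @ patches                      # (K, D)

def confidence_fusion(reps, conf_net):
    """Weight each modality representation by a confidence score (CMF-style).

    `conf_net` maps a representation to a scalar; softmax turns the
    scores into fusion weights so the more reliable modality dominates.
    """
    scores = np.array([conf_net(r) for r in reps])  # one scalar per modality
    w = softmax(scores)
    fused = (w[:, None] * np.stack(reps)).sum(0)
    return fused, w

# Toy stand-ins: 50 patch embeddings of dim 16, 8 information centers.
patches = rng.normal(size=(50, 16))
centers = rng.normal(size=(8, 16))                  # learnable in the real model
tokens = compress_patches(patches, centers)

path_rep = tokens.mean(0)                           # pooled pathology representation
gene_rep = rng.normal(size=16)                      # mock genomic representation
conf_net = lambda r: float(np.tanh(r).mean())       # placeholder confidence sub-network
fused, w = confidence_fusion([path_rep, gene_rep], conf_net)
```

The key design point the abstract emphasizes is that both the retention behavior of the compression step and the fusion weights are case-dependent: here the weights `w` change with the input representations rather than being fixed hyperparameters.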