
Multimodal multi-instance evidence fusion neural networks for cancer survival prediction.

Author Information

Luo Hui, Huang Jiashuang, Ju Hengrong, Zhou Tianyi, Ding Weiping

Affiliations

Faculty of Data Science, City University of Macau, Macau, 999078, China.

School of Information and Management, Guangxi Medical University, Nanning, 530021, China.

Publication Information

Sci Rep. 2025 Mar 26;15(1):10470. doi: 10.1038/s41598-025-93770-3.

Abstract

Accurate cancer survival prediction plays a crucial role in helping clinicians formulate treatment plans. Multimodal data, such as histopathological images, genomic data, and clinical information, provide complementary and comprehensive information that can significantly improve the accuracy of this task. However, despite some promising results, existing methods still exhibit two significant limitations: they fail to effectively exploit global context, and they overlook the uncertainty of the individual modalities, which may lead to unreliable predictions. In this study, we propose a multimodal multi-instance evidence fusion neural network for cancer survival prediction, called M2EF-NNs. Specifically, to better capture global information from images, we employ a pre-trained vision transformer to extract patch feature embeddings from histopathological images. Additionally, we are the first to apply Dempster-Shafer evidence theory to the cancer survival prediction task, introducing subjective logic to estimate the uncertainty of each modality. We then dynamically adjust the weights of the class probability distribution after multimodal fusion, based on the evidence estimated from the fused multimodal data, to achieve trusted survival prediction. Finally, experimental results on three cancer datasets demonstrate that our method significantly improves cancer survival prediction in terms of both the overall C-index and AUC, validating the model's reliability.
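The evidential step described above (mapping per-modality evidence to subjective-logic opinions via a Dirichlet distribution, then combining modalities with a reduced form of Dempster's rule) can be sketched as follows. This is a minimal illustration of the general technique, not the authors' implementation; the function names and toy evidence values are assumptions for the example.

```python
def evidence_to_opinion(evidence):
    """Map non-negative evidence for K classes to a subjective-logic opinion.

    With Dirichlet parameters alpha_k = e_k + 1 and strength S = sum(alpha):
    belief b_k = e_k / S, uncertainty u = K / S, so sum(b) + u == 1.
    """
    K = len(evidence)
    S = sum(e + 1.0 for e in evidence)          # Dirichlet strength
    belief = [e / S for e in evidence]
    u = K / S
    return belief, u

def ds_combine(b1, u1, b2, u2):
    """Reduced Dempster's rule for two opinions over the same K classes.

    Conflict C = sum_{i != j} b1_i * b2_j; the combined masses are
    b_k = (b1_k*b2_k + b1_k*u2 + b2_k*u1) / (1 - C), u = u1*u2 / (1 - C).
    """
    C = sum(b1) * sum(b2) - sum(x * y for x, y in zip(b1, b2))
    norm = 1.0 - C
    b = [(x * y + x * u2 + y * u1) / norm for x, y in zip(b1, b2)]
    u = (u1 * u2) / norm
    return b, u

# Toy example: two modalities produce evidence for two risk classes.
b_img, u_img = evidence_to_opinion([8.0, 2.0])  # confident image modality
b_gen, u_gen = evidence_to_opinion([1.0, 1.0])  # uncertain genomic modality
b_fused, u_fused = ds_combine(b_img, u_img, b_gen, u_gen)
```

Note how the fusion behaves: the confident modality dominates the fused belief, and the fused uncertainty is lower than that of either modality alone, which is the property that lets the model down-weight unreliable modalities at prediction time.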


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/da9d/11947308/557707c08151/41598_2025_93770_Fig1_HTML.jpg
