

Adaptive segmentation-to-survival learning for survival prediction from multi-modality medical images.

Authors

Meng Mingyuan, Gu Bingxin, Fulham Michael, Song Shaoli, Feng Dagan, Bi Lei, Kim Jinman

Affiliations

School of Computer Science, The University of Sydney, Sydney, Australia.

Institute of Translational Medicine, Shanghai Jiao Tong University, Shanghai, China.

Publication

NPJ Precis Oncol. 2024 Oct 14;8(1):232. doi: 10.1038/s41698-024-00690-y.

Abstract

Early survival prediction is vital for the clinical management of cancer patients, as tumors can be better controlled with personalized treatment planning. Traditional survival prediction methods are based on radiomics feature engineering and/or clinical indicators (e.g., cancer staging). Recently, deep learning-based survival prediction models have achieved state-of-the-art performance in end-to-end survival prediction by exploiting deep features derived from medical images. However, existing models rely heavily on the prognostic information within primary tumors and cannot effectively leverage out-of-tumor prognostic information characterizing local tumor metastasis and adjacent tissue invasion. Existing models are also sub-optimal in leveraging multi-modality medical images, as they rely on empirically designed fusion strategies to integrate multi-modality information; these fusion strategies are pre-defined based on domain-specific human prior knowledge and are inherently limited in adaptability. Here, we present an Adaptive Multi-modality Segmentation-to-Survival model (AdaMSS) for survival prediction from multi-modality medical images. AdaMSS can self-adapt its fusion strategy based on training data and can also adapt its focus regions to capture prognostic information outside the primary tumors. Extensive experiments with two large cancer datasets (1380 patients from nine medical centers) show that AdaMSS surpasses state-of-the-art survival prediction performance (C-index: 0.804 and 0.757), demonstrating the potential to facilitate personalized treatment planning.
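The C-index (concordance index) values reported above measure how well a model's predicted risk scores rank patients by observed survival time. As a point of reference only, here is a minimal sketch of Harrell's C-index; the function and variable names are our own illustration, not code from the AdaMSS paper:

```python
def concordance_index(times, events, risk_scores):
    """Harrell's C-index: the fraction of comparable patient pairs whose
    predicted risks are ordered consistently with observed survival times.

    times: observed follow-up times; events: 1 if the event (death) was
    observed, 0 if censored; risk_scores: higher score = higher risk.
    """
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # A pair (i, j) is comparable only if patient i's event was
            # observed and occurred before patient j's follow-up ended.
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1      # correctly ordered pair
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5    # tied risks count as half
    return concordant / comparable

# Toy example: three patients with observed events; perfectly ordered
# risks (shorter survival -> higher risk) give a C-index of 1.0.
times = [5.0, 10.0, 15.0]
events = [1, 1, 1]
risks = [0.9, 0.5, 0.1]
print(concordance_index(times, events, risks))  # -> 1.0
```

A C-index of 0.5 corresponds to random ranking and 1.0 to perfect ranking, so the reported 0.804 and 0.757 indicate substantially better-than-chance discrimination.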

Fig. 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ff16/11473954/dcc09a3407cb/41698_2024_690_Fig1_HTML.jpg
