Multi-view sparse attention network for glioma survival risk prediction.

Author Information

Li Xinyu, Kuang Hulin, Cheng Jianhong, Luo Yi, He Mengshen, Wang Jianxin

Affiliations

School of Computer Science and Engineering, Central South University, Changsha, China.

Institute of Guizhou Aerospace Measuring and Testing Technology, Guiyang, China.

Publication Information

Med Phys. 2025 Jun;52(6):4416-4428. doi: 10.1002/mp.17774. Epub 2025 Mar 25.

Abstract

BACKGROUND

Predicting the survival risk of gliomas is vital for personalized treatment planning. Recent survival risk prediction methods rely primarily on histopathology and genomics, which are invasive and costly. However, predicting survival risk from non-invasive magnetic resonance imaging (MRI), or from handcrafted radiomics (HCR) and clinical features, remains challenging. The fusion of multi-view, non-invasive information holds the potential to improve risk prediction. Additionally, existing survival risk prediction methods typically rely on the Cox partial log-likelihood loss as their main optimization criterion, which may overlook the survival rankings among gliomas, leading to discrepancies between predicted risks and actual outcomes.
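To make the optimization criterion above concrete, the following is a minimal sketch of the negative Cox partial log-likelihood loss, assuming a PyTorch implementation; the tensor names and the simple tie-free formulation are illustrative and not taken from the paper.

```python
import torch

def cox_partial_log_likelihood_loss(risk_scores, times, events):
    """Negative Cox partial log-likelihood (illustrative sketch, no tie handling).

    risk_scores: (N,) predicted log-hazard scores.
    times:       (N,) observed survival or censoring times.
    events:      (N,) 1 if the event was observed, 0 if censored.
    """
    # Sort by descending time so the risk set of sample i is samples [0..i].
    order = torch.argsort(times, descending=True)
    risk_scores = risk_scores[order]
    events = events[order].float()

    # Log of the cumulative sum of exp(risk) gives the log-sum over each risk set.
    log_risk_set = torch.logcumsumexp(risk_scores, dim=0)

    # The partial log-likelihood is accumulated only over uncensored samples.
    pll = (risk_scores - log_risk_set) * events
    return -pll.sum() / events.sum().clamp(min=1)
```

Because this loss only compares each uncensored sample against its risk set, it does not directly penalize mis-ordered risk predictions between arbitrary patient pairs, which motivates the ranking loss described in the Methods.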

PURPOSE

This study aims to propose a non-invasive multi-view survival risk prediction network for gliomas to meet the clinical demand for efficient prognosis.

METHODS

This paper proposes a multi-view survival risk prediction network that takes multi-view data as input: 3D multi-modal MRI, 2D images projected from the MRI, 1D handcrafted radiomics (HCR) features computed from the MRI, and clinical information. In the feature encoder for each view, we design a Pooling and Sparse Attention-based Transformer to extract risk-related features. We propose a Multi-View Complementary Attention Fusion module based on local and global attention to capture complementary features across views, and we train a Cox model for survival risk prediction. We design a similarity loss based on cosine similarity to ensure that the features extracted from different views remain distinct, and a pairwise ranking loss to address the Cox model's difficulty in discerning survival differences.
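The abstract does not give the exact formulations, so the sketch below only illustrates, under stated assumptions, how a cosine-similarity loss between view features and a pairwise ranking loss over survival times are commonly implemented (PyTorch assumed; the function names, margin, and pair-selection details are hypothetical).

```python
import torch
import torch.nn.functional as F

def view_similarity_loss(view_features):
    """Cosine-similarity penalty that pushes features of different views apart,
    encouraging each view to contribute distinct, complementary information.

    view_features: list of (N, D) tensors, one per view.
    """
    loss, num_pairs = 0.0, 0
    for i in range(len(view_features)):
        for j in range(i + 1, len(view_features)):
            # Penalize the absolute cosine similarity between the two views.
            cos = F.cosine_similarity(view_features[i], view_features[j], dim=1)
            loss = loss + cos.abs().mean()
            num_pairs += 1
    return loss / max(num_pairs, 1)


def pairwise_ranking_loss(risk_scores, times, events, margin=0.1):
    """Hinge-style ranking loss over comparable patient pairs: if patient i had
    an observed event earlier than patient j's time, the predicted risk of i
    should exceed the risk of j by at least `margin`.
    """
    loss, num_pairs = 0.0, 0
    n = risk_scores.shape[0]
    for i in range(n):
        if events[i] != 1:
            continue  # only uncensored samples can anchor a comparable pair
        for j in range(n):
            if times[i] < times[j]:
                loss = loss + F.relu(margin - (risk_scores[i] - risk_scores[j]))
                num_pairs += 1
    return loss / max(num_pairs, 1)
```

In this reading, the similarity loss discourages redundant features across views, while the ranking loss explicitly penalizes pairs whose predicted risks contradict their observed survival order, complementing the Cox objective.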

RESULTS

The experimental results demonstrate that our method performs well in glioma survival risk prediction. It achieves C-indices of 75.35% and 74.47% on the publicly available UCSF-PDGM and BraTS2020 datasets, respectively, surpassing other single-view and multi-view methods. Additionally, the proposed method has the fewest trainable parameters among the compared MRI-based methods, with only 29.07 million, achieving a good trade-off between performance and parameter efficiency.
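For reference, the concordance index (C-index) reported above measures how often predicted risks order patient pairs consistently with their observed survival. The sketch below shows a standard definition for right-censored data; it is a generic illustration, not code from the paper.

```python
import numpy as np

def concordance_index(risk_scores, times, events):
    """Fraction of comparable pairs whose predicted risks are ordered
    consistently with observed survival times (ties count as 0.5).

    A pair (i, j) is comparable when the patient with the shorter time had an
    observed event; a higher risk should then correspond to the shorter time.
    """
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        if events[i] != 1:
            continue
        for j in range(n):
            if times[i] < times[j]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    return concordant / comparable if comparable else float("nan")
```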

CONCLUSIONS

The proposed method effectively fuses multi-view, non-invasive information, offering advantages in survival risk prediction and advancing research on glioma prognosis.

