Suppr 超能文献



Multi-view sparse attention network for glioma survival risk prediction.

Authors

Li Xinyu, Kuang Hulin, Cheng Jianhong, Luo Yi, He Mengshen, Wang Jianxin

Affiliations

School of Computer Science and Engineering, Central South University, Changsha, China.

Institute of Guizhou Aerospace Measuring and Testing Technology, Guiyang, China.

Publication

Med Phys. 2025 Jun;52(6):4416-4428. doi: 10.1002/mp.17774. Epub 2025 Mar 25.

DOI: 10.1002/mp.17774
PMID: 40133764
Abstract

BACKGROUND

Predicting the survival risk of gliomas is vital for personalized treatment planning. The latest survival risk prediction methods rely primarily on histopathology and genomics, which are invasive and costly. Predicting survival risk from non-invasive Magnetic Resonance Imaging (MRI), handcrafted radiomics (HCR) features, and clinical information, however, remains a challenge; fusing such multi-view, non-invasive information holds the potential to improve risk prediction. Additionally, existing survival risk prediction methods typically depend on the Cox partial log-likelihood loss as their main optimization criterion, which may overlook the survival rankings among glioma patients, leading to discrepancies between predicted risk and actual outcomes.
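The Cox partial log-likelihood mentioned above is the standard objective in deep survival models. As a point of reference, a generic textbook formulation of its negative form (this is a minimal NumPy sketch, not code from the paper; the function name and the no-ties simplification are ours) is:

```python
import numpy as np

def neg_cox_partial_log_likelihood(risk, time, event):
    """Negative Cox partial log-likelihood (no tie handling).

    risk  : predicted risk scores, shape (n,)
    time  : observed survival / censoring times, shape (n,)
    event : 1 if the event was observed, 0 if censored, shape (n,)
    """
    order = np.argsort(-time)                 # sort by descending time
    risk, event = risk[order], event[order]
    # After sorting, the risk set {j : t_j >= t_i} is exactly samples 0..i,
    # so a cumulative log-sum-exp gives log sum_{j in risk set} exp(risk_j).
    log_risk_set = np.logaddexp.accumulate(risk)
    log_lik = np.sum((risk - log_risk_set) * event)
    return -log_lik / max(event.sum(), 1)
```

Because the loss only compares each event against its risk set, it is invariant to shifts of the risk scores, which is one reason ranking among patients can still be poorly resolved.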

PURPOSE

This study aims to propose a non-invasive multi-view survival risk prediction network for gliomas to meet the clinical demand for efficient prognosis.

METHODS

This paper proposes a multi-view survival risk prediction network that takes multi-view data as input, including 3D multi-modal MRIs, 2D images projected from the MRIs, 1D HCR features computed from the MRIs, and clinical information. In the feature encoder for each view, we design a Pooling and Sparse Attention-based Transformer to extract risk-related features. We propose a Multi-View Complementary Attention Fusion module based on local and global attention to capture complementary features across views, and train a Cox model for survival risk prediction. We design a similarity loss based on cosine similarity to ensure the uniqueness of the features extracted from different views, and a pairwise ranking loss to address the Cox model's difficulty in discerning survival differences.
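The abstract names these two auxiliary losses but gives no formulas, so the sketch below is only one plausible reading: the function names, the absolute-cosine penalty, and the hinge margin are illustrative assumptions, not the authors' definitions.

```python
import numpy as np

def cosine_decorrelation_loss(feat_a, feat_b, eps=1e-8):
    """Penalize per-sample cosine similarity between two views' features,
    pushing each view toward unique (decorrelated) representations."""
    a = feat_a / (np.linalg.norm(feat_a, axis=1, keepdims=True) + eps)
    b = feat_b / (np.linalg.norm(feat_b, axis=1, keepdims=True) + eps)
    return float(np.mean(np.abs(np.sum(a * b, axis=1))))

def pairwise_ranking_loss(risk, time, event, margin=0.1):
    """Hinge loss over comparable pairs: a patient with an observed event at
    time t_i should out-rank (get higher risk than) anyone alive after t_i."""
    losses = []
    for i in range(len(risk)):
        if not event[i]:
            continue                    # anchor must be an observed event
        for j in range(len(risk)):
            if time[j] > time[i]:       # j outlived i -> want risk[i] > risk[j]
                losses.append(max(0.0, margin - (risk[i] - risk[j])))
    return float(np.mean(losses)) if losses else 0.0
```

A ranking term of this kind directly penalizes mis-ordered patient pairs, which the shift-invariant Cox objective alone does not enforce.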

RESULTS

The experimental results demonstrate that our method performs well in glioma survival risk prediction, achieving C-indices of 75.35% and 74.47% on the publicly available UCSF-PDGM and BraTS2020 datasets, respectively, surpassing other single-view and multi-view methods. Additionally, the proposed method has the fewest trainable parameters among the MRI-based methods compared, at only 29.07 million, achieving a good trade-off between performance and parameter efficiency.
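The C-index reported here is Harrell's concordance index: the fraction of comparable patient pairs in which the model assigns the higher risk to the patient who fails first. A small reference implementation (ours, not the authors' evaluation code):

```python
import numpy as np

def concordance_index(risk, time, event):
    """Harrell's C-index; ties in predicted risk count as half-concordant.
    Returns a value in [0, 1], where 0.5 is random and 1.0 is perfect."""
    concordant, comparable = 0.0, 0.0
    for i in range(len(risk)):
        if not event[i]:
            continue                    # i must have an observed event
        for j in range(len(risk)):
            if time[j] > time[i]:       # comparable pair: i failed before j
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1.0
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / comparable if comparable else float("nan")
```

A C-index of 75.35% thus means that in roughly three out of four comparable pairs, the model ranked the earlier-failing patient as higher risk.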

CONCLUSIONS

The proposed method effectively fuses multi-view non-invasive information, offering advantages in survival risk prediction and advancing research on glioma prognosis.

Similar Articles

1. Multi-view sparse attention network for glioma survival risk prediction.
   Med Phys. 2025 Jun;52(6):4416-4428. doi: 10.1002/mp.17774. Epub 2025 Mar 25.
2. A 3D hierarchical cross-modality interaction network using transformers and convolutions for brain glioma segmentation in MR images.
   Med Phys. 2024 Nov;51(11):8371-8389. doi: 10.1002/mp.17354. Epub 2024 Aug 13.
3. Attention-guided multi-scale context aggregation network for multi-modal brain glioma segmentation.
   Med Phys. 2023 Dec;50(12):7629-7640. doi: 10.1002/mp.16452. Epub 2023 May 7.
4. [Fully Automatic Glioma Segmentation Algorithm of Magnetic Resonance Imaging Based on 3D-UNet With More Global Contextual Feature Extraction: An Improvement on Insufficient Extraction of Global Features].
   Sichuan Da Xue Xue Bao Yi Xue Ban. 2024 Mar 20;55(2):447-454. doi: 10.12182/20240360208.
5. Multi-Channel 3D Deep Feature Learning for Survival Time Prediction of Brain Tumor Patients Using Multi-Modal Neuroimages.
   Sci Rep. 2019 Jan 31;9(1):1103. doi: 10.1038/s41598-018-37387-9.
6. ETUNet: Exploring efficient transformer enhanced UNet for 3D brain tumor segmentation.
   Comput Biol Med. 2024 Mar;171:108005. doi: 10.1016/j.compbiomed.2024.108005. Epub 2024 Jan 23.
7. Spatial adaptive and transformer fusion network (STFNet) for low-count PET blind denoising with MRI.
   Med Phys. 2022 Jan;49(1):343-356. doi: 10.1002/mp.15368. Epub 2021 Dec 10.
8. An automated cascade framework for glioma prognosis via segmentation, multi-feature fusion and classification techniques.
   Biomed Phys Eng Express. 2025 May 13;11(3). doi: 10.1088/2057-1976/add26c.
9. CroMAM: A Cross-Magnification Attention Feature Fusion Model for Predicting Genetic Status and Survival of Gliomas Using Histological Images.
   IEEE J Biomed Health Inform. 2024 Dec;28(12):7345-7356. doi: 10.1109/JBHI.2024.3431471. Epub 2024 Dec 5.
10. A multi-slice attention fusion and multi-view personalized fusion lightweight network for Alzheimer's disease diagnosis.
    BMC Med Imaging. 2024 Sep 27;24(1):258. doi: 10.1186/s12880-024-01429-8.