Suppr 超能文献


CroMAM: A Cross-Magnification Attention Feature Fusion Model for Predicting Genetic Status and Survival of Gliomas Using Histological Images.

Author Information

Guo Jisen, Xu Peng, Wu Yuankui, Tao Yunyun, Han Chu, Lin Jiatai, Zhao Ke, Liu Zaiyi, Liu Wenbin, Lu Cheng

Publication Information

IEEE J Biomed Health Inform. 2024 Dec;28(12):7345-7356. doi: 10.1109/JBHI.2024.3431471. Epub 2024 Dec 5.

DOI: 10.1109/JBHI.2024.3431471
PMID: 39028591
Abstract

Predicting the gene mutation status in whole slide images (WSIs) is crucial for the clinical treatment, cancer management, and research of gliomas. With advancements in CNN and Transformer algorithms, several promising models have been proposed. However, existing studies have paid little attention to fusing multi-magnification information, and existing models require processing all patches from a whole slide image. In this paper, we propose a cross-magnification attention model called CroMAM for predicting the genetic status and survival of gliomas. CroMAM first utilizes a systematic patch extraction module to sample a subset of representative patches for downstream analysis. Next, CroMAM applies a Swin Transformer to extract local and global features from patches at different magnifications, followed by acquiring high-level features and dependencies among single-magnification patches through a Vision Transformer. Subsequently, CroMAM exchanges the integrated feature representations of the different magnifications and encourages each representation to learn discriminative information from the other magnifications. Additionally, we design a cross-magnification attention analysis method to examine the effect of cross-magnification attention quantitatively and qualitatively, which increases the model's explainability. To validate the performance of the model, we compare the proposed model with other multi-magnification feature fusion models on three tasks across two datasets. Extensive experiments demonstrate that the proposed model achieves state-of-the-art performance in predicting the genetic status and survival of gliomas.
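The fusion step described in the abstract can be sketched, very roughly, as cross-attention between patch embeddings from two magnifications, with a residual connection so each magnification keeps its own information. The sketch below is an illustrative reconstruction under that assumption, not the paper's implementation: the actual CroMAM uses learned projection weights over Swin Transformer and Vision Transformer features, and the function names and toy vectors here are invented for illustration.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def cross_magnification_attention(queries, keys_values):
    """Each token from one magnification attends over the other magnification's tokens.

    queries, keys_values: lists of feature vectors (lists of floats) from two
    different magnifications. Returns one fused vector per query token.
    """
    d = len(queries[0])
    scale = math.sqrt(d)
    fused = []
    for q in queries:
        # scaled dot-product scores against every token of the other magnification
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / scale for k in keys_values]
        weights = softmax(scores)
        # attention-weighted sum of the other magnification's tokens
        ctx = [sum(w * k[i] for w, k in zip(weights, keys_values)) for i in range(d)]
        # residual connection: the query keeps its own magnification's information
        fused.append([qi + ci for qi, ci in zip(q, ctx)])
    return fused

# Toy example: two low-magnification patch embeddings attend over
# three high-magnification patch embeddings (all values hypothetical).
low = [[1.0, 0.0], [0.0, 1.0]]
high = [[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]]
out = cross_magnification_attention(low, high)
```

In the real model this exchange runs symmetrically (low attends over high and vice versa), so each magnification's representation is enriched with discriminative cues from the other before the final prediction head.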


Similar Articles

1
CroMAM: A Cross-Magnification Attention Feature Fusion Model for Predicting Genetic Status and Survival of Gliomas Using Histological Images.
IEEE J Biomed Health Inform. 2024 Dec;28(12):7345-7356. doi: 10.1109/JBHI.2024.3431471. Epub 2024 Dec 5.
2
Dual-path neural network extracts tumor microenvironment information from whole slide images to predict molecular typing and prognosis of Glioma.
Comput Methods Programs Biomed. 2025 Apr;261:108580. doi: 10.1016/j.cmpb.2024.108580. Epub 2025 Jan 4.
3
A 3D hierarchical cross-modality interaction network using transformers and convolutions for brain glioma segmentation in MR images.
Med Phys. 2024 Nov;51(11):8371-8389. doi: 10.1002/mp.17354. Epub 2024 Aug 13.
4
Positional encoding-guided transformer-based multiple instance learning for histopathology whole slide images classification.
Comput Methods Programs Biomed. 2025 Jan;258:108491. doi: 10.1016/j.cmpb.2024.108491. Epub 2024 Nov 9.
5
Computer-aided diagnosis system for grading brain tumor using histopathology images based on color and texture features.
BMC Med Imaging. 2024 Jul 19;24(1):177. doi: 10.1186/s12880-024-01355-9.
6
MG-Trans: Multi-Scale Graph Transformer With Information Bottleneck for Whole Slide Image Classification.
IEEE Trans Med Imaging. 2023 Dec;42(12):3871-3883. doi: 10.1109/TMI.2023.3313252. Epub 2023 Nov 30.
7
HST-MRF: Heterogeneous Swin Transformer With Multi-Receptive Field for Medical Image Segmentation.
IEEE J Biomed Health Inform. 2024 Jul;28(7):4048-4061. doi: 10.1109/JBHI.2024.3397047. Epub 2024 Jul 2.
8
Hagnifinder: Recovering magnification information of digital histological images using deep learning.
J Pathol Inform. 2023 Feb 16;14:100302. doi: 10.1016/j.jpi.2023.100302. eCollection 2023.
9
MedFuseNet: fusing local and global deep feature representations with hybrid attention mechanisms for medical image segmentation.
Sci Rep. 2025 Feb 11;15(1):5093. doi: 10.1038/s41598-025-89096-9.
10
[Fully Automatic Glioma Segmentation Algorithm of Magnetic Resonance Imaging Based on 3D-UNet With More Global Contextual Feature Extraction: An Improvement on Insufficient Extraction of Global Features].
Sichuan Da Xue Xue Bao Yi Xue Ban. 2024 Mar 20;55(2):447-454. doi: 10.12182/20240360208.