
CroMAM: A Cross-Magnification Attention Feature Fusion Model for Predicting Genetic Status and Survival of Gliomas Using Histological Images.

Author Information

Guo Jisen, Xu Peng, Wu Yuankui, Tao Yunyun, Han Chu, Lin Jiatai, Zhao Ke, Liu Zaiyi, Liu Wenbin, Lu Cheng

Publication Information

IEEE J Biomed Health Inform. 2024 Dec;28(12):7345-7356. doi: 10.1109/JBHI.2024.3431471. Epub 2024 Dec 5.

Abstract

Predicting the gene mutation status in whole slide images (WSIs) is crucial for the clinical treatment, cancer management, and research of gliomas. With advancements in CNN and Transformer algorithms, several promising models have been proposed. However, existing studies have paid little attention to fusing multi-magnification information, and existing models require processing all patches from a whole slide image. In this paper, we propose a cross-magnification attention model, CroMAM, for predicting the genetic status and survival of gliomas. CroMAM first uses a systematic patch extraction module to sample a subset of representative patches for downstream analysis. Next, it applies a Swin Transformer to extract local and global features from patches at different magnifications, then acquires high-level features and dependencies among single-magnification patches through a Vision Transformer. Subsequently, CroMAM exchanges the integrated feature representations of the different magnifications, encouraging each representation to learn discriminative information from the other magnification. Additionally, we design a cross-magnification attention analysis method that examines the effect of cross-magnification attention quantitatively and qualitatively, which increases the model's explainability. To validate the model's performance, we compare it with other multi-magnification feature fusion models on three tasks across two datasets. Extensive experiments demonstrate that the proposed model achieves state-of-the-art performance in predicting the genetic status and survival of gliomas.
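The cross-magnification exchange described in the abstract can be illustrated with a minimal sketch: standard scaled dot-product cross-attention, where integrated features from one magnification act as queries against features from the other. The abstract does not specify the exact formulation (projections, multi-head structure, normalization), so everything below — function names, dimensions, and the omission of learned projection matrices — is an illustrative assumption, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_magnification_attention(feat_q, feat_kv):
    """Tokens from one magnification attend to tokens from the other.

    feat_q:  (n_q, d) integrated features at magnification A (e.g. 20x)
    feat_kv: (n_k, d) integrated features at magnification B (e.g. 10x)
    Returns: (n_q, d) features of A enriched with information from B.
    """
    d = feat_q.shape[-1]
    scores = feat_q @ feat_kv.T / np.sqrt(d)   # (n_q, n_k) similarity
    attn = softmax(scores, axis=-1)            # rows sum to 1
    return attn @ feat_kv                      # weighted mix of B's features

# Toy example: 4 tokens at one magnification, 6 at the other, dim 8.
rng = np.random.default_rng(0)
a = rng.standard_normal((4, 8))
b = rng.standard_normal((6, 8))
a_fused = cross_magnification_attention(a, b)  # A enriched by B
b_fused = cross_magnification_attention(b, a)  # B enriched by A
print(a_fused.shape, b_fused.shape)
```

The exchange is symmetric: each magnification's representation is run as the query side once, so both branches carry discriminative information from the other scale before the final prediction head.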

