

Cancer Survival Prediction From Whole Slide Images With Self-Supervised Learning and Slide Consistency.

Publication Information

IEEE Trans Med Imaging. 2023 May;42(5):1401-1412. doi: 10.1109/TMI.2022.3228275. Epub 2023 May 2.

Abstract

Histopathological Whole Slide Images (WSIs) at giga-pixel resolution are the gold standard for cancer analysis and prognosis. Due to the scarcity of pixel- or patch-level annotations of WSIs, many existing methods attempt to predict survival outcomes with a three-stage strategy comprising patch selection, patch-level feature extraction, and aggregation. However, the patch features are usually extracted by truncated models (e.g., ResNet) pretrained on ImageNet without fine-tuning on WSI tasks, and the aggregation stage does not consider the many-to-one relationship between multiple WSIs and a single patient. In this paper, we propose a novel survival prediction framework that consists of patch sampling, feature extraction, and patient-level survival prediction. Specifically, we employ two kinds of self-supervised learning methods, i.e., colorization and cross-channel prediction, as pretext tasks to train ConvNet-based models tailored for extracting features from WSIs. Then, at the patient-level survival prediction stage, we explicitly aggregate features from multiple WSIs, using consistency and contrastive losses to normalize slide-level features at the patient level. We conduct extensive experiments on three large-scale datasets: TCGA-GBM, TCGA-LUSC, and NLST. Experimental results demonstrate the effectiveness of the proposed framework, which achieves state-of-the-art performance compared with previous studies, with concordance indices of 0.670, 0.679, and 0.711 on TCGA-GBM, TCGA-LUSC, and NLST, respectively.
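The abstract describes the consistency and contrastive losses only at a high level. As an illustration of the idea — not the authors' actual formulation — here is a minimal pure-Python sketch of one plausible version: slide-level feature vectors of a patient are mean-pooled into a patient embedding, a consistency term pulls each slide toward that embedding, and a hinge-style contrastive term pushes embeddings of different patients apart. All function names and the margin value are hypothetical.

```python
def patient_embedding(slide_feats):
    """Mean-pool slide-level feature vectors (a list of equal-length
    lists, one per WSI) into a single patient-level embedding."""
    n = len(slide_feats)
    return [sum(col) / n for col in zip(*slide_feats)]

def consistency_loss(slide_feats):
    """Penalize disagreement among slides of the same patient: mean
    squared Euclidean distance of each slide feature to the patient mean."""
    center = patient_embedding(slide_feats)
    sq = [sum((x - c) ** 2 for x, c in zip(f, center)) for f in slide_feats]
    return sum(sq) / len(sq)

def contrastive_loss(feats_a, feats_b, margin=1.0):
    """Hinge-style contrastive term: push the mean embeddings of two
    different patients at least `margin` apart."""
    ea, eb = patient_embedding(feats_a), patient_embedding(feats_b)
    d = sum((x - y) ** 2 for x, y in zip(ea, eb)) ** 0.5
    return max(0.0, margin - d) ** 2

# Two slides of one patient that agree perfectly -> zero consistency loss.
print(consistency_loss([[1.0, 0.0], [1.0, 0.0]]))  # → 0.0
```

In practice such terms would be computed on learned features inside a training loop; the sketch only makes the patient-level many-to-one aggregation concrete.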
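The concordance index reported in the results measures how often the model ranks patients' predicted risks in the same order as their observed survival times, while handling right-censored patients. A minimal sketch of Harrell's C-index (a standard definition, not the authors' evaluation code):

```python
import itertools

def concordance_index(times, events, risk_scores):
    """Harrell's concordance index: the fraction of comparable patient
    pairs whose predicted risk ordering agrees with their survival
    ordering. A pair is comparable only when the patient with the
    earlier time actually experienced the event (was not censored)."""
    concordant, ties, comparable = 0, 0, 0
    for i, j in itertools.combinations(range(len(times)), 2):
        # Order the pair so that patient `a` has the earlier observed time.
        a, b = (i, j) if times[i] < times[j] else (j, i)
        if not events[a]:
            continue  # earlier time is censored: pair is not comparable
        comparable += 1
        if risk_scores[a] > risk_scores[b]:
            concordant += 1  # higher risk predicted for the earlier event
        elif risk_scores[a] == risk_scores[b]:
            ties += 1  # tied predictions count half
    return (concordant + 0.5 * ties) / comparable

# Toy example: 4 patients, higher score = higher predicted risk.
times  = [2.0, 5.0, 7.0, 9.0]
events = [1, 1, 0, 1]          # 1 = event observed, 0 = censored
scores = [0.9, 0.6, 0.4, 0.1]  # perfectly anti-ordered with time
print(concordance_index(times, events, scores))  # → 1.0
```

A C-index of 0.5 corresponds to random ranking and 1.0 to perfect ranking, which puts the reported 0.670-0.711 values in context.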

