LESS: Label-efficient multi-scale learning for cytological whole slide image screening.

Authors

Zhao Beidi, Deng Wenlong, Li Zi Han Henry, Zhou Chen, Gao Zuhua, Wang Gang, Li Xiaoxiao

Affiliations

Department of Electrical and Computer Engineering, The University of British Columbia, Vancouver, BC V6T 1Z4, Canada; Vector Institute, Toronto, ON M5G 1M1, Canada.

Department of Pathology, BC Cancer Agency, Vancouver, BC V5Z 4E6, Canada.

Publication

Med Image Anal. 2024 May;94:103109. doi: 10.1016/j.media.2024.103109. Epub 2024 Feb 20.

Abstract

In computational pathology, multiple instance learning (MIL) is widely used to circumvent the computational impasse in giga-pixel whole slide image (WSI) analysis. It usually consists of two stages: patch-level feature extraction and slide-level aggregation. Recently, pretrained models or self-supervised learning have been used to extract patch features, but they suffer from low effectiveness or inefficiency due to overlooking the task-specific supervision provided by slide labels. Here we propose a weakly-supervised Label-Efficient WSI Screening method, dubbed LESS, for cytological WSI analysis with only slide-level labels, which can be effectively applied to small datasets. First, we suggest using variational positive-unlabeled (VPU) learning to uncover hidden labels of both benign and malignant patches. We provide appropriate supervision by using slide-level labels to improve the learning of patch-level features. Next, we take into account the sparse and random arrangement of cells in cytological WSIs. To address this, we propose a strategy to crop patches at multiple scales and utilize a cross-attention vision transformer (CrossViT) to combine information from different scales for WSI classification. The combination of our two steps achieves task-alignment, improving effectiveness and efficiency. We validate the proposed label-efficient method on a urine cytology WSI dataset encompassing 130 samples (13,000 patches) and a breast cytology dataset FNAC 2019 with 212 samples (21,200 patches). The experiment shows that the proposed LESS reaches 84.79%, 85.43%, 91.79% and 78.30% on the urine cytology WSI dataset, and 96.88%, 96.86%, 98.95%, 97.06% on the breast cytology high-resolution-image dataset in terms of accuracy, AUC, sensitivity and specificity. It outperforms state-of-the-art MIL methods on pathology WSIs and realizes automatic cytological WSI cancer screening.
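The abstract outlines two technical steps: mining patch-level supervision from slide labels with variational positive-unlabeled (VPU) learning, and fusing patch features cropped at multiple scales with a CrossViT-style cross-attention aggregator. The sketch below is illustrative only, not the authors' implementation: the module names, tensor shapes, mean-pooling classifier head, and the omission of VPU's MixUp consistency term are all assumptions.

```python
# Minimal sketch of the two LESS components described in the abstract.
# All names, sizes, and the aggregation head are assumptions for illustration.
import torch
import torch.nn as nn


def vpu_loss(phi_positive: torch.Tensor, phi_unlabeled: torch.Tensor) -> torch.Tensor:
    """Variational PU objective (Chen et al., 2020), used here to mine
    patch-level supervision from slide-level labels.

    phi_positive / phi_unlabeled: classifier outputs in (0, 1) for patches from
    malignant slides (positive-containing) and from the unlabeled pool.
    """
    # log E_u[Phi(x)] - E_p[log Phi(x)]  (MixUp consistency term omitted)
    return torch.log(phi_unlabeled.mean() + 1e-8) - torch.log(phi_positive + 1e-8).mean()


class CrossScaleAttention(nn.Module):
    """Toy cross-attention between two patch scales: small-scale tokens attend
    to large-scale tokens and vice versa, loosely following the CrossViT idea
    of fusing multi-scale branches for slide-level classification."""

    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.small_to_large = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.large_to_small = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.classifier = nn.Linear(2 * dim, 2)  # benign vs. malignant slide

    def forward(self, small_tokens: torch.Tensor, large_tokens: torch.Tensor) -> torch.Tensor:
        # small_tokens: (B, Ns, dim) small-scale patch features
        # large_tokens: (B, Nl, dim) large-scale patch features
        s, _ = self.small_to_large(small_tokens, large_tokens, large_tokens)
        l, _ = self.large_to_small(large_tokens, small_tokens, small_tokens)
        slide_repr = torch.cat([s.mean(dim=1), l.mean(dim=1)], dim=-1)
        return self.classifier(slide_repr)


def crop_multi_scale(wsi: torch.Tensor, sizes=(128, 256)):
    """Crop non-overlapping patches at several scales from a (C, H, W) tensor;
    a real pipeline would read a pyramidal WSI region by region instead."""
    patches = {}
    for s in sizes:
        p = wsi.unfold(1, s, s).unfold(2, s, s)            # (C, nh, nw, s, s)
        patches[s] = p.permute(1, 2, 0, 3, 4).reshape(-1, wsi.shape[0], s, s)
    return patches


if __name__ == "__main__":
    patches = crop_multi_scale(torch.randn(3, 512, 512))
    print(patches[128].shape, patches[256].shape)   # (16, 3, 128, 128) (4, 3, 256, 256)

    model = CrossScaleAttention(dim=256)
    small = torch.randn(1, 100, 256)   # e.g. 100 small-scale patch embeddings
    large = torch.randn(1, 25, 256)    # e.g. 25 large-scale patch embeddings
    print(model(small, large).shape)   # torch.Size([1, 2])
```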

