MS-CLAM: Mixed supervision for the classification and localization of tumors in Whole Slide Images.

Authors

Tourniaire Paul, Ilie Marius, Hofman Paul, Ayache Nicholas, Delingette Hervé

Affiliations

Université Côte d'Azur, Inria, Epione project-team, Sophia Antipolis, France.

Laboratory of Clinical and Experimental Pathology, Pasteur Hospital, Université Côte d'Azur, Nice, France; Hospital-Related Biobank BB-0033-00025, France; FHU OncoAge, France.

Publication

Med Image Anal. 2023 Apr;85:102763. doi: 10.1016/j.media.2023.102763. Epub 2023 Feb 6.

Abstract

Given the size of digitized Whole Slide Images (WSIs), it is generally laborious and time-consuming for pathologists to exhaustively delineate objects within them, especially for datasets containing hundreds of slides to annotate. Most of the time, only slide-level labels are available, giving rise to the development of weakly-supervised models. However, it is often difficult to obtain accurate object localization from such models, e.g., patches with tumor cells in a tumor detection task, as they are mainly designed for slide-level classification. Using the attention-based deep Multiple Instance Learning (MIL) model as our base weakly-supervised model, we propose to use mixed supervision - i.e., the use of both slide-level and patch-level labels - to improve both the classification and the localization performance of the original model, using only a limited number of slides with patch-level labels. In addition, we propose an attention loss term to regularize the attention between key instances, and a paired batch method to create balanced batches for the model. First, we show that the changes made to the model already improve its performance and interpretability in the weakly-supervised setting. Furthermore, when using only between 12% and 62% of the total available patch-level annotations, we can reach performance close to that of fully-supervised models on the tumor classification datasets DigestPath2019 and Camelyon16.
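The base model aggregates patch embeddings into a slide embedding via attention-based MIL pooling, where the learned attention weights also serve as patch-level localization scores. Below is a minimal NumPy sketch of (non-gated) attention pooling in the style of attention-based deep MIL; the parameter names (`V`, `w`) and dimensions are illustrative, not taken from the MS-CLAM implementation.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_mil_pool(H, V, w):
    """Attention-based MIL pooling over one slide's patch embeddings.

    H: (K, d) embeddings of the K patches in the slide (the "bag").
    V: (h, d) and w: (h,) are the attention parameters.
    Returns the slide embedding z (d,) and attention weights a (K,),
    which can be read as per-patch importance (e.g., tumor evidence).
    """
    scores = np.tanh(H @ V.T) @ w   # (K,) unnormalized attention scores
    a = softmax(scores)             # weights sum to 1 over the bag
    z = a @ H                       # attention-weighted slide embedding
    return z, a

# Toy example: 8 patches with 16-dim embeddings, hidden size 4.
rng = np.random.default_rng(0)
K, d, h = 8, 16, 4
H = rng.normal(size=(K, d))
V = rng.normal(size=(h, d))
w = rng.normal(size=h)
z, a = attention_mil_pool(H, V, w)
```

In the mixed-supervision setting, a patch-level loss on slides that do carry patch annotations can be added on top of the usual slide-level classification loss computed from `z`, which is the general idea behind using the limited patch-level labels.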
