Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Hong Kong.
School of Electrical and Information Engineering, Tianjin University, Tianjin, China.
Med Image Anal. 2023 Oct;89:102933. doi: 10.1016/j.media.2023.102933. Epub 2023 Aug 14.
Nuclei segmentation is a crucial task for whole slide image analysis in digital pathology. Generally, the segmentation performance of fully-supervised learning depends heavily on the amount and quality of the annotated data. However, it is time-consuming and expensive for professional pathologists to provide accurate pixel-level ground truth, while coarse labels such as point annotations are much easier to obtain. In this paper, we propose a weakly-supervised learning method for nuclei segmentation that requires only point annotations for training. First, coarse pixel-level labels are derived from the point annotations based on the Voronoi diagram and the k-means clustering method to avoid overfitting. Second, a co-training strategy with an exponential moving average method is designed to refine the incomplete supervision of the coarse labels. Third, a self-supervised visual representation learning method is tailored for nuclei segmentation of pathology images: it transforms the hematoxylin component images into the H&E stained images to gain a better understanding of the relationship between the nuclei and cytoplasm. We comprehensively evaluate the proposed method on two public datasets. Both visual and quantitative results demonstrate the superiority of our method over state-of-the-art weakly-supervised methods and its competitive performance relative to fully-supervised methods. Code is available at https://github.com/hust-linyi/SC-Net.
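The first step above (deriving coarse pixel-level labels from point annotations) can be sketched as follows. This is a minimal, dependency-free illustration, not the paper's implementation: `voronoi_labels` and `kmeans_1d` are hypothetical helpers, and the real pipeline operates on stain intensities and uses the Voronoi *edges* as background supervision.

```python
def voronoi_labels(points, height, width):
    """Discrete Voronoi partition: assign each pixel the index of its
    nearest annotated nucleus point (squared Euclidean distance).
    `points` is a list of (row, col) tuples."""
    labels = [[0] * width for _ in range(height)]
    for r in range(height):
        for c in range(width):
            labels[r][c] = min(
                range(len(points)),
                key=lambda i: (points[i][0] - r) ** 2 + (points[i][1] - c) ** 2,
            )
    return labels


def kmeans_1d(values, k=2, iters=20):
    """Tiny k-means on scalar pixel intensities, separating dark
    (hematoxylin-rich) nucleus pixels from brighter cytoplasm/background.
    Returns a cluster id for each value."""
    # Initialize the two centers at the intensity extremes.
    centers = [min(values), max(values)] if k == 2 else list(values[:k])
    assign = [0] * len(values)
    for _ in range(iters):
        assign = [min(range(k), key=lambda j: abs(v - centers[j])) for v in values]
        for j in range(k):
            members = [v for v, a in zip(values, assign) if a == j]
            if members:
                centers[j] = sum(members) / len(members)
    return assign
```

Combining the two gives a coarse label map: the Voronoi partition localizes each nucleus around its point annotation, and the intensity clustering decides which pixels inside each cell are nucleus versus background.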
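The exponential moving average used in the co-training step can be sketched in a few lines. This is an illustrative update rule under the common teacher–student convention; the paper's exact schedule for `alpha` and how the teacher's predictions refine the coarse labels are not specified here.

```python
def ema_update(teacher_params, student_params, alpha=0.99):
    """One EMA step: the teacher's parameters track the student's slowly,
    yielding more stable predictions to refine the incomplete coarse labels.
    Parameters are represented as flat lists of floats for illustration."""
    return [
        alpha * t + (1.0 - alpha) * s
        for t, s in zip(teacher_params, student_params)
    ]
```

With `alpha` close to 1, the teacher averages the student over many training iterations, smoothing out the noise that single-step pseudo-labels would carry.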