Deng Ruining, Li Yanwei, Li Peize, Wang Jiacheng, Remedios Lucas W, Agzamkhodjaev Saydolimkhon, Asad Zuhayr, Liu Quan, Cui Can, Wang Yaohong, Wang Yihan, Tang Yucheng, Yang Haichun, Huo Yuankai
Vanderbilt University, Nashville TN 37215, USA.
Vanderbilt University Medical Center, Nashville TN 37232, USA.
Med Image Comput Comput Assist Interv. 2023 Oct;14225:497-507. doi: 10.1007/978-3-031-43987-2_48. Epub 2023 Oct 1.
Multi-class cell segmentation in high-resolution Giga-pixel whole slide images (WSI) is critical for various clinical applications. Training such an AI model typically requires labor-intensive pixel-wise manual annotation from experienced domain experts (e.g., pathologists). Moreover, such annotation is error-prone when differentiating fine-grained cell types (e.g., podocyte and mesangial cells) with the naked eye. In this study, we assess the feasibility of democratizing pathological AI deployment by using only lay annotators (annotators without medical domain knowledge). The contribution of this paper is threefold: (1) We propose a molecular-empowered learning scheme for multi-class cell segmentation using partial labels from lay annotators; (2) The proposed method integrates Giga-pixel level molecular-morphology cross-modality registration, molecular-informed annotation, and a molecular-oriented segmentation model, achieving significantly better performance with 3 lay annotators than with 2 experienced pathologists; (3) We propose a deep corrective learning (learning with imperfect labels) method to further improve segmentation performance using partially annotated noisy data. In our experiments, the proposed method achieved F1 = 0.8496 using molecular-informed annotations from lay annotators, outperforming conventional morphology-based annotations (F1 = 0.7015) from experienced pathologists. Our method democratizes the development of pathological segmentation deep models to the lay annotator level, which consequently scales up the learning process in a manner similar to non-medical computer vision tasks. The official implementation and cell annotations are publicly available at https://github.com/hrlblab/MolecularEL.