Li Xueyuan, Cui Can, Deng Ruining, Tang Yucheng, Liu Quan, Yao Tianyuan, Bao Shunxing, Chowdhury Naweed, Yang Haichun, Huo Yuankai
Vanderbilt University, Data Science Institute, Nashville, Tennessee, United States.
Vanderbilt University, Department of Computer Science, Nashville, Tennessee, United States.
J Med Imaging (Bellingham). 2025 Sep;12(5):057501. doi: 10.1117/1.JMI.12.5.057501. Epub 2025 Sep 4.
Recent developments in computational pathology have been driven by advances in vision foundation models (VFMs), particularly the Segment Anything Model (SAM). This model facilitates nuclei segmentation through two primary methods: prompt-based zero-shot segmentation and the use of cell-specific SAM models for direct segmentation. These approaches enable effective segmentation across a range of nuclei and cells. However, general VFMs often face challenges with fine-grained semantic segmentation, such as identifying specific nuclei subtypes or particular cells.
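To make the prompt-based mode concrete: SAM takes an image plus a sparse prompt (e.g., a clicked point) and returns a mask for the object under that prompt, with no class-specific training. The sketch below is a deliberately simplified numpy stand-in for that behavior, using a flood fill from a point prompt over a binarized synthetic image in place of SAM's image encoder and mask decoder; the image, prompt, and threshold are all illustrative, not part of the paper's method.

```python
import numpy as np
from collections import deque

def point_prompt_segment(image, point, threshold=0.5):
    """Toy stand-in for prompt-based segmentation: grow a mask from a
    clicked point over the connected region of above-threshold pixels."""
    h, w = image.shape
    fg = image >= threshold
    mask = np.zeros((h, w), dtype=bool)
    r, c = point
    if not fg[r, c]:
        return mask  # prompt landed on background: empty mask
    queue = deque([(r, c)])
    mask[r, c] = True
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and fg[nr, nc] and not mask[nr, nc]:
                mask[nr, nc] = True
                queue.append((nr, nc))
    return mask

# Synthetic image with two "nuclei"; a point prompt selects only one of them.
img = np.zeros((8, 8))
img[1:3, 1:3] = 1.0   # nucleus A
img[5:7, 5:7] = 1.0   # nucleus B
mask = point_prompt_segment(img, (1, 1))
print(int(mask.sum()))  # 4: only nucleus A is segmented
```

The point prompt disambiguates which instance to segment, which is why such models separate instances well yet still cannot say *what* the instance is: assigning a subtype is exactly the fine-grained semantic step the abstract identifies as the open challenge.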
In this paper, we propose the molecular-empowered all-in-SAM model to advance computational pathology by leveraging the capabilities of VFMs. The model takes a full-stack approach covering (1) annotation: engaging lay annotators through molecular-empowered learning to reduce the need for detailed pixel-level annotations; (2) learning: adapting the SAM model to emphasize specific semantics via a SAM adapter, exploiting its strong generalizability; and (3) refinement: enhancing segmentation accuracy by integrating molecular-oriented corrective learning.
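The refinement component relies on the idea that a molecular marker channel (e.g., immunofluorescence intensity) can overrule a lay annotator's class label when the two clearly disagree. The sketch below illustrates that idea only in outline: the function name, thresholds, and data are all hypothetical, and the paper's actual corrective-learning procedure is not reproduced here.

```python
import numpy as np

def correct_labels(masks, marker, labels, pos_thresh=0.6, neg_thresh=0.2):
    """Toy molecular-oriented correction: relabel each annotated cell by its
    mean molecular-marker intensity when it clearly contradicts the lay label.
    masks: (n, h, w) boolean instance masks; marker: (h, w) intensities in [0, 1];
    labels: (n,) 0/1 lay annotations. Thresholds are illustrative choices."""
    corrected = labels.copy()
    for i, m in enumerate(masks):
        mean_int = marker[m].mean()
        if mean_int >= pos_thresh:
            corrected[i] = 1   # clearly marker-positive: force positive class
        elif mean_int <= neg_thresh:
            corrected[i] = 0   # clearly marker-negative: force negative class
        # ambiguous intensity: keep the lay annotation
    return corrected

# Two cells; the lay annotator mislabeled the marker-positive one as negative.
marker = np.zeros((6, 6))
marker[0:2, 0:2] = 0.9
masks = np.zeros((2, 6, 6), dtype=bool)
masks[0, 0:2, 0:2] = True   # bright (marker-positive) cell
masks[1, 4:6, 4:6] = True   # dark (marker-negative) cell
labels = np.array([0, 0])   # lay labels: both negative
print(correct_labels(masks, marker, labels))  # first label corrected to 1
```

Keeping the lay annotation in the ambiguous middle band is the design choice that lets coarse, cheap annotations remain useful while the molecular signal fixes only confident errors.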
Experimental results from both in-house and public datasets show that the all-in-SAM model significantly improves cell classification performance, even when faced with varying annotation quality.
Our approach not only reduces the workload for annotators but also extends the accessibility of precise biomedical image analysis to resource-limited settings, thereby advancing medical diagnostics and automating pathology image analysis.