Department of Radiology, Duke University, Durham, NC, 27708, USA; Department of Electrical and Computer Engineering, Duke University, Durham, NC, 27708, USA; Department of Computer Science, Duke University, Durham, NC, 27708, USA; Department of Biostatistics & Bioinformatics, Duke University, Durham, NC, 27708, USA.
Department of Electrical and Computer Engineering, Duke University, Durham, NC, 27708, USA.
Med Image Anal. 2023 Oct;89:102918. doi: 10.1016/j.media.2023.102918. Epub 2023 Aug 2.
Training segmentation models for medical images continues to be challenging due to the limited availability of data annotations. The Segment Anything Model (SAM) is a foundation model trained on over 1 billion annotations, predominantly for natural images, that is designed to segment user-defined objects of interest in an interactive manner. While the model's performance on natural images is impressive, medical image domains pose their own set of challenges. Here, we perform an extensive evaluation of SAM's ability to segment medical images on a collection of 19 medical imaging datasets from various modalities and anatomies. In our experiments, we generated point and box prompts for SAM using a standard method that simulates interactive segmentation. We report the following findings: (1) SAM's performance based on single prompts varies widely across datasets and tasks, from IoU=0.1135 for spine MRI to IoU=0.8650 for hip X-ray. (2) Segmentation performance appears better for well-circumscribed objects with unambiguous prompts, such as organ segmentation in computed tomography, and poorer in various other scenarios, such as brain tumor segmentation. (3) SAM performs notably better with box prompts than with point prompts. (4) SAM outperforms the similar interactive segmentation methods RITM, SimpleClick, and FocalClick in almost all single-point prompt settings. (5) When multiple point prompts are provided iteratively, SAM's performance generally improves only slightly, while the other methods improve to the point of surpassing SAM's point-based performance. We also provide illustrations of SAM's performance on all tested datasets, of iterative segmentation, and of SAM's behavior under prompt ambiguity. We conclude that SAM shows impressive zero-shot segmentation performance for certain medical imaging datasets, but moderate to poor performance for others. SAM has the potential to make a significant impact on automated segmentation in medical imaging, but appropriate care needs to be applied when using it. Code for evaluating SAM is publicly available at https://github.com/mazurowski-lab/segment-anything-medical-evaluation.
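The exact prompt-simulation protocol is not spelled out in the abstract; the sketch below is a minimal illustration, assuming the common convention of placing a simulated click at the foreground pixel farthest from the object boundary and using a tight bounding box as the box prompt. The function names are hypothetical, the commented SamPredictor usage follows the public segment-anything package, and the checkpoint path is a placeholder.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def simulate_point_prompt(gt_mask: np.ndarray) -> tuple[int, int]:
    """Pick the foreground pixel farthest from the mask boundary,
    a common way to simulate a user's first click (assumption: the
    paper's 'standard method' may follow a similar convention)."""
    dist = distance_transform_edt(gt_mask)
    y, x = np.unravel_index(np.argmax(dist), dist.shape)
    return int(x), int(y)  # SAM expects (x, y) coordinates

def simulate_box_prompt(gt_mask: np.ndarray) -> np.ndarray:
    """Tight bounding box around the ground-truth mask, XYXY format."""
    ys, xs = np.nonzero(gt_mask)
    return np.array([xs.min(), ys.min(), xs.max(), ys.max()])

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection over union between two binary masks, the metric
    reported in the abstract (e.g., IoU=0.8650 for hip X-ray)."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter) / float(union) if union > 0 else 1.0

# Illustrative use with the official segment-anything package
# (checkpoint path is a placeholder):
# from segment_anything import sam_model_registry, SamPredictor
# sam = sam_model_registry["vit_b"](checkpoint="path/to/sam_vit_b.pth")
# predictor = SamPredictor(sam)
# predictor.set_image(rgb_image)  # HxWx3 uint8 RGB array
# x, y = simulate_point_prompt(gt_mask)
# masks, scores, _ = predictor.predict(
#     point_coords=np.array([[x, y]]),
#     point_labels=np.array([1]),  # 1 = foreground click
#     multimask_output=True,
# )
# print(iou(masks[np.argmax(scores)], gt_mask))
```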