Lv Cheng, Shu Xu-Jun, Qiu Jun, Xiong Zi-Cheng, Bo Ye Jing, Bo Li Shang, Chen Sheng-Bo, Rao Hong
School of Mathematics and Computer Sciences, Nanchang University, Nanchang, Jiangxi Province, China.
Department of Neurosurgery, Nanjing Jinling Hospital, Nanjing, Jiangsu Province, China.
Med Phys. 2025 Jul;52(7):e17958. doi: 10.1002/mp.17958.
Medical image segmentation is a fundamental task in medical image analysis and has been widely applied across many medical fields. The Segment Anything Model (SAM), a recent transformer-based deep learning segmentation model, has demonstrated outstanding performance on natural image segmentation tasks through large-scale pre-training, achieving zero-shot semantic understanding and pixel-level segmentation. However, medical images present challenges such as style variability, ill-defined object boundaries, and feature ambiguity, which limit SAM's direct applicability to medical image segmentation tasks.
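For context, SAM produces masks from user prompts (points or boxes) via an image encoder, prompt encoder, and mask decoder. A minimal sketch of zero-shot, point-prompted inference with Meta's segment-anything package is shown below; the checkpoint file, image array, and prompt coordinates are illustrative placeholders, not data or code from this study.

```python
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

# Load a pre-trained SAM backbone (checkpoint path is a placeholder assumption).
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)

# A single T1CE slice converted to 3-channel uint8 RGB, since SAM expects natural-image input.
image = np.zeros((512, 512, 3), dtype=np.uint8)  # placeholder slice

# Encode the image once, then segment with a single foreground point prompt
# (hypothetical coordinates near a lesion; label 1 marks foreground).
predictor.set_image(image)
masks, scores, logits = predictor.predict(
    point_coords=np.array([[256, 256]]),
    point_labels=np.array([1]),
    multimask_output=True,  # return several candidate masks with confidence scores
)
best_mask = masks[np.argmax(scores)]  # keep the highest-scoring candidate
```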
To enhance the robustness of SAM in the medical segmentation domain, we propose the SAM-RCCF framework, which aims to improve segmentation generalizability and precision across diverse intracranial tumor types, including gliomas, metastatic tumors, and meningiomas.
The study collected axial T1-weighted contrast-enhanced (T1CE) magnetic resonance imaging (MRI) data from 484 brain tumor patients, comprising 164 cases of glioma, 158 cases of metastatic tumor, and 162 cases of meningioma. All imaging data were randomly divided into training and testing sets. We performed segmentation experiments on these data with the proposed SAM-RCCF model and adopted five-fold cross-validation to evaluate its performance. The framework integrates a RefineNet module and a conditional control field with a conditional controller and a mask generator, enabling precise feature recognition and tailored segmentation of medical images and thereby optimizing segmentation accuracy.

In the glioma segmentation experiment, the SAM-RCCF model achieved outstanding performance, with an IoU of 0.90, a DSC of 0.912, and an HD of 13.13. For the meningioma segmentation task, it obtained an IoU of 0.9214, a DSC of 0.93, and an HD of 11.41, significantly outperforming other classic segmentation models.
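The reported metrics (IoU, DSC, and HD) can all be computed from a binary predicted mask and the corresponding ground-truth mask. The sketch below is a minimal NumPy/SciPy illustration, not the paper's exact evaluation code; in particular, it measures HD in pixel units over all foreground coordinates rather than over extracted surface points, and applying physical voxel spacing or a 95th-percentile variant would change the values.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection over Union of two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    return np.logical_and(pred, gt).sum() / union if union else 1.0

def dsc(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient of two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0

def hausdorff(pred: np.ndarray, gt: np.ndarray) -> float:
    """Symmetric Hausdorff distance (in pixels) between foreground point sets."""
    p = np.argwhere(pred.astype(bool))
    g = np.argwhere(gt.astype(bool))
    if len(p) == 0 or len(g) == 0:
        return float("inf")  # undefined when one mask is empty
    return max(directed_hausdorff(p, g)[0], directed_hausdorff(g, p)[0])
```

Higher IoU and DSC (closer to 1) and lower HD indicate closer agreement between prediction and ground truth.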
The segmentation results demonstrate that, on glioma, metastatic tumor, and meningioma MRI images, the SAM-RCCF algorithm significantly outperforms the original SAM in DSC, HD, and IoU. These findings verify the effectiveness of the SAM-RCCF framework in segmenting complex and variable brain tumor images, improving both segmentation accuracy and robustness.