Wu Yizhou, Li Yuheng, Hu Mingzhe, Chang Chih-Wei, Qiu Richard L J, Wang Tonghe, Shu Hui-Kuo, Mao Hui, Tian Zhen, Yang Xiaofeng
Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA.
School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, Georgia, USA.
Med Phys. 2025 Jul;52(7):e18001. doi: 10.1002/mp.18001.
Brain metastases are a prevalent and serious complication in cancer patients, and effective treatment planning requires precise segmentation. While conventional neural networks have improved automation, they struggle to detect small metastases without increasing false positives. Addressing this challenge is critical for improving clinical outcomes in radiotherapy.
Accurate segmentation of brain metastases in magnetic resonance imaging (MRI) is crucial for clinical decision-making and treatment planning. Existing deep learning methods, such as nnUNet, offer limited sensitivity for small metastatic lesions without a corresponding increase in false positives. This study proposes a novel deep learning model aimed at addressing this challenge.
We developed 3D-MedDCNet, a deep learning architecture that incorporates deformable convolutions for brain metastasis detection and segmentation. We evaluated it on two datasets: the UCSF Brain Metastases Dataset, comprising 560 MRI scans, and the BraTS-METS 2023 Dataset, comprising 1,297 MRI scans with expert-annotated multi-sequence tumor segmentations. Models were assessed using sensitivity, precision, lesion-wise Dice, patient-wise Dice, and false positive rate. Training followed nnUNet's default pipeline, modified to integrate 3D deformable convolutions (3D-DCN) at the deepest encoder stage. We also conducted ablation studies to quantify the impact of 3D-DCN and benchmarked our model against state-of-the-art methods.
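The abstract does not specify how the lesion-wise metrics were computed. A common approach, sketched below under the assumption that predicted and ground-truth lesions are matched by connected-component overlap (the function name and overlap criterion are illustrative, not taken from the paper), is:

```python
import numpy as np
from scipy import ndimage

def lesion_wise_metrics(pred, gt):
    """Illustrative lesion-wise evaluation for binary 3D masks.

    A ground-truth lesion counts as detected (TP) if any predicted
    component overlaps it; predicted components matching no lesion
    are false positives. Lesion-wise Dice is averaged over detected
    lesions. This is one plausible convention, not the paper's exact
    protocol.
    """
    gt_labels, n_gt = ndimage.label(gt)        # connected components in GT
    pred_labels, n_pred = ndimage.label(pred)  # connected components in prediction

    tp, dice_scores, matched_pred = 0, [], set()
    for i in range(1, n_gt + 1):
        gt_mask = gt_labels == i
        # Predicted components touching this ground-truth lesion
        overlapping = set(np.unique(pred_labels[gt_mask])) - {0}
        if overlapping:
            tp += 1
            pred_mask = np.isin(pred_labels, list(overlapping))
            dice = 2 * np.logical_and(gt_mask, pred_mask).sum() / (
                gt_mask.sum() + pred_mask.sum())
            dice_scores.append(float(dice))
            matched_pred |= overlapping

    fp = n_pred - len(matched_pred)
    sensitivity = tp / n_gt if n_gt else 0.0
    fpr = fp / n_pred if n_pred else 0.0   # fraction of predicted lesions that are spurious
    lesion_dice = float(np.mean(dice_scores)) if dice_scores else 0.0
    return sensitivity, fpr, lesion_dice
```

For example, a prediction that exactly recovers one of two ground-truth lesions while also producing one spurious component yields a sensitivity of 0.5, a false positive rate of 0.5, and a lesion-wise Dice of 1.0 over the detected lesion.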
The developed 3D-MedDCNet outperformed two state-of-the-art methods across all evaluation metrics. It achieved lesion-wise Dice scores of 0.80 ± 0.01 (UCSF) and 0.76 ± 0.01 (BraTS), patient-wise Dice of 0.87 ± 0.01 and 0.82 ± 0.02, sensitivities of 0.84 ± 0.01 and 0.76 ± 0.01, and significantly lower false positive rates of 0.06 ± 0.02 and 0.14 ± 0.01, respectively. Ablation studies confirmed that 3D-DCN enhances sensitivity while maintaining precision, leading to superior segmentation.
3D-MedDCNet improved detection sensitivity and segmentation accuracy for brain metastases in MRI over existing state-of-the-art models. This approach enables more reliable automated segmentation and detection of small metastatic lesions for quantifying and staging metastatic disease, as well as for image-guided radiation treatment. Future work will focus on validating the model across diverse datasets, exploring foundation models to improve feature representation, and investigating instance-wise segmentation strategies for enhanced detection and precision.