Hwang Joonil, Chun Jaehee, Cho Seungryong, Kim Joo-Ho, Cho Min-Seok, Choi Seo Hee, Kim Jin Sung
Department of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea.
Medical Image and Radiotherapy Lab (MIRLAB), Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea.
Adv Radiat Oncol. 2024 Jul 26;9(10):101580. doi: 10.1016/j.adro.2024.101580. eCollection 2024 Oct.
Herein, we developed a deep learning algorithm to improve the segmentation of the clinical target volume (CTV) on daily cone beam computed tomography (CBCT) scans in breast cancer radiation therapy. By leveraging the Intentional Deep Overfit Learning (IDOL) framework, we aimed to enhance personalized image-guided radiation therapy based on patient-specific learning.
We used 240 CBCT scans from 100 breast cancer patients and employed a 2-stage training approach. The first stage trained general deep learning models (Swin UNETR, UNET, and SegResNET) on 90 patients. The second stage intentionally overfit these models on the remaining 10 patients to produce patient-specific CBCT segmentations. Quantitative evaluation against expert contours on CBCT scans from the first through the 15th fraction used the Dice similarity coefficient (DSC), Hausdorff distance (HD), mean surface distance (MSD), and independent samples t tests.
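To make the 2-stage procedure concrete, the following is a minimal, self-contained sketch of an IDOL-style workflow: pretrain a general segmentation model on a multi-patient cohort, then deliberately overfit a copy of it to a single patient's early-fraction scans. The toy network, the Dice-style loss, and the synthetic tensors standing in for CBCT volumes and CTV masks are assumptions for illustration, not the authors' implementation.

import copy
import torch
import torch.nn as nn

class TinySegNet(nn.Module):  # toy stand-in for Swin UNETR / UNET / SegResNET
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv3d(8, 1, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

def dice_loss(logits, target, eps=1e-6):
    # Soft Dice loss on sigmoid probabilities (assumed loss; the paper's exact loss is not stated here).
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum()
    return 1 - (2 * inter + eps) / (prob.sum() + target.sum() + eps)

def train_epochs(model, volumes, masks, lr, epochs):
    # Simple full-batch training loop over (N, 1, D, H, W) volumes and binary masks.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        opt.zero_grad()
        loss = dice_loss(model(volumes), masks)
        loss.backward()
        opt.step()
    return model

# Synthetic stand-ins for CBCT volumes and CTV masks (real data would be preprocessed 3D scans).
cohort_x = torch.randn(8, 1, 16, 16, 16)
cohort_y = torch.randint(0, 2, (8, 1, 16, 16, 16)).float()
patient_x = torch.randn(3, 1, 16, 16, 16)
patient_y = torch.randint(0, 2, (3, 1, 16, 16, 16)).float()

# Stage 1: train a general model on the multi-patient cohort (90 patients in the study).
general_model = train_epochs(TinySegNet(), cohort_x, cohort_y, lr=1e-3, epochs=5)

# Stage 2: intentionally overfit a copy of the general model to one patient's own
# early-fraction CBCT scans, yielding a patient-specific model (the IDOL step).
patient_model = train_epochs(copy.deepcopy(general_model), patient_x, patient_y, lr=1e-4, epochs=20)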
IDOL integration significantly improved CTV segmentation, particularly with the Swin UNETR model (P values < .05). Using patient-specific data, IDOL improved the DSC, HD, and MSD metrics. The average DSC for the 15th fraction improved from 0.9611 to 0.9819, the average HD decreased from 4.0118 mm to 1.3935 mm, and the average MSD decreased from 0.8723 mm to 0.4603 mm. Incorporating CBCT scans from the initial treatment fractions (first to third) further improved results, with an average DSC of 0.9850, an average HD of 1.2707 mm, and an average MSD of 0.4076 mm for the 15th fraction, closely aligning with physician-drawn contours.
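For reference, the reported metrics follow their standard definitions; with P the predicted CTV, G the expert contour, ∂P and ∂G their surfaces, and d(p, ∂G) the minimum Euclidean distance from point p to surface ∂G, one common formulation (the exact variant used in the study, e.g., a percentile HD or a one-directional MSD, is not specified in the abstract) is

\mathrm{DSC}(P,G) = \frac{2\,|P \cap G|}{|P| + |G|}

\mathrm{HD}(P,G) = \max\Big\{\max_{p \in \partial P} d(p,\partial G),\ \max_{g \in \partial G} d(g,\partial P)\Big\}

\mathrm{MSD}(P,G) = \frac{1}{2}\left(\frac{1}{|\partial P|}\sum_{p \in \partial P} d(p,\partial G) + \frac{1}{|\partial G|}\sum_{g \in \partial G} d(g,\partial P)\right)

DSC is dimensionless (1 indicates perfect overlap), whereas HD and MSD are reported in millimeters (lower is better).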
Compared with a general model, our patient-specific deep learning training algorithm significantly improved CTV segmentation accuracy on CBCT scans in patients with breast cancer. This approach, combined with continuous training on each patient's daily CBCT scans, improved both the accuracy and the efficiency of CTV delineation. Future studies should explore the adaptability of the IDOL framework to diverse deep learning models, data sets, and cancer sites.