Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA.
Med Phys. 2021 Nov;48(11):7063-7073. doi: 10.1002/mp.15264. Epub 2021 Oct 13.
The delineation of organs at risk (OARs) is fundamental to cone-beam CT (CBCT)-based adaptive radiotherapy treatment planning, but is time consuming, labor intensive, and subject to interoperator variability. We investigated a deep learning-based rapid multiorgan delineation method for use in CBCT-guided adaptive pancreatic radiotherapy.
To improve the accuracy of OAR delineation, this study proposed two innovations. First, instead of segmenting organs directly on CBCT images, a pretrained cycle-consistent generative adversarial network (cycleGAN) was applied to generate synthetic CT images from the CBCT images. Second, an advanced deep learning model called mask-scoring regional convolutional neural network (MS R-CNN) was applied to the synthetic CT images to detect the positions and shapes of multiple organs simultaneously for final segmentation. The OAR contours delineated by the proposed method were validated against expert-drawn contours for geometric agreement using the Dice similarity coefficient (DSC), 95th percentile Hausdorff distance (HD95), mean surface distance (MSD), and residual mean square distance (RMS).
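As an illustration of two of the agreement metrics named above, DSC and HD95 can be computed on binary segmentation masks roughly as follows. This is a minimal sketch using NumPy and SciPy, not the authors' evaluation code, and the toy masks are invented for demonstration:

```python
import numpy as np
from scipy.ndimage import binary_erosion
from scipy.spatial.distance import cdist

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hd95(a, b):
    """95th percentile Hausdorff distance (in voxel units).

    Boundary voxels are the mask minus its morphological erosion;
    the metric is the larger of the two directed 95th-percentile
    boundary-to-boundary distances.
    """
    pa = np.argwhere(a & ~binary_erosion(a))
    pb = np.argwhere(b & ~binary_erosion(b))
    d = cdist(pa, pb)
    return max(np.percentile(d.min(axis=1), 95),
               np.percentile(d.min(axis=0), 95))

# Toy 2D masks standing in for one axial slice of an OAR contour
# (hypothetical data, for demonstration only).
auto = np.zeros((10, 10), dtype=bool)
auto[2:8, 2:8] = True      # automated contour
expert = np.zeros((10, 10), dtype=bool)
expert[3:9, 3:9] = True    # expert contour, shifted by one voxel

print(f"DSC  = {dice(auto, expert):.3f}")
print(f"HD95 = {hd95(auto, expert):.3f} voxels")
```

In practice HD95, MSD, and RMS are reported in millimeters, so the boundary coordinates would first be scaled by the CT voxel spacing before computing distances.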
Across the eight abdominal OARs (duodenum, large bowel, small bowel, left kidney, right kidney, liver, spinal cord, and stomach), the geometric agreement between automated and expert contours was as follows: mean DSC 0.92 (0.89-0.97), mean HD95 2.90 mm (1.63-4.19 mm), mean MSD 0.89 mm (0.61-1.36 mm), and mean RMS 1.43 mm (0.90-2.10 mm). Compared with competing methods, the proposed method showed significant improvements (p < 0.05) in all metrics for all eight organs. Once the model was trained, the contours of the eight OARs could be obtained in a matter of seconds.
We demonstrated the feasibility of a synthetic CT-aided deep learning framework for automated delineation of multiple OARs on CBCT. The proposed method could be implemented in the setting of pancreatic adaptive radiotherapy to rapidly contour OARs with high accuracy.