Jiang Jue, Riyahi Alam Sadegh, Chen Ishita, Zhang Perry, Rimner Andreas, Deasy Joseph O, Veeraraghavan Harini
Department of Medical Physics, Memorial Sloan Kettering Cancer Center, 1275 York Avenue, New York, NY, 10065, USA.
Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, 1275 York Avenue, New York, NY, 10065, USA.
Med Phys. 2021 Jul;48(7):3702-3713. doi: 10.1002/mp.14902. Epub 2021 May 25.
Despite the widespread availability of in-treatment-room cone beam computed tomography (CBCT) imaging, the lack of reliable segmentation methods means CBCT is used only for gross setup corrections in lung radiotherapy. Accurate and reliable auto-segmentation tools could enable volumetric response assessment and geometry-guided adaptive radiation therapy. Therefore, we developed a new deep learning CBCT lung tumor segmentation method.
The key idea of our approach, called cross-modality educed distillation (CMEDL), is to use magnetic resonance imaging (MRI) to guide a CBCT segmentation network to extract more informative features during training. We accomplish this by training an end-to-end network comprised of unpaired domain adaptation (UDA) and cross-domain segmentation distillation networks (SDNs) using unpaired CBCT and MRI datasets. The UDA approach uses CBCT and MRI scans that are not aligned and may arise from different sets of patients. The UDA network synthesizes pseudo MRI from CBCT images. The SDN consists of teacher MRI and student CBCT segmentation networks. Feature distillation regularizes the student network to extract CBCT features that match the statistical distribution of MRI features extracted by the teacher network, yielding better differentiation of tumor from background. The UDA network was implemented with a cycleGAN improved with contextual losses; the SDN was implemented separately with Unet and dense fully convolutional segmentation networks (DenseFCN). Performance comparisons were done against CBCT-only training using 2D and 3D networks. We also compared against an alternative framework that used UDA with an MR segmentation network, whereby segmentation was done on the synthesized pseudo MRI representation. All networks were trained with 216 weekly CBCTs and 82 T2-weighted turbo spin echo MRIs acquired from different patient cohorts. Validation was done on 20 weekly CBCTs from patients not used in training. Independent testing was done on 38 weekly CBCTs from patients not used in training or validation. Segmentation accuracy was measured using the surface Dice similarity coefficient (SDSC) and Hausdorff distance at the 95th percentile (HD95) metrics.
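The feature distillation step described above can be illustrated with a minimal sketch. The function below (a hypothetical NumPy stand-in, not the authors' implementation) penalizes the student CBCT network's intermediate features for deviating from the teacher MRI network's features on the corresponding pseudo MRI; the per-map normalization is an assumed design choice to make the penalty compare feature distributions rather than raw magnitudes.

```python
import numpy as np

def feature_distillation_loss(student_feats: np.ndarray,
                              teacher_feats: np.ndarray) -> float:
    """L2 feature-matching distillation loss (illustrative sketch).

    student_feats: features from the student (CBCT) network.
    teacher_feats: features from the teacher (MRI) network on pseudo MRI.
    Both are (batch, channels, H, W) arrays of the same shape.
    """
    def _norm(f: np.ndarray) -> np.ndarray:
        # Standardize each feature map so the loss is invariant to
        # per-map affine shifts in scale/offset (assumed normalization;
        # the published method may use a different scheme).
        mu = f.mean(axis=(2, 3), keepdims=True)
        sd = f.std(axis=(2, 3), keepdims=True) + 1e-6
        return (f - mu) / sd

    # Mean squared difference between normalized feature maps.
    return float(np.mean((_norm(student_feats) - _norm(teacher_feats)) ** 2))
```

In training, this term would be added to the student's segmentation loss so that gradient updates pull the CBCT feature statistics toward those of the MRI teacher.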
The CMEDL approach significantly improved (p < 0.001) the accuracy of both Unet (SDSC of 0.83 ± 0.08; HD95 of 7.69 ± 7.86 mm) and DenseFCN (SDSC of 0.75 ± 0.13; HD95 of 11.42 ± 9.87 mm) over the CBCT-only 2D Unet (SDSC of 0.69 ± 0.11; HD95 of 21.70 ± 16.34 mm), 3D Unet (SDSC of 0.72 ± 0.20; HD95 of 15.01 ± 12.98 mm), and DenseFCN (SDSC of 0.66 ± 0.15; HD95 of 22.15 ± 17.19 mm) networks. The alternate framework using UDA with the MRI network was also more accurate than the CBCT-only methods but less accurate than the CMEDL approach.
Our results demonstrate the feasibility of the introduced CMEDL approach for producing reasonably accurate lung cancer segmentations from CBCT images. Further validation on larger datasets is necessary for clinical translation.