Jue Jiang, Jason Hu, Neelam Tyagi, Andreas Rimner, Sean L. Berry, Joseph O. Deasy, Harini Veeraraghavan
Medical Physics, Memorial Sloan Kettering Cancer Center.
Radiation Oncology, Memorial Sloan Kettering Cancer Center.
Med Image Comput Comput Assist Interv. 2019 Oct;11769:221-229. doi: 10.1007/978-3-030-32226-7_25. Epub 2019 Oct 10.
Lung tumors, especially those located close to or surrounded by soft tissues such as the mediastinum, are difficult to segment because of the low soft-tissue contrast of computed tomography (CT) images. Magnetic resonance (MR) images provide superior soft-tissue contrast that can be leveraged when both modalities are available for training. We therefore developed a cross-modality educed learning approach in which MR information educed from CT is used to hallucinate MRI and improve CT segmentation. Our approach, called cross-modality educed deep learning segmentation (CMEDL), combines CT and pseudo MR produced from CT by aligning their features to obtain a segmentation on CT. Features computed in the last two layers of CT and MR segmentation networks trained in parallel are aligned. We implemented this approach with a U-net and a dense fully convolutional network (dense-FCN). Our networks were trained on unrelated cohorts of open-source CT images from The Cancer Imaging Archive (N=377) and T2-weighted MR images from an internal archive (N=81), and were evaluated on separate validation (N=304) and test (N=333) sets of CT-delineated tumors. With both network architectures, our approach was significantly more accurate than CT-only networks (U-net p < 0.001; dense-FCN p < 0.001), achieving Dice similarity coefficients of 0.71±0.15 (U-net) and 0.74±0.12 (dense-FCN) on the validation set and 0.72±0.14 (U-net) and 0.73±0.12 (dense-FCN) on the test set. Our novel approach demonstrates that educing cross-modality information through learned priors enhances CT segmentation performance.
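The abstract states that features from the last two layers of CT and pseudo-MR segmentation networks trained in parallel are aligned. The sketch below is a minimal, hedged illustration of that idea in PyTorch and is not the authors' implementation: TinySegNet, cmedl_losses, the L2 form of the alignment term, and the weight lam are illustrative assumptions, and the pseudo MR would in practice come from a separately trained CT-to-MR translation network rather than a random tensor.

# Minimal sketch (assumed form, not the authors' code) of CMEDL-style
# feature alignment between a CT branch and a pseudo-MR branch.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySegNet(nn.Module):
    """Toy encoder-decoder standing in for the U-net / dense-FCN branches."""
    def __init__(self, in_ch=1, n_classes=2):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.penultimate = nn.Sequential(nn.Conv2d(32, 32, 3, padding=1), nn.ReLU())
        self.last = nn.Sequential(nn.Conv2d(32, 16, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(16, n_classes, 1)

    def forward(self, x):
        h = self.enc(x)
        f1 = self.penultimate(h)   # feature map from the second-to-last layer
        f2 = self.last(f1)         # feature map from the last layer
        return self.head(f2), (f1, f2)

def cmedl_losses(ct_img, pmr_img, label, ct_net, mr_net, lam=0.1):
    """Segmentation losses on both branches plus an L2 alignment term
    over the last two layers' feature maps (assumed form of the loss)."""
    ct_logits, ct_feats = ct_net(ct_img)
    mr_logits, mr_feats = mr_net(pmr_img)
    seg = F.cross_entropy(ct_logits, label) + F.cross_entropy(mr_logits, label)
    align = sum(F.mse_loss(fc, fm.detach()) for fc, fm in zip(ct_feats, mr_feats))
    return seg + lam * align

# Usage on dummy tensors; pmr stands in for the hallucinated MR image.
ct = torch.randn(2, 1, 64, 64)
pmr = torch.randn(2, 1, 64, 64)
y = torch.randint(0, 2, (2, 64, 64))
ct_net, mr_net = TinySegNet(), TinySegNet()
loss = cmedl_losses(ct, pmr, y, ct_net, mr_net)
loss.backward()

In this sketch the pseudo-MR features are detached, so the alignment term only pulls the CT features toward the MR-branch features; whether gradients flow into both branches is a design choice the abstract does not specify.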