Baydoun Atallah, Xu Ke, Heo Jin Uk, Yang Huan, Zhou Feifei, Bethell Latoya A, Fredman Elisha T, Ellis Rodney J, Podder Tarun K, Traughber Melanie S, Paspulati Raj M, Qian Pengjiang, Traughber Bryan J, Muzic Raymond F
Department of Radiation Oncology, University Hospitals Cleveland Medical Center, Cleveland, OH 44106, USA.
School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi 214122, China.
IEEE Access. 2021;9:17208-17221. doi: 10.1109/access.2021.3049781. Epub 2021 Jan 8.
Multi-modality imaging constitutes a foundation of precision medicine, especially in oncology, where reliable and rapid imaging techniques are needed to ensure adequate diagnosis and treatment. In cervical cancer, precision oncology requires the acquisition of 18F-labeled 2-fluoro-2-deoxy-D-glucose (FDG) positron emission tomography (PET), magnetic resonance (MR), and computed tomography (CT) images. Thereafter, the images are co-registered to derive the electron density attributes required for FDG-PET attenuation correction and radiation therapy planning. Nevertheless, this traditional approach is subject to MR-CT registration defects, increases treatment expenses, and raises the patient's radiation exposure. To overcome these disadvantages, we propose a new framework for cross-modality image synthesis, which we apply to MR-to-CT image translation for cervical cancer diagnosis and treatment. The framework is based on a conditional generative adversarial network (cGAN) and illustrates a novel tactic that addresses, simply but efficiently, the trade-off between vanishing gradients and feature extraction in deep learning. Its contributions are summarized as follows: 1) the approach, termed sU-cGAN, uses, for the first time, a shallow U-Net (sU-Net) with an encoder/decoder depth of 2 as the generator; 2) sU-cGAN's input is the same MR sequence that is used for radiological diagnosis, i.e., T2-weighted, Turbo Spin Echo Single Shot (TSE-SSH) MR images; 3) despite limited training data and a single-channel input, sU-cGAN outperforms other state-of-the-art deep learning methods and enables accurate synthetic CT (sCT) generation. In conclusion, the proposed framework merits further study in clinical settings, and the sU-Net model is worth exploring in other computer vision tasks.
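The abstract does not provide implementation details, so the following is only a minimal PyTorch sketch of what a depth-2 ("shallow") U-Net generator with a single MR input channel might look like: two downsampling stages, a bottleneck, and two upsampling stages with skip connections. The layer widths (base=64), instance normalization, and the tanh output head are assumptions for illustration, not the authors' published sU-cGAN architecture.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with instance norm and ReLU, a common U-Net building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.InstanceNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.InstanceNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class ShallowUNet(nn.Module):
    """Hypothetical depth-2 U-Net generator: encoder (2 levels), bottleneck,
    decoder (2 levels) with skip connections; single-channel MR in, single-channel sCT out."""
    def __init__(self, in_ch=1, out_ch=1, base=64):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)           # encoder level 1
        self.enc2 = conv_block(base, base * 2)        # encoder level 2
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(base * 2, base * 4)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, kernel_size=2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)    # skip concat doubles channels
        self.up1 = nn.ConvTranspose2d(base * 2, base, kernel_size=2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.head = nn.Conv2d(base, out_ch, kernel_size=1)

    def forward(self, x):                  # x: (N, 1, H, W) T2-weighted MR slice
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return torch.tanh(self.head(d1))   # sCT in a normalized intensity range
```

In a cGAN setting, a generator of this kind would be trained against a discriminator that judges (MR, CT) pairs; input height and width must be divisible by 4 because of the two pooling stages.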