Yang Yulin, Chen Qingqing, Li Yinhao, Wang Fang, Han Xian-Hua, Iwamoto Yutaro, Liu Jing, Lin Lanfen, Hu Hongjie, Chen Yen-Wei
IEEE J Biomed Health Inform. 2024 Aug;28(8):4737-4750. doi: 10.1109/JBHI.2024.3403199. Epub 2024 Aug 6.
Although contrast-enhanced computed tomography (CE-CT) images significantly improve the accuracy of diagnosing focal liver lesions (FLLs), the administration of contrast agents imposes a considerable physical burden on patients. Using generative models to synthesize CE-CT images from non-contrast CT images offers a promising solution. However, existing image synthesis models tend to overlook critical regions, inevitably reducing their effectiveness in downstream tasks. To overcome this challenge, we propose a novel CE-CT image synthesis model, the Segmentation Guided Crossing Dual Decoding Generative Adversarial Network (SGCDD-GAN). Specifically, the SGCDD-GAN comprises a crossing dual decoding generator consisting of an attention decoder and an improved transformation decoder. The attention decoder is designed to highlight critical regions within the abdominal cavity, while the improved transformation decoder is responsible for synthesizing CE-CT images. The two decoders are interconnected through a crossing technique so that each enhances the other. Furthermore, we employ a multi-task learning strategy to guide the generator to focus more on the lesion area. To evaluate the performance of the proposed SGCDD-GAN, we tested it on an in-house CE-CT dataset. In both CE-CT image synthesis tasks, namely synthesizing arterial-phase (ART) images and portal-venous-phase (PV) images, the proposed SGCDD-GAN achieved superior performance across the entire image and the liver region in terms of SSIM, PSNR, MSE, and PCC scores. Furthermore, CE-CT images synthesized by our SGCDD-GAN achieved accuracy rates of 82.68%, 94.11%, and 94.11% in a deep learning-based FLLs classification task, along with a pilot assessment conducted by two radiologists.
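The crossing dual decoding idea described above can be sketched at the data-flow level: a shared encoder feeds two decoders that exchange features at each stage, with the attention branch producing a spatial gate that reweights the synthesis branch, and a multi-task objective combining a synthesis term with a segmentation term. The sketch below is a minimal NumPy illustration of that flow only; the stage depth, gating form, loss choices, and weighting are assumptions for illustration, not the paper's actual architecture or hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_stub(x, out_ch):
    # Stand-in for a learned convolution: random channel projection + ReLU.
    w = rng.standard_normal((x.shape[0], out_ch)) * 0.1
    return np.maximum(x.T @ w, 0).T

# Shared encoder features for one non-contrast CT slice, (channels, pixels).
enc = rng.standard_normal((16, 64))

# Crossing dual decoding: the attention decoder sees both branches' features,
# and its sigmoid gate reweights the transformation decoder's features.
att, syn = enc.copy(), enc.copy()
for _ in range(2):  # two decoding stages (depth is an assumption)
    att = conv_stub(np.concatenate([att, syn], axis=0), 16)
    gate = 1.0 / (1.0 + np.exp(-att.mean(axis=0)))  # spatial attention map
    syn = conv_stub(syn, 16) * gate                 # gated synthesis features

seg_map = 1.0 / (1.0 + np.exp(-att.mean(axis=0)))   # segmentation head
ce_ct = np.tanh(syn.mean(axis=0))                   # synthesized intensities

# Multi-task loss: an L1 synthesis term plus a Dice-style segmentation term
# (both terms and the 0.5 weight are illustrative assumptions).
target_img = rng.standard_normal(64)
target_seg = (rng.random(64) > 0.5).astype(float)
l1 = np.abs(ce_ct - target_img).mean()
dice = 1 - 2 * (seg_map * target_seg).sum() / (seg_map.sum() + target_seg.sum())
loss = l1 + 0.5 * dice
```

The gating step is where the two branches "cross": the attention decoder's output directly modulates which pixels the synthesis branch emphasizes, which is one plausible way to realize the mutual enhancement the abstract describes.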