Yu Yarong, Wu Dijia, Lan Ziting, Dai Xiaoting, Yang Wenli, Yuan Jiajun, Xu Zhihan, Wang Jiayu, Tao Ze, Ling Runjianya, Zhang Su, Zhang Jiayin
Department of Radiology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China.
Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China.
Eur Radiol. 2024 Dec 20. doi: 10.1007/s00330-024-11288-0.
To develop and validate deep learning (DL) models that denoise late iodine enhancement (LIE) images and enable accurate extracellular volume (ECV) quantification.
This study retrospectively included patients with chest discomfort who underwent CT myocardial perfusion + CT angiography + LIE at two hospitals. Two DL models, a residual dense network (RDN) and a conditional generative adversarial network (cGAN), were developed and validated. In total, 423 patients were randomly divided into training (182 patients), tuning (48 patients), internal validation (92 patients), and external validation (101 patients) groups. Four LIE image sets were generated: LIE_single (single-stack image), LIE_avg (average of multiple stacked images), LIE_RDN (single-stack image denoised by the RDN), and LIE_cGAN (single-stack image denoised by the cGAN). We compared the image quality score, signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR) of the four LIE sets. The ability of the denoised images to identify positive LIE and increased ECV (> 30%) was assessed.
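The abstract does not state how SNR and CNR were measured. A minimal sketch, assuming the common ROI-based convention (SNR = mean attenuation of the enhanced region over the standard deviation of a noise region; CNR = the attenuation difference between enhanced and remote myocardium over the same noise SD); the function names and ROI choices here are illustrative, not the authors' protocol:

```python
import numpy as np

def snr(roi_signal: np.ndarray, roi_noise: np.ndarray) -> float:
    """Signal-to-noise ratio: mean ROI attenuation over noise standard deviation."""
    return float(np.mean(roi_signal) / np.std(roi_noise))

def cnr(roi_signal: np.ndarray, roi_remote: np.ndarray, roi_noise: np.ndarray) -> float:
    """Contrast-to-noise ratio: attenuation difference between the enhanced
    and remote myocardial ROIs, divided by the noise standard deviation."""
    return float((np.mean(roi_signal) - np.mean(roi_remote)) / np.std(roi_noise))
```

Under this convention, a denoising model raises SNR and CNR mainly by shrinking the noise SD in the denominator while preserving mean attenuation values.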
The image quality of LIE_cGAN (SNR: 13.3 ± 1.9; CNR: 4.5 ± 1.1) and LIE_RDN (SNR: 20.5 ± 4.7; CNR: 7.5 ± 2.3) images was markedly better than that of LIE_single (SNR: 4.4 ± 0.7; CNR: 1.6 ± 0.4). At the per-segment level, the area under the curve (AUC) of LIE_RDN images for LIE evaluation was significantly higher than those of LIE_cGAN and LIE_single images (p = 0.040 and p < 0.001, respectively). Likewise, the AUC and accuracy of ECV_RDN were significantly higher than those of ECV_cGAN and ECV_single at the per-segment level (p < 0.001 for all).
The RDN model generated denoised LIE images with markedly higher SNR and CNR than both the cGAN model and the original images, which significantly improved identifiability on visual analysis. Moreover, using denoised single-stack images enabled accurate CT-ECV quantification.
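The abstract does not spell out the ECV calculation. A minimal sketch of the widely used CT-ECV equation, ECV = (1 − hematocrit) × (ΔHU_myocardium / ΔHU_blood), where ΔHU is the attenuation change from pre-contrast to late enhancement; whether this exact formulation matches the authors' implementation is an assumption:

```python
def ct_ecv(hu_myo_pre: float, hu_myo_late: float,
           hu_blood_pre: float, hu_blood_late: float,
           hematocrit: float) -> float:
    """CT extracellular volume fraction, returned as a percentage.

    ECV = (1 - Hct) * (delta-HU of myocardium / delta-HU of blood pool) * 100,
    with delta-HU taken between the late iodine enhancement and
    pre-contrast acquisitions.
    """
    delta_myo = hu_myo_late - hu_myo_pre
    delta_blood = hu_blood_late - hu_blood_pre
    return (1.0 - hematocrit) * (delta_myo / delta_blood) * 100.0
```

Because ΔHU of the myocardium is small, noise in the LIE image propagates directly into the numerator, which is why denoising the single-stack image matters for accurate ECV quantification.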
Question: Can the developed models denoise CT-derived late iodine enhancement images and improve the signal-to-noise ratio? Findings: The residual dense network model significantly improved image quality for late iodine enhancement and enabled accurate CT extracellular volume quantification. Clinical relevance: The residual dense network model generates denoised late iodine enhancement images with the highest signal-to-noise ratio and enables accurate quantification of extracellular volume.