Department of Radiology, Columbia University Irving Medical Center, 622 West 168th Street, New York, NY, USA.
Department of Information and Computer Engineering, Chung Yuan Christian University, Chung Li District, 200 Chung Pei Road, Taoyuan City, Taiwan.
J Digit Imaging. 2021 Oct;34(5):1199-1208. doi: 10.1007/s10278-021-00510-w. Epub 2021 Sep 10.
We developed a deep learning-based super-resolution model for prostate MRI. 2D T2-weighted turbo spin echo (T2w-TSE) images are the core anatomical sequences in a multiparametric MRI (mpMRI) protocol. These images have coarse through-plane resolution, are non-isotropic, and have long acquisition times (approximately 10-15 min). The model we developed aims to preserve high-frequency details that are normally lost after 3D reconstruction. We propose a novel framework for generating isotropic volumes using generative adversarial networks (GAN) from anisotropic 2D T2w-TSE and single-shot fast spin echo (ssFSE) images. The CycleGAN model used in this study allows unpaired dataset mapping to reconstruct super-resolution (SR) volumes. Fivefold cross-validation was performed. The improvements from patch-to-volume reconstruction (PVR) to SR are 80.17%, 63.77%, and 186% for perceptual index (PI), RMSE, and SSIM, respectively; the improvements from slice-to-volume reconstruction (SVR) to SR are 72.41%, 17.44%, and 7.5% for PI, RMSE, and SSIM, respectively. Five ssFSE cases were used to test generalizability; the perceptual quality of SR images surpasses that of the in-plane ssFSE images by 37.5%, with a 3.26% improvement in SSIM and a 7.92% higher RMSE. SR images were qualitatively assessed with radiologist Likert scores. Our isotropic SR volumes are able to reproduce high-frequency detail, maintaining image quality comparable to in-plane TSE images in all planes without sacrificing perceptual accuracy. The SR reconstruction networks were also successfully applied to the ssFSE images, demonstrating that high-quality isotropic volumes can be achieved from ultra-fast acquisitions.
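The abstract compares reconstructions by RMSE and SSIM. As a minimal sketch of how such image-quality metrics are computed, the snippet below implements RMSE and a simplified global SSIM with NumPy; the paper's evaluation would use a windowed SSIM (e.g. scikit-image's `structural_similarity`), and the array names here are hypothetical stand-ins, not data from the study.

```python
import numpy as np

def rmse(x, y):
    """Root-mean-square error between two images."""
    return float(np.sqrt(np.mean((x - y) ** 2)))

def global_ssim(x, y, data_range=1.0):
    """Simplified SSIM computed over the whole image (no sliding window).

    Uses the standard SSIM constants C1 = (0.01 L)^2, C2 = (0.03 L)^2,
    where L is the dynamic range of the pixel values.
    """
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float(((2 * mx * my + c1) * (2 * cov + c2))
                 / ((mx**2 + my**2 + c1) * (vx + vy + c2)))

# Hypothetical example: a reference slice vs. a noisy reconstruction.
rng = np.random.default_rng(0)
ref = rng.random((64, 64))                              # stand-in "in-plane TSE" slice
noisy = ref + 0.05 * rng.standard_normal((64, 64))      # stand-in reconstruction

print(rmse(ref, noisy))         # small positive error
print(global_ssim(ref, noisy))  # below 1.0 for a degraded image
```

A perfect reconstruction gives RMSE 0 and SSIM 1; the percentage improvements quoted in the abstract are relative changes in these scores between reconstruction methods.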