Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA.
Med Phys. 2021 Jul;48(7):3916-3926. doi: 10.1002/mp.14946. Epub 2021 Jun 2.
Ultrasound (US) imaging has been widely used in diagnosis, image-guided intervention, and therapy, where high-quality three-dimensional (3D) images reconstructed from sparsely acquired two-dimensional (2D) images are highly desirable. This study aims to develop a deep learning-based algorithm that reconstructs high-resolution (HR) 3D US images relying only on the acquired sparsely distributed 2D images.
We propose a self-supervised learning framework using the cycle-consistent generative adversarial network (cycleGAN), in which two independent cycleGAN models are trained with paired original US images and two sets of low-resolution (LR) US images, respectively. The two sets of LR US images are obtained by down-sampling the original US images along the two in-plane axes, respectively. In US imaging, in-plane spatial resolution is generally much higher than through-plane resolution. By learning the mapping from down-sampled in-plane LR images to the original HR US images, each cycleGAN can generate through-plane HR images from the original sparsely distributed 2D images. Finally, HR 3D US images are reconstructed by combining the generated 2D images from the two cycleGAN models.
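The self-supervised data preparation described above can be illustrated with a short sketch. The function below is a hypothetical simplification (the slice-dropping scheme and axis convention are assumptions, not taken from the paper): it emulates sparse through-plane acquisition by keeping every k-th slice along each of the two in-plane axes, yielding the two LR sets that would be paired with the original HR images for training.

```python
import numpy as np

def make_lr_training_sets(volume: np.ndarray, factor: int):
    """Build the two LR training sets by down-sampling a 3D US volume
    along each in-plane axis (here axes 0 and 1; axis 2 is taken as the
    through-plane direction -- an assumed convention for illustration).

    Keeping every `factor`-th slice mimics sparse 2D acquisition, so the
    network can learn the LR -> HR mapping without external atlas images.
    """
    lr_axis0 = volume[::factor, :, :]  # sparse along in-plane axis 0
    lr_axis1 = volume[:, ::factor, :]  # sparse along in-plane axis 1
    return lr_axis0, lr_axis1

vol = np.random.rand(30, 30, 12).astype(np.float32)
lr0, lr1 = make_lr_training_sets(vol, 3)  # enhancement factor of 3
print(lr0.shape, lr1.shape)  # (10, 30, 12) (30, 10, 12)
```

In the paper's framework, each LR set trains its own cycleGAN against the original images; the two trained models then upsample the acquired sparse slices along complementary directions before the outputs are fused into one HR 3D volume.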
The proposed method was assessed on two different datasets. One consists of automatic breast ultrasound (ABUS) images from 70 breast cancer patients; the other was collected from 45 prostate cancer patients. Applying a spatial resolution enhancement factor of 3 to the breast cases, our proposed method achieved a mean absolute error (MAE) of 0.90 ± 0.15, a peak signal-to-noise ratio (PSNR) of 37.88 ± 0.88 dB, and a visual information fidelity (VIF) of 0.69 ± 0.01, significantly outperforming bicubic interpolation. Similar performance was achieved with an enhancement factor of 5 in the breast cases and with enhancement factors of 5 and 10 in the prostate cases.
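For reference, the MAE and PSNR metrics reported above follow standard definitions. The sketch below shows these standard formulas; the intensity range used as `data_range` is an assumption (the paper does not specify it here), and VIF is omitted because its computation is considerably more involved.

```python
import numpy as np

def mae(reference: np.ndarray, estimate: np.ndarray) -> float:
    """Mean absolute error between reference and reconstructed images."""
    diff = reference.astype(np.float64) - estimate.astype(np.float64)
    return float(np.mean(np.abs(diff)))

def psnr(reference: np.ndarray, estimate: np.ndarray,
         data_range: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB; `data_range` is the assumed
    maximum possible intensity (e.g. 255 for 8-bit images)."""
    mse = np.mean((reference.astype(np.float64)
                   - estimate.astype(np.float64)) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / mse))

ref = np.zeros((4, 4))
est = ref + 1.0  # constant error of 1 intensity level
print(round(psnr(ref, est), 2))  # 20*log10(255) ≈ 48.13 dB
```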
We have proposed and investigated a new deep learning-based algorithm for reconstructing HR 3D US images from sparsely acquired 2D images. A significant improvement in through-plane resolution was achieved using only the acquired 2D images, without any external atlas images. Its self-supervision capability could accelerate HR US imaging.