Mojtaba Safari, Shansong Wang, Zach Eidex, Qiang Li, Richard L J Qiu, Erik H Middlebrooks, David S Yu, Xiaofeng Yang
Department of Radiation Oncology and Winship Cancer Institute, Emory University, 1365 Clifton Rd NE Building C, Atlanta, Georgia, 30322, United States.
Emory University School of Medicine, 1365 Clifton Rd NE, Atlanta, Georgia, 30322, United States.
Phys Med Biol. 2025 Jun 3. doi: 10.1088/1361-6560/ade049.
Magnetic resonance imaging (MRI) is essential in clinical and research settings, providing exceptional soft-tissue contrast. However, prolonged acquisition times often lead to patient discomfort and motion artifacts. Diffusion-based deep learning super-resolution (SR) techniques reconstruct high-resolution (HR) images from low-resolution (LR) inputs, but they require extensive sampling steps, limiting real-time application. To overcome these issues, this study introduces a residual error-shifting mechanism that markedly reduces the number of sampling steps while preserving vital anatomical detail, thereby accelerating MRI reconstruction.
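The core idea of residual error shifting can be illustrated with a minimal numpy sketch. This is not the paper's exact formulation: the function name, the schedule `eta_t`, and the noise scale `kappa` are assumptions, following the general ResShift-style construction in which the forward process moves the HR image toward the LR image rather than toward pure Gaussian noise.

```python
import numpy as np

def residual_shift_forward(x0, y, eta_t, kappa=1.0, rng=None):
    """One forward step of a residual-shifting diffusion process (sketch).

    Instead of diffusing x0 toward pure noise as in standard DDPM, the
    forward process shifts x0 toward the (upsampled) LR image y by a
    fraction eta_t of the residual e0 = y - x0, plus noise scaled by
    sqrt(eta_t).

    x0, y : HR ground truth and upsampled LR image, same shape.
    eta_t : value from a monotonically increasing schedule in [0, 1];
            near 1, the state is approximately y plus noise, so sampling
            can start from the LR image instead of pure noise.
    kappa : noise-scale hyperparameter (assumed name).
    """
    rng = rng or np.random.default_rng(0)
    e0 = y - x0                               # residual between LR and HR
    noise = rng.standard_normal(x0.shape)
    return x0 + eta_t * e0 + kappa * np.sqrt(eta_t) * noise
```

Because the terminal state of this chain is the LR distribution rather than pure noise, the reverse process only has to bridge the (small) gap between LR and HR images, which is what permits very few sampling steps.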
We developed Res-SRDiff, a novel diffusion-based SR framework that incorporates residual error shifting into the forward diffusion process. This integration aligns the degraded HR and LR distributions, enabling efficient HR image reconstruction. We evaluated Res-SRDiff on ultra-high-field brain T1 MP2RAGE maps and T2-weighted prostate images, benchmarking it against bicubic interpolation, Pix2pix, CycleGAN, SPSR, I2SB, and TM-DDPM. Quantitative assessment employed peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), gradient magnitude similarity deviation (GMSD), and learned perceptual image patch similarity (LPIPS). Additionally, we assessed the framework's individual components qualitatively and quantitatively through an ablation study, and conducted a Likert-based image quality evaluation.
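Two of the reported metrics can be sketched compactly. The PSNR below follows the standard definition; the GMSD is a simplified illustration that uses `np.gradient` in place of the usual Prewitt filters, and the constant `c` is the conventional value for this metric, so treat it as an approximation rather than the evaluation code used in the study.

```python
import numpy as np

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio (dB) between reference and test images."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return np.inf
    return 10.0 * np.log10(data_range ** 2 / mse)

def gmsd(ref, test, c=0.0026):
    """Gradient magnitude similarity deviation (simplified sketch).

    Computes a per-pixel gradient-magnitude similarity map and returns
    its standard deviation; lower values mean the two images have more
    consistent edge structure.
    """
    g_ref = np.hypot(*np.gradient(ref.astype(np.float64)))
    g_test = np.hypot(*np.gradient(test.astype(np.float64)))
    gms = (2.0 * g_ref * g_test + c) / (g_ref ** 2 + g_test ** 2 + c)
    return np.std(gms)
```

For identical images the GMS map is 1 everywhere, so GMSD is 0; structural distortion spreads the map and raises the score.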
Res-SRDiff significantly surpassed most comparison methods in PSNR, SSIM, and GMSD on both datasets, with statistically significant improvements (p ≪ 0.05). The model achieved high-fidelity reconstruction using only four sampling steps, reducing computation time to under one second per slice, whereas the diffusion-based baselines TM-DDPM and I2SB required approximately 20 and 38 seconds per slice, respectively. Qualitative analysis showed that Res-SRDiff effectively preserved fine anatomical details and lesion morphology. In the Likert study, our method received the highest scores: 4.14 ± 0.77 (brain) and 4.80 ± 0.40 (prostate).
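A four-step reverse sampler of the kind reported above can be sketched as follows. Everything here is an assumption for illustration: the schedule values in `etas`, the posterior noise term, and the `predict_x0` stub standing in for the trained denoiser network, which the abstract does not specify.

```python
import numpy as np

def few_step_sample(y, predict_x0, etas=(1.0, 0.6, 0.3, 0.1), kappa=1.0,
                    rng=None):
    """Hedged sketch of a four-step residual-shifting reverse sampler.

    y          : upsampled LR image; the chain starts near y because the
                 forward process terminates at the LR distribution.
    predict_x0 : stand-in for the trained network, mapping (x_t, step)
                 to an estimate of the HR image x0.
    etas       : decreasing residual-shift schedule (assumed values).
    """
    rng = rng or np.random.default_rng(0)
    x = y + kappa * rng.standard_normal(y.shape)   # start at noisy LR image
    for i, eta_t in enumerate(etas):
        eta_prev = etas[i + 1] if i + 1 < len(etas) else 0.0
        x0_hat = predict_x0(x, i)
        # Move part of the way from x_t back toward the predicted x0.
        x = (eta_prev / eta_t) * x + (1.0 - eta_prev / eta_t) * x0_hat
        if eta_prev > 0.0:  # add posterior noise except at the final step
            std = kappa * np.sqrt(eta_prev * (1.0 - eta_prev / eta_t))
            x = x + std * rng.standard_normal(x.shape)
    return x
```

With only four network evaluations per slice, the per-slice cost is dominated by four forward passes of the denoiser, which is consistent with the sub-second runtimes reported above.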
Res-SRDiff demonstrates both efficiency and accuracy, markedly improving computational speed and image quality. Incorporating residual error shifting into diffusion-based SR enables rapid, robust HR image reconstruction, enhancing clinical MRI workflows and advancing medical imaging research. Code is available at https://github.com/mosaf/Res-SRDiff.