Towards High-Quality MRI Reconstruction With Anisotropic Diffusion-Assisted Generative Adversarial Networks and Its Multi-Modal Images Extension.

Author Information

Luo Yuyang, Wu Gengshen, Liu Yi, Liu Wenjian, Han Jungong

Publication Information

IEEE J Biomed Health Inform. 2025 May;29(5):3098-3111. doi: 10.1109/JBHI.2024.3436714. Epub 2025 May 6.

Abstract

Recently, fast Magnetic Resonance Imaging (MRI) reconstruction technology has emerged as a promising way to improve the clinical diagnostic experience by significantly reducing scan times. While existing studies have used Generative Adversarial Networks to achieve impressive results in reconstructing MR images, they still suffer from challenges such as blurred regions/boundaries and abnormal spots caused by unavoidable noise in the reconstruction process. To this end, we propose a novel deep framework termed Anisotropic Diffusion-Assisted Generative Adversarial Networks, which aims to preserve as much valid high-frequency information and structural detail as possible while minimizing noise in reconstructed images by optimizing a joint loss function in a unified framework, thereby enabling more authentic and accurate MR image generation. To specifically handle unforeseeable noise, an Anisotropic Diffused Reconstruction Module is developed and attached alongside the backbone network as a denoising assistant; it improves the final image quality by minimizing reconstruction losses between targets and iteratively denoised generative outputs, and it adds no extra computational complexity during the testing phase. To make the most of valuable MRI data, we further extend the framework to multi-modal learning, boosting reconstructed image quality by aggregating valid information from images of diverse modalities. Extensive experiments on public datasets show that the proposed framework achieves superior performance in improving the quality of reconstructed MR images. For example, the proposed method obtains average PSNR and mSSIM values of 35.785 dB and 0.9765 on the MRNet dataset, which are at least about 2.9 dB and 0.07 higher than those of the baselines.
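The abstract does not spell out the formulation of the Anisotropic Diffused Reconstruction Module. As a rough illustration only, the sketch below implements classical Perona-Malik anisotropic diffusion, the standard edge-preserving iterative denoiser that the module's name alludes to; the function name and all parameters (num_iters, kappa, gamma) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def perona_malik_denoise(img, num_iters=10, kappa=0.05, gamma=0.2):
    """Illustrative sketch: classical Perona-Malik anisotropic diffusion,
    which smooths flat regions while preserving edges. The paper's module
    may use a different conductance function, step size, or iteration count."""
    u = img.astype(np.float64).copy()
    for _ in range(num_iters):
        # Finite differences toward the four neighbours (wrap-around borders
        # via np.roll, a common simplification for a sketch)
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # Edge-stopping conductance g(|grad|) = exp(-(|grad| / kappa)^2):
        # small near edges (little smoothing), close to 1 in flat regions
        cn = np.exp(-(dn / kappa) ** 2)
        cs = np.exp(-(ds / kappa) ** 2)
        ce = np.exp(-(de / kappa) ** 2)
        cw = np.exp(-(dw / kappa) ** 2)
        # Explicit Euler update of the diffusion PDE
        u += gamma * (cn * dn + cs * ds + ce * de + cw * dw)
    return u

# Hypothetical usage: denoise a generator output before computing an
# auxiliary reconstruction loss against the fully sampled target image.
# recon = generator(undersampled_input)            # assumed GAN output
# loss_aux = np.mean((perona_malik_denoise(recon) - target) ** 2)
```

Consistent with the abstract, such an iteratively denoised output would only be used to form an auxiliary reconstruction loss during training, which is why the module adds no extra computational cost at test time.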
