

Dual U-Net residual networks for cardiac magnetic resonance images super-resolution.

Affiliations

Engineering Research Center of Intelligent Control for Underground Space, Ministry of Education, China University of Mining and Technology, Xuzhou 221116, China; School of Information and Control Engineering, China University of Mining and Technology, Xuzhou 221116, China.


Publication information

Comput Methods Programs Biomed. 2022 May;218:106707. doi: 10.1016/j.cmpb.2022.106707. Epub 2022 Feb 23.

Abstract

BACKGROUND AND OBJECTIVE

Heart disease is a serious threat to human health and a leading cause of death worldwide. Moreover, under the influence of recent health-related factors, its incidence continues to rise. Cardiac magnetic resonance (CMR) imaging provides comprehensive structural and functional information about the heart and has become an important tool for the diagnosis and treatment of heart disease. Improving the resolution of CMR images therefore has significant medical value for diagnosis and condition assessment. At present, most single-image super-resolution (SISR) reconstruction methods suffer from serious problems, such as insufficient mining of feature information, difficulty in modelling the dependence among the channels of a feature map, and reconstruction errors when producing high-resolution images.

METHODS

To solve these problems, we propose and implement a dual U-Net residual network (DURN) for super-resolution of CMR images. Specifically, we first propose a U-Net residual network (URN) model, which is divided into an up-branch and a down-branch. The up-branch is composed of residual blocks and up-blocks that extract and upsample deep features; the down-branch is composed of residual blocks and down-blocks that extract and downsample deep features. Building on the URN model, we construct the dual U-Net residual network (DURN) model, which combines the deep features extracted at the same positions of the first and second URN through residual connections. This allows the second URN to make full use of the features extracted by the first URN and to extract deeper features from the low-resolution image.
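The dual-branch coupling described above can be sketched in toy form. The following numpy code is our own illustrative approximation, not the paper's implementation: the pointwise residual block, the pooling/nearest-neighbour resampling blocks, and all function names are assumptions standing in for the convolutional blocks the abstract describes. It only shows how features from matching positions of the first URN are fed into the second URN through residual connections.

```python
import numpy as np

def residual_block(x, w):
    # Toy residual block: a pointwise transform with ReLU plus an
    # identity skip connection (stand-in for a convolutional block).
    return x + np.maximum(w * x, 0.0)

def up_block(x):
    # 2x upsampling by nearest-neighbour repetition.
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

def down_block(x):
    # 2x downsampling by stride-2 average pooling.
    return 0.25 * (x[::2, ::2] + x[1::2, ::2] + x[::2, 1::2] + x[1::2, 1::2])

def urn(x, w, skips_in=None):
    # One U-Net residual branch: an up stage followed by a down stage.
    # Returns the output and the deep features produced at each stage,
    # so a second branch can reuse them at the same positions.
    feats = []
    h = up_block(residual_block(x, w))
    if skips_in is not None:
        h = h + skips_in[0]          # residual link from the first URN
    feats.append(h)
    h = down_block(residual_block(h, w))
    if skips_in is not None:
        h = h + skips_in[1]
    feats.append(h)
    return h, feats

def durn(x, w=0.1):
    # Dual URN: the second branch receives the first branch's features
    # at matching positions through residual connections.
    h1, feats1 = urn(x, w)
    h2, _ = urn(h1, w, skips_in=feats1)
    return h2
```

Because each URN upsamples and then downsamples by the same factor, the intermediate features of the two branches line up shape-for-shape, which is what makes the position-wise residual connections possible.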

RESULTS

When the scale factors are 2, 3, and 4, our DURN can obtain 37.86 dB, 33.96 dB, and 31.65 dB on the Set5 dataset, which shows (i) a maximum improvement of 4.17 dB, 3.55 dB, and 3.22 dB over the Bicubic algorithm, and (ii) a minimum improvement of 0.34 dB, 0.14 dB, and 0.11 dB over the LapSRN algorithm.
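The dB figures above are peak signal-to-noise ratio (PSNR) values. As a minimal illustration of how such numbers are computed (the image data here is synthetic, not from Set5), PSNR is 10·log10(MAX²/MSE) between a reference image and its reconstruction:

```python
import numpy as np

def psnr(ref, img, max_val=255.0):
    # Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE).
    diff = ref.astype(np.float64) - img.astype(np.float64)
    mse = np.mean(diff ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

# Synthetic example: a random 8-bit image degraded by mild Gaussian noise.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
degraded = np.clip(ref + rng.normal(0.0, 5.0, size=ref.shape), 0.0, 255.0)
value = psnr(ref, degraded)
```

A higher PSNR means the reconstruction is closer to the ground-truth high-resolution image, so the per-scale gains over Bicubic and LapSRN reported above are improvements in this quantity.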

CONCLUSION

Comprehensive experimental results on benchmark datasets demonstrate that our proposed DURN not only achieves better peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) values than other state-of-the-art SR algorithms, but also reconstructs clearer super-resolution CMR images with richer details, edges, and textures.

