CNN-Transformer gated fusion network for medical image super-resolution.

Authors

Qin Juanjuan, Xiong Jian, Liang Zhantu

Affiliation

Department of Artificial Intelligence and Data Science, Guangzhou Xinhua University, Dongguan, 523133, Guangdong, China.

Publication

Sci Rep. 2025 May 2;15(1):15338. doi: 10.1038/s41598-025-00119-x.

Abstract

To address blurred image detail and insufficient use of global information in existing medical image super-resolution reconstruction, this paper proposes a dual-branch fusion network (CTGFSR) based on a residual Transformer network and a dynamic convolutional neural network. The network consists of two branches: a global branch built on the residual Transformer network and a local branch built on the dynamic convolutional neural network. The global branch uses the Transformer's self-attention mechanism to effectively mine large-scale global information in the image and improve its overall quality. The local branch exploits dynamic convolution's ability to adaptively adjust convolution-kernel parameters, strengthening the CNN's extraction of multi-scale features and improving detail restoration without significantly increasing model size. Residual skip connections preserve detail information throughout the reconstruction. Finally, a bidirectional gated attention mechanism fuses the two branches to produce the final super-resolution image. The network is evaluated on two medical image datasets: the ACDC cardiac MR dataset (associated with medical image segmentation) and the L2R2022 lung CT dataset (associated with registration). Experimental results show that CTGFSR outperforms mainstream super-resolution algorithms overall: at magnification factors of 2 and 4, it improves SSIM (structural similarity) and PSNR (peak signal-to-noise ratio) over the CNN-based CFIPC, PDCNCF, ESPCN, FSRCNN, and VDSR, and over the Transformer-based ESRT and SwinIR.
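The abstract does not give the local branch's equations, but dynamic convolution is commonly realized as an input-conditioned mixture of K kernel bases, where attention weights computed from a pooled summary of the input select the effective kernel. The following single-channel NumPy sketch is only illustrative; the kernel bases, the attention weights `attn_w`, and the use of global average pooling are all assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# K kernel bases; the effective kernel is their input-conditioned mixture.
K, ksize = 4, 3
bases = rng.standard_normal((K, ksize, ksize))

def dynamic_conv2d(img, bases, attn_w):
    """Single-channel dynamic convolution with valid padding.

    Mixture weights come from global average pooling of the input,
    so the aggregated kernel adapts per image.
    """
    pi = softmax(attn_w @ np.array([img.mean()]))   # (K,) mixture weights
    kernel = np.tensordot(pi, bases, axes=1)        # aggregated (ksize, ksize) kernel
    H, W = img.shape
    out = np.empty((H - ksize + 1, W - ksize + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + ksize, j:j + ksize] * kernel)
    return out, pi

img = rng.standard_normal((8, 8))
attn_w = rng.standard_normal((K, 1))
out, pi = dynamic_conv2d(img, bases, attn_w)
print(out.shape)  # (6, 6)
```

Because only the small attention head depends on the input, the mixture adds few parameters relative to K static kernels, which is consistent with the abstract's claim of better multi-scale feature extraction without a significant growth in model size.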

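The abstract describes the bidirectional gated attention fusion only at a high level. One common form of such a fusion computes, for each branch, a sigmoid gate conditioned on both branches and sums the gated streams. The NumPy sketch below follows that pattern; the 1x1-style projection matrices `w_g2l` and `w_l2g`, the shapes, and the additive combination are hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Feature maps from the two branches, laid out as (C, H, W).
C, H, W = 8, 16, 16
f_global = rng.standard_normal((C, H, W))  # Transformer (global) branch
f_local  = rng.standard_normal((C, H, W))  # dynamic-CNN (local) branch

# Hypothetical per-pixel gate projections, one per direction, each
# conditioned on the concatenation of both branches' features.
w_g2l = rng.standard_normal((C, 2 * C)) * 0.1
w_l2g = rng.standard_normal((C, 2 * C)) * 0.1

concat = np.concatenate([f_global, f_local], axis=0).reshape(2 * C, -1)

gate_g = sigmoid(w_g2l @ concat).reshape(C, H, W)  # modulates global features
gate_l = sigmoid(w_l2g @ concat).reshape(C, H, W)  # modulates local features

# Each branch is re-weighted by a gate that has seen both branches,
# then the gated streams are summed to form the fused features.
fused = gate_g * f_global + gate_l * f_local
print(fused.shape)  # (8, 16, 16)
```

In a full model the fused features would then pass through an upsampling head (e.g. pixel shuffle) to produce the super-resolved image; that stage is omitted here.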

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ef74/12048642/664753f7ac2c/41598_2025_119_Fig1_HTML.jpg
