

Fusing residual dense and attention in generative adversarial networks for super-resolution of medical images.

Authors

Zhang Qiong, Min Byungwon, Hang Yiliu, Chen Hao, Qiu Jianlin

Affiliations

College of Yonyou Digital & Intelligence, Nantong Institute of Technology, Nantong, China.

Division of Information and Communication Convergence Engineering, Mokwon University, Daejeon, Korea.

Publication Information

Quant Imaging Med Surg. 2025 Jun 6;15(6):5047-5059. doi: 10.21037/qims-2025-3. Epub 2025 Jun 3.

Abstract

BACKGROUND

The resolution of clinical medical images may be reduced by differences in radiation dose, sampling equipment, and storage methods. Low-resolution (LR) medical images blur lesion features and reduce the diagnostic accuracy of clinicians. To address this issue, we propose FRDAGAN, a generative adversarial network that fuses residual dense blocks and attention, for the super-resolution (SR) of medical images.

METHODS

The FRDAGAN method is based on the generative adversarial network (GAN). First, residual dense blocks fully exploit the feature information of different layers and mitigate gradient decay. Second, an attention gate (AG) network suppresses noise and improves the signal-to-noise ratio. Finally, a hybrid loss function prevents vanishing or exploding gradients during training.
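To make the AG step concrete, the following is a minimal per-pixel sketch of a standard additive attention gate (in the style of Attention U-Net), with 1×1 convolutions modeled as channel-wise matrix multiplications; the function and weight names are illustrative assumptions, and the paper's exact AG configuration may differ.

```python
import numpy as np

def attention_gate(x, g, W_x, W_g, psi):
    """Gate skip features x with gating features g.

    x, g : (H, W, C) feature maps
    W_x, W_g : (C, F) weights of 1x1 convolutions to an intermediate dim F
    psi : (F, 1) weights mapping to a single attention channel
    """
    # Additive attention followed by ReLU
    q = np.maximum(x @ W_x + g @ W_g, 0.0)          # (H, W, F)
    # Sigmoid yields attention coefficients in (0, 1)
    alpha = 1.0 / (1.0 + np.exp(-(q @ psi)))        # (H, W, 1)
    # Rescale the skip features: low-alpha (noisy) regions are suppressed
    return x * alpha                                # (H, W, C)
```

Because the coefficients lie strictly in (0, 1), the gate can only attenuate, never amplify, the skip features, which is how it suppresses noise before fusion.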

RESULTS

On the Luna16 dataset, the single-image quantitative SR results, reported as peak signal-to-noise ratio (PSNR)/mean structural similarity index measure (SSIM), were 31.257/0.964, 33.558/0.968, and 34.201/0.882, respectively. On the brain magnetic resonance imaging (MRI) dataset, the corresponding values were 34.220/0.874, 35.735/0.885, and 35.854/0.908. The proposed method showed a clear improvement over the other methods on the Luna16 and brain MRI test sets, with PSNR/SSIM values of 33.005±0.157/0.938±0.028 and 35.270±0.183/0.889±0.024, respectively. Luna16 is available at https://luna16.grand-challenge.org/Download/, and the brain MRI dataset at https://www.kaggle.com/datasets/mateuszbuda/lgg-mri-segmentation.
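For reference, the PSNR values above follow the standard definition, 10·log10(MAX²/MSE); the sketch below shows that computation (it is not the authors' evaluation code, and `max_val` must match the image's dynamic range).

```python
import numpy as np

def psnr(ref, test, max_val=1.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((np.asarray(ref, dtype=np.float64)
                   - np.asarray(test, dtype=np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

For example, a uniform error of 0.1 on images normalized to [0, 1] gives an MSE of 0.01 and therefore a PSNR of exactly 20 dB.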

CONCLUSIONS

The FRDAGAN method achieved better PSNR and SSIM than the other traditional methods, and was more stable, with faster convergence of the loss function. The results show that FRDAGAN is effective and advanced.


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c6fa/12209655/a71eca1bdc0f/qims-15-06-5047-f1.jpg
