A new generative adversarial network for medical images super resolution.

Affiliations

Department of Electrical and Computer Engineering, COMSATS University Islamabad, Abbottabad Campus, Abbottabad, Pakistan.

National Center of Artificial Intelligence, Peshawar, Pakistan.

Publication information

Sci Rep. 2022 Jun 9;12(1):9533. doi: 10.1038/s41598-022-13658-4.

Abstract

For medical image analysis, there is always an immense need for rich detail in an image. Typically, diagnosis is best served if the fine details in the image are retained and the image is available in high resolution. In medical imaging, acquiring high-resolution images is challenging and costly, as it requires sophisticated and expensive instruments and trained human resources, and it often causes operational delays. Deep learning based super-resolution techniques can help extract rich details from a low-resolution image acquired with existing devices. In this paper, we propose a new Generative Adversarial Network (GAN) based architecture for medical images, which maps low-resolution medical images to high-resolution images. The proposed architecture is divided into three steps. In the first step, we use a multi-path architecture to extract shallow features at multiple scales instead of a single scale. In the second step, we use a ResNet34 architecture to extract deep features and upscale the feature map by a factor of two. In the third step, we extract features of the upscaled image using a residual connection-based mini-CNN and again upscale the feature map by a factor of two. This progressive upscaling overcomes the limitation of previous methods in generating true colors. Finally, we use a reconstruction convolutional layer to map the upscaled features back to a high-resolution image. The addition of an extra loss term helps in overcoming large errors, thus generating more realistic and smooth images. We evaluate the proposed architecture on four different medical image modalities: (1) the DRIVE and STARE datasets of retinal fundoscopy images, (2) the BraTS dataset of brain MRI, (3) the ISIC skin cancer dataset of dermoscopy images, and (4) the CAMUS dataset of cardiac ultrasound images. The proposed architecture achieves superior accuracy compared to other state-of-the-art super-resolution architectures.
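The abstract only outlines the generator at a high level, so the following is a minimal, hedged sketch of the described three-step design rather than the authors' implementation: a multi-path shallow feature extractor, a deep residual trunk with a first ×2 upscale, a residual mini-CNN with a second ×2 upscale, and a final reconstruction convolution. The layer widths, kernel sizes, the 8-block residual trunk standing in for ResNet34, and the use of PixelShuffle upsampling are illustrative assumptions; the extra loss term and the discriminator are not shown because the abstract does not specify them.

```python
# Hedged sketch (not the authors' code) of the three-step progressive SR generator
# described in the abstract. All hyperparameters are assumptions for illustration.
import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    """Plain residual block standing in for the ResNet34-style trunk."""
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)


class ProgressiveSRGenerator(nn.Module):
    def __init__(self, in_ch: int = 3, feats: int = 64):
        super().__init__()
        # Step 1: multi-path shallow features at several receptive-field scales.
        self.path3 = nn.Conv2d(in_ch, feats, 3, padding=1)
        self.path5 = nn.Conv2d(in_ch, feats, 5, padding=2)
        self.path7 = nn.Conv2d(in_ch, feats, 7, padding=3)
        self.fuse = nn.Conv2d(3 * feats, feats, 1)

        # Step 2: deep residual trunk (stand-in for ResNet34) + first x2 upscale.
        self.deep = nn.Sequential(*[ResidualBlock(feats) for _ in range(8)])
        self.up1 = nn.Sequential(nn.Conv2d(feats, feats * 4, 3, padding=1),
                                 nn.PixelShuffle(2), nn.ReLU(inplace=True))

        # Step 3: residual mini-CNN on the upscaled map + second x2 upscale.
        self.mini = ResidualBlock(feats)
        self.up2 = nn.Sequential(nn.Conv2d(feats, feats * 4, 3, padding=1),
                                 nn.PixelShuffle(2), nn.ReLU(inplace=True))

        # Reconstruction layer maps the upscaled features back to an image.
        self.recon = nn.Conv2d(feats, in_ch, 3, padding=1)

    def forward(self, lr):
        shallow = self.fuse(torch.cat(
            [self.path3(lr), self.path5(lr), self.path7(lr)], dim=1))
        x = self.up1(self.deep(shallow) + shallow)  # x2 upscale
        x = self.up2(self.mini(x))                  # x4 total
        return self.recon(x)


if __name__ == "__main__":
    sr = ProgressiveSRGenerator()(torch.randn(1, 3, 64, 64))
    print(sr.shape)  # torch.Size([1, 3, 256, 256])
```

The two-stage ×2 upscaling mirrors the "progressive upscaling" the abstract credits with better color fidelity; a single ×4 jump would be the alternative this design avoids.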

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1c48/9184641/d746a9366ca6/41598_2022_13658_Fig1_HTML.jpg
