
ADE-CycleGAN: A Detail Enhanced Image Dehazing CycleGAN Network.

Affiliation

School of Electronic Engineering, Xi'an Shiyou University, Xi'an 710065, China.

Publication

Sensors (Basel). 2023 Mar 21;23(6):3294. doi: 10.3390/s23063294.

Abstract

The preservation of image detail during defogging remains a key challenge in deep-learning-based dehazing. The CycleGAN network uses a generative adversarial loss and a cycle-consistency loss to ensure that the generated defogged image is similar to the original image, but these losses alone cannot retain image detail. To this end, we propose a detail-enhanced image dehazing CycleGAN (ADE-CycleGAN) that retains detail information during defogging. First, the algorithm uses the CycleGAN network as its basic framework and combines it with the U-Net idea to extract visual features from different spatial regions of the image in multiple parallel branches, introducing Dep residual blocks to learn deeper feature information. Second, a multi-head attention mechanism is introduced in the generator to strengthen the expressive ability of features and to balance the deviation produced by a single attention mechanism. Finally, experiments are carried out on the public dataset D-Hazy. Compared with the baseline CycleGAN, the proposed network improves the SSIM and PSNR of the dehazed images by 12.2% and 8.1%, respectively, while retaining image detail.
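For readers unfamiliar with the cycle-consistency loss the abstract refers to, it can be sketched as follows. This is a minimal illustration of the standard CycleGAN objective, not the paper's implementation; the toy mappings `G` and `F` stand in for the hazy-to-clear and clear-to-hazy generators.

```python
import numpy as np

def cycle_consistency_loss(x, y, G, F):
    """L1 cycle-consistency loss used in CycleGAN-style training.

    G maps domain X (hazy) -> Y (clear); F maps Y -> X.
    Translating an image through both generators should recover it:
    F(G(x)) ~ x and G(F(y)) ~ y.
    """
    forward = np.mean(np.abs(F(G(x)) - x))   # x -> G(x) -> F(G(x)) should match x
    backward = np.mean(np.abs(G(F(y)) - y))  # y -> F(y) -> G(F(y)) should match y
    return forward + backward

# Toy linear "generators" (placeholders for the real networks):
G = lambda a: 2.0 * a   # X -> Y
F = lambda a: 0.5 * a   # Y -> X; here an exact inverse of G
x = np.ones((4, 4))
y = np.ones((4, 4))
print(cycle_consistency_loss(x, y, G, F))  # exact inverses -> 0.0
```

Because this loss only constrains the round-trip reconstruction, it does not by itself preserve fine detail, which is the gap the proposed detail-enhancement branches and attention mechanism address.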


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5db3/10054719/09feb5c19f84/sensors-23-03294-g001.jpg
