Jiang Xin, Zhao Chunlei, Zhu Ming, Hao Zhicheng, Gao Wen
Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China.
University of Chinese Academy of Sciences, Beijing 100049, China.
Sensors (Basel). 2021 Nov 27;21(23):7922. doi: 10.3390/s21237922.
Single image dehazing is a highly challenging, ill-posed problem. Existing methods, both prior-based and learning-based, rely heavily on the conceptually simplified atmospheric scattering model, estimating the so-called medium transmission map and atmospheric light. However, haze formation in the real world is far more complicated, and inaccurate estimates further degrade dehazing performance, causing color distortion, artifacts, and insufficient haze removal. Moreover, most dehazing networks treat spatial-wise and channel-wise features equally, yet haze is in practice unevenly distributed across an image, so regions with different haze concentrations require different degrees of attention. To address these problems, we propose an end-to-end trainable, densely connected residual spatial and channel attention network built on the conditional generative adversarial framework, which directly restores a haze-free image from a hazy input without explicitly estimating any atmospheric scattering parameters. Specifically, we propose a novel residual attention module that combines spatial and channel attention mechanisms and adaptively recalibrates spatial-wise and channel-wise feature weights by modeling the interdependencies among spatial and channel information. This mechanism allows the network to concentrate on the most useful pixels and channels. Meanwhile, dense connections maximize the information flow among features from different levels, encouraging feature reuse and strengthening feature propagation. In addition, the network is trained with a multi-term loss function, in which newly refined contrastive and registration losses help restore sharper structures and ensure better visual quality. Experimental results demonstrate that the proposed method achieves state-of-the-art performance on both public synthetic datasets and real-world images, producing more visually pleasing dehazed results.
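The abstract gives no implementation details, but a minimal PyTorch sketch of a residual block that chains channel attention and spatial attention, in the spirit of the module described above, might look as follows. The class names, layer widths, and reduction ratio are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style channel attention (illustrative)."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)      # global spatial squeeze -> (B, C, 1, 1)
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),                        # per-channel weights in (0, 1)
        )

    def forward(self, x):
        return x * self.fc(self.pool(x))         # recalibrate channel responses

class SpatialAttention(nn.Module):
    """Pixel-wise attention so heavily hazed regions can receive more weight."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels // 8, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 8, 1, kernel_size=1),
            nn.Sigmoid(),                        # one weight per spatial location
        )

    def forward(self, x):
        return x * self.conv(x)                  # recalibrate spatial locations

class ResidualAttentionBlock(nn.Module):
    """Conv layers followed by channel + spatial attention, with a skip path."""
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )
        self.ca = ChannelAttention(channels)
        self.sa = SpatialAttention(channels)

    def forward(self, x):
        out = self.body(x)
        out = self.sa(self.ca(out))              # sequential feature recalibration
        return x + out                           # residual connection

# Quick shape check:
# block = ResidualAttentionBlock(64)
# block(torch.randn(1, 64, 128, 128)).shape  -> torch.Size([1, 64, 128, 128])
```

In a densely connected generator as described in the abstract, the outputs of several such blocks would be concatenated and fed forward so that later layers can reuse earlier features; the exact wiring is not specified in the abstract.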
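The abstract names contrastive and registration losses without giving formulas. The sketch below shows one common way a contrastive dehazing loss can be formed: pulling the restored image toward the clear target (positive) and pushing it away from the hazy input (negative) in a shared feature space. The ratio formulation and the frozen `feature_extractor` (for example, layers of a pretrained VGG) are assumptions for illustration, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def contrastive_dehazing_loss(restored, clear, hazy, feature_extractor):
    """One plausible contrastive loss for dehazing (illustrative only).

    The restored image should lie close to the clear ground truth (positive)
    and far from the hazy input (negative) in a shared feature space.
    """
    f_restored = feature_extractor(restored)
    f_positive = feature_extractor(clear)    # ground-truth haze-free image
    f_negative = feature_extractor(hazy)     # the hazy input acts as a negative
    pos = F.l1_loss(f_restored, f_positive)
    neg = F.l1_loss(f_restored, f_negative)
    return pos / (neg + 1e-7)                # small when near clear, far from hazy
```

In training, such a term would be weighted and combined with the adversarial loss implied by the conditional GAN framework and the other terms of the multi-term loss function mentioned in the abstract.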