Chen Zixuan, He Zewei, Lu Zhe-Ming
IEEE Trans Image Process. 2024;33:1002-1015. doi: 10.1109/TIP.2024.3354108. Epub 2024 Jan 26.
Single image dehazing is a challenging ill-posed problem that estimates latent haze-free images from observed hazy images. Some existing deep learning based methods are devoted to improving model performance by increasing the depth or width of convolution, while the learning ability of the Convolutional Neural Network (CNN) structure itself remains under-explored. In this paper, a Detail-Enhanced Attention Block (DEAB) consisting of Detail-Enhanced Convolution (DEConv) and Content-Guided Attention (CGA) is proposed to boost feature learning and thereby improve dehazing performance. Specifically, DEConv contains difference convolutions, which integrate prior information to complement the vanilla convolution and enhance its representation capacity. By using the re-parameterization technique, DEConv is then equivalently converted into a vanilla convolution to reduce parameters and computational cost. By assigning a unique Spatial Importance Map (SIM) to every channel, CGA attends to more of the useful information encoded in features. In addition, a CGA-based mixup fusion scheme is presented to effectively fuse features and aid gradient flow. By combining the above-mentioned components, we propose our Detail-Enhanced Attention Network (DEA-Net) for recovering high-quality haze-free images. Extensive experimental results demonstrate the effectiveness of our DEA-Net, which outperforms state-of-the-art (SOTA) methods by boosting the PSNR index to over 41 dB with only 3.653 M parameters. (The source code of our DEA-Net is available at https://github.com/cecret3350/DEA-Net.)
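The re-parameterization idea mentioned in the abstract can be illustrated with a minimal NumPy sketch (not the authors' implementation; kernel shapes and the choice of a central-difference branch are illustrative assumptions). A difference convolution of the form y(p) = Σₖ w(k)·(x(p+k) − x(p)) equals a vanilla convolution whose kernel has Σₖ w(k) subtracted at its centre, so parallel branches can be merged into a single kernel at inference time:

```python
import numpy as np

def conv2d(x, k):
    """Valid 2D cross-correlation with a 3x3 kernel (no padding)."""
    H, W = x.shape
    out = np.zeros((H - 2, W - 2))
    for i in range(H - 2):
        for j in range(W - 2):
            out[i, j] = np.sum(x[i:i + 3, j:j + 3] * k)
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))      # toy single-channel feature map
w_v = rng.standard_normal((3, 3))    # vanilla-convolution branch kernel
w_d = rng.standard_normal((3, 3))    # difference-convolution branch kernel

# Difference branch computed explicitly:
#   y(p) = sum_k w_d(k) * (x(p+k) - x(p))
y_diff = conv2d(x, w_d) - w_d.sum() * x[1:-1, 1:-1]

# Re-parameterize: fold the -sum(w_d) term into the kernel centre,
# then merge the two parallel branches by summing their kernels.
w_d_eq = w_d.copy()
w_d_eq[1, 1] -= w_d.sum()
w_merged = w_v + w_d_eq

y_two_branch = conv2d(x, w_v) + y_diff   # training-time multi-branch form
y_single = conv2d(x, w_merged)           # inference-time single vanilla conv
assert np.allclose(y_two_branch, y_single)
```

The merged kernel reproduces the multi-branch output exactly, which is why the converted DEConv incurs no extra parameters or FLOPs at inference time compared with a plain convolution.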