Wang Hong, Xie Qi, Zhao Qian, Li Yuexiang, Liang Yong, Zheng Yefeng, Meng Deyu
IEEE Trans Neural Netw Learn Syst. 2024 Jun;35(6):8668-8682. doi: 10.1109/TNNLS.2022.3231453. Epub 2024 Jun 3.
As a common weather phenomenon, rain streaks degrade image quality and tend to impair the performance of outdoor computer vision systems. Removing rain from a single image has therefore become an important problem in the field. To handle this ill-posed single-image deraining task, in this article we build a novel deep architecture, called the rain convolutional dictionary network (RCDNet), which embeds the intrinsic priors of rain streaks and has clear interpretability. Specifically, we first establish a rain convolutional dictionary (RCD) model for representing rain streaks and use the proximal gradient descent technique to design an iterative algorithm, containing only simple operators, for solving the model. By unfolding this algorithm, we then build the RCDNet, in which every network module has a clear physical meaning and corresponds to an operation of the algorithm. This interpretability makes it easy to visualize and analyze what happens inside the network and why it works well at inference time. Moreover, to account for the domain gap in real scenarios, we further design a novel dynamic RCDNet, in which the rain kernels are dynamically inferred from the input rainy image and then help shrink the space for rain-layer estimation with only a few rain maps, ensuring good generalization when the rain types of the training and testing data are inconsistent. By training such an interpretable network end to end, all involved rain kernels and proximal operators can be automatically extracted, faithfully characterizing the features of both the rain and the clean background layers, which naturally leads to better deraining performance. Comprehensive experiments on a series of representative synthetic and real datasets substantiate the superiority of our method over state-of-the-art single-image derainers, both visually and quantitatively, especially its strong generality to diverse testing scenarios and the good interpretability of all its modules. Code is available at https://github.com/hongwang01/DRCDNet.
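To make the formulation easier to follow, the equations below sketch the RCD model and the proximal-gradient updates that the abstract refers to; the specific symbols (number of kernels $K$, step sizes $\eta_1, \eta_2$, and regularizers $g_1, g_2$) are illustrative assumptions rather than quotations from the paper.

```latex
% Sketch of the rain convolutional dictionary (RCD) model: the rainy image O
% is a clean background B plus a rain layer expressed by K rain kernels C_k
% convolved with sparse rain maps M_k.
\begin{equation}
  \mathbf{O} = \mathbf{B} + \sum_{k=1}^{K} \mathbf{C}_k \otimes \mathbf{M}_k .
\end{equation}

% One proximal-gradient iteration alternates a gradient step and a proximal
% step for the rain maps and the background (C \otimes M abbreviates the sum
% above); unfolding S such iterations, with each proximal operator
% parameterized by a small network, yields the S-stage RCDNet.
\begin{align}
  \mathbf{M}^{(s+1)} &= \operatorname{prox}_{\eta_1 g_1}\!\left(\mathbf{M}^{(s)}
     - \eta_1\, \mathbf{C}^{\top} \otimes \left(\mathbf{B}^{(s)}
     + \mathbf{C} \otimes \mathbf{M}^{(s)} - \mathbf{O}\right)\right),\\
  \mathbf{B}^{(s+1)} &= \operatorname{prox}_{\eta_2 g_2}\!\left(\mathbf{B}^{(s)}
     - \eta_2 \left(\mathbf{B}^{(s)} + \mathbf{C} \otimes \mathbf{M}^{(s+1)}
     - \mathbf{O}\right)\right).
\end{align}
```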
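For intuition about what "unfolding" means in practice, here is a minimal PyTorch sketch of one stage of the alternation above. The class names, channel counts, residual-CNN proximal operators, and per-stage kernels are assumptions made for exposition, not the authors' released implementation (see the linked repository for that).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProxNet(nn.Module):
    """Small residual CNN standing in for a learned proximal operator (assumed design)."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)  # residual update keeps the iterate close to its input

class RCDStage(nn.Module):
    """One unfolded proximal-gradient stage: update the rain maps M, then the background B."""
    def __init__(self, num_maps=32, kernel_size=9):
        super().__init__()
        # Learnable rain kernels; one set per stage in this sketch.
        self.C = nn.Parameter(torch.randn(3, num_maps, kernel_size, kernel_size) * 0.01)
        self.eta_m = nn.Parameter(torch.tensor(0.5))  # step size for the M-update
        self.eta_b = nn.Parameter(torch.tensor(0.5))  # step size for the B-update
        self.prox_m = ProxNet(num_maps)
        self.prox_b = ProxNet(3)

    def forward(self, O, B, M):
        pad = self.C.shape[-1] // 2
        # Rain layer R = sum_k C_k (x) M_k, realized as a multi-channel convolution.
        R = F.conv2d(M, self.C, padding=pad)
        # Gradient step on the rain maps (transposed conv is the adjoint of the conv),
        # followed by the learned proximal operator.
        grad_M = F.conv_transpose2d(B + R - O, self.C, padding=pad)
        M = self.prox_m(M - self.eta_m * grad_M)
        # Recompute the rain layer with the updated maps, then update the background.
        R = F.conv2d(M, self.C, padding=pad)
        B = self.prox_b(B - self.eta_b * (B + R - O))
        return B, M

# Initializing B = O and M = 0, then stacking several stages, gives an unrolled
# network trainable end to end on pairs of rainy and clean images.
stage = RCDStage()
O = torch.rand(1, 3, 64, 64)              # toy rainy image
B, M = O.clone(), torch.zeros(1, 32, 64, 64)
B, M = stage(O, B, M)                     # one proximal-gradient stage
```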