Chen Jiaxuan, Chen Shuang, Chen Xiaoxian, Yang Yang, Rao Yujing
IEEE Trans Neural Netw Learn Syst. 2023 Jul;34(7):3284-3298. doi: 10.1109/TNNLS.2021.3120768. Epub 2023 Jul 6.
Seeking good correspondences between two images is a fundamental and challenging problem in the remote sensing (RS) community, and it is a critical prerequisite for a wide range of feature-based visual tasks. In this article, we propose a flexible and general deep state learning network for both rigid and nonrigid feature matching, which provides a mechanism to transform the state of matches into latent canonical forms, thereby reducing the randomness of matching patterns. Unlike current conventional strategies (i.e., imposing a global geometric constraint or designing additional handcrafted descriptors), the proposed StateNet alternates between two steps: 1) recalibrating matchwise feature responses in the spatial domain and 2) leveraging the spatially local correlation across the two sets of feature points to update the transformation. For this purpose, our network contains two novel operations: an adaptive dual-aggregation convolution (ADAConv) and a point rendering layer (PRL). Both operations are differentiable, so our network can be inserted into existing classification architectures to reduce the cost of establishing reliable correspondences. To demonstrate the robustness and universality of our approach, we conduct extensive experiments on various real image pairs for feature matching. The experiments reveal that our StateNet significantly outperforms state-of-the-art alternatives.
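The abstract only names the two alternating steps and the two operators; the sketch below is a minimal, hypothetical PyTorch rendering of that alternation, not the authors' implementation. `ADAConvPlaceholder`, `PRLPlaceholder`, the feature width, the toy "transformation update," and the iteration count are all assumptions introduced purely to make the control flow concrete.

```python
# Hypothetical sketch of the alternating two-step scheme described in the abstract:
# step 1 recalibrates match-wise feature responses, step 2 updates the match state.
# All module internals are placeholders (assumptions), not the published ADAConv/PRL.
import torch
import torch.nn as nn


class ADAConvPlaceholder(nn.Module):
    """Stand-in for ADAConv: recalibrates per-match features with a learned gate."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv = nn.Conv1d(channels, channels, kernel_size=1)
        self.gate = nn.Sequential(nn.Conv1d(channels, channels, kernel_size=1), nn.Sigmoid())

    def forward(self, feats):            # feats: (B, C, N) per-match features
        return self.conv(feats) * self.gate(feats)


class PRLPlaceholder(nn.Module):
    """Stand-in for the PRL: predicts a per-match weight and uses it to nudge
    the second point set toward the first (a toy, illustrative state update)."""
    def __init__(self, channels: int):
        super().__init__()
        self.head = nn.Conv1d(channels, 1, kernel_size=1)

    def forward(self, feats, matches):   # matches: (B, N, 4) = (x1, y1, x2, y2)
        w = torch.sigmoid(self.head(feats)).transpose(1, 2)          # (B, N, 1)
        delta = w * (matches[..., :2] - matches[..., 2:])
        updated = torch.cat([matches[..., :2], matches[..., 2:] + delta], dim=-1)
        return updated, w


class StateBlock(nn.Module):
    """One alternation: 1) recalibrate match-wise responses, 2) update the match state."""
    def __init__(self, channels: int = 128):
        super().__init__()
        self.embed = nn.Conv1d(4, channels, kernel_size=1)
        self.adaconv = ADAConvPlaceholder(channels)
        self.prl = PRLPlaceholder(channels)

    def forward(self, matches):          # matches: (B, N, 4)
        feats = self.embed(matches.transpose(1, 2))                  # (B, C, N)
        feats = self.adaconv(feats)                                  # step 1
        matches, weights = self.prl(feats, matches)                  # step 2
        return matches, weights


if __name__ == "__main__":
    matches = torch.rand(2, 500, 4)      # 500 putative correspondences per image pair
    block = StateBlock()
    for _ in range(3):                   # a few alternations (count is assumed)
        matches, weights = block(matches)
    print(matches.shape, weights.shape)  # (2, 500, 4) and (2, 500, 1)
```

Because every step above is built from differentiable tensor operations, such a block could in principle be dropped into a larger classification-style pipeline and trained end to end, which is the property the abstract claims for the real ADAConv and PRL.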