Li Zizhuo, Ma Jiayi
IEEE Trans Image Process. 2024 Dec 11;PP. doi: 10.1109/TIP.2024.3512352.
Accurately matching local features between a pair of images depicting the same 3D scene is a challenging computer vision task. Previous studies typically employ attention-based graph neural networks (GNNs) with fully-connected graphs over keypoints within and across images for visual and geometric information reasoning. However, in the context of local feature matching, a significant number of keypoints are non-repeatable due to factors like occlusion and detector failure, and are thus irrelevant for message passing. Connectivity with non-repeatable keypoints not only introduces redundancy, limiting efficiency (quadratic computational complexity w.r.t. the keypoint number), but also interferes with the representation aggregation process, limiting accuracy. Aiming at the best of both worlds in accuracy and efficiency, we propose MaKeGNN, a sparse attention-based GNN architecture that bypasses non-repeatable keypoints and leverages matchable ones to guide compact and meaningful message passing. More specifically, our Bilateral Context-Aware Sampling (BCAS) module first dynamically samples two small sets of well-distributed keypoints with high matchability scores from the image pair. Then, our Matchable Keypoint-Assisted Context Aggregation (MKACA) module treats the sampled informative keypoints as message bottlenecks, constraining each keypoint to retrieve favorable contextual information only from intra- and inter-image matchable keypoints and thereby evading the interference of irrelevant, redundant connections with non-repeatable ones. Furthermore, considering the potential noise in the initial keypoints and the sampled matchable ones, the MKACA module adopts a matchability-guided attentional aggregation operation for purer data-dependent context propagation.
By these means, MaKeGNN outperforms the state of the art on multiple highly challenging benchmarks, while significantly reducing computational and memory complexity compared to typical attentional GNNs.
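The core idea of the message-bottleneck design above can be illustrated with a minimal sketch: each keypoint attends only to a small sampled set of high-matchability keypoints (so cost is O(N·k) rather than O(N²)), and attention weights are modulated by the matchability scores. This is an illustrative simplification, not the paper's implementation; the function names (`sample_matchable`, `bottleneck_attention`) and the top-k sampling stand-in for BCAS are assumptions for demonstration.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sample_matchable(scores, k):
    # Simple top-k selection by matchability score; the paper's BCAS module
    # additionally encourages a well-distributed sample, which is omitted here.
    return np.argsort(scores)[-k:]

def bottleneck_attention(feats, bottleneck_feats, bottleneck_scores):
    # Each of the N keypoints attends only to the k sampled matchable
    # keypoints (the "message bottleneck"), so the attention map is N x k.
    d = feats.shape[-1]
    logits = feats @ bottleneck_feats.T / np.sqrt(d)
    # Matchability-guided weighting: bias logits by log-scores so that
    # low-matchability (likely noisy) bottleneck keypoints contribute less.
    logits = logits + np.log(bottleneck_scores + 1e-9)
    attn = softmax(logits, axis=-1)
    # Residual aggregation of context retrieved from the bottleneck set.
    return feats + attn @ bottleneck_feats

# Example: 512 keypoints with 64-d descriptors, 128 sampled bottlenecks.
rng = np.random.default_rng(0)
feats = rng.standard_normal((512, 64))
scores = rng.random(512)           # per-keypoint matchability in [0, 1)
idx = sample_matchable(scores, 128)
updated = bottleneck_attention(feats, feats[idx], scores[idx])
```

In a full cross-attention layer, `bottleneck_feats` would come from the sampled matchable keypoints of the *other* image; the same N×k pattern applies either way.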