A Depth Awareness and Learnable Feature Fusion Network for Enhanced Geometric Perception in Semantic Correspondence.

Author Information

Li Fazeng, Zou Chunlong, Yun Juntong, Huang Li, Liu Ying, Tao Bo, Xie Yuanmin

Affiliations

Key Laboratory of Metallurgical Equipment and Control Technology of Ministry of Education, Wuhan University of Science and Technology, Wuhan 430081, China.

College of Mechanical Engineering, Hubei University of Automotive Technology, Shiyan 442000, China.

Publication Information

Sensors (Basel). 2024 Oct 17;24(20):6680. doi: 10.3390/s24206680.

Abstract

Deep learning is becoming the most widely used technology for multi-sensor data fusion. Semantic correspondence has recently emerged as a foundational task, enabling a range of downstream applications, such as style or appearance transfer, robot manipulation, and pose estimation, through its ability to provide robust correspondences in RGB images with semantic information. However, the representations produced by current self-supervised learning and generative models are often limited in their ability to capture and understand the geometric structure of objects, which is essential for matching the correct details in semantic correspondence applications. Furthermore, efficiently fusing these two types of features remains a significant challenge, and their harmonious integration is crucial for improving a model's expressive power across tasks. To tackle these issues, our key idea is to integrate depth information, obtained from depth estimation or depth sensors, into feature maps and to fuse features using learnable weights. First, depth information is used to model pixel-wise depth distributions, assigning relative depth weights to feature maps so the network can perceive an object's structural information. Then, under a contrastive learning objective, a set of weights is optimized to combine feature maps from self-supervised learning and generative models. Depth features are thus naturally embedded into the feature maps, guiding the network to learn the geometric structure of objects and alleviating depth ambiguity. Experiments on the SPair-71K and AP-10K datasets show that the proposed method achieves percentage of correct keypoints (PCK) scores at the 0.1 level of 81.8 and 83.3, respectively.
Beyond these results, our approach introduces a depth awareness module and a learnable feature fusion module, which enhance the understanding of object structure through depth information and fully exploit features from different pre-trained models, offering new possibilities for applying deep learning to RGB and depth data fusion. Future work will focus on accelerating model inference and making the model more lightweight so that it can run at higher speed.
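The two ideas described above can be sketched in code: per-pixel relative depth weights that modulate feature maps, and a learnable convex combination of feature maps from different pre-trained models, trained with a contrastive (InfoNCE-style) objective. This is a minimal illustrative sketch, not the paper's implementation; all module and parameter names (`DepthAwareFusion`, `fusion_logits`, `info_nce`) are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DepthAwareFusion(nn.Module):
    """Illustrative sketch (not the paper's code): depth-weighted feature
    modulation plus learnable fusion of feature maps from several
    pre-trained backbones (e.g., a self-supervised ViT and a generative
    model)."""

    def __init__(self, num_sources: int = 2):
        super().__init__()
        # One learnable scalar per feature source; softmax-normalized at
        # fusion time so the result is a convex combination.
        self.fusion_logits = nn.Parameter(torch.zeros(num_sources))

    def depth_weights(self, depth: torch.Tensor) -> torch.Tensor:
        # depth: (B, 1, H, W). Min-max normalize per image to a relative
        # depth distribution in [0, 1], so relative structure (not raw
        # metric depth) modulates the features.
        d_min = depth.amin(dim=(2, 3), keepdim=True)
        d_max = depth.amax(dim=(2, 3), keepdim=True)
        return (depth - d_min) / (d_max - d_min + 1e-6)

    def forward(self, feats, depth):
        # feats: list of (B, C, H, W) maps from different pre-trained models.
        w_depth = self.depth_weights(depth)               # (B, 1, H, W)
        modulated = [f * (1.0 + w_depth) for f in feats]  # embed depth cue
        alpha = F.softmax(self.fusion_logits, dim=0)      # fusion weights
        fused = sum(a * f for a, f in zip(alpha, modulated))
        return fused

def info_nce(src_feats, tgt_feats, temperature=0.07):
    """Minimal contrastive objective over matched keypoint descriptors:
    src_feats and tgt_feats are (N, C); row i of each is a matched pair."""
    src = F.normalize(src_feats, dim=1)
    tgt = F.normalize(tgt_feats, dim=1)
    logits = src @ tgt.t() / temperature   # (N, N) cosine-similarity logits
    labels = torch.arange(src.size(0))     # positives lie on the diagonal
    return F.cross_entropy(logits, labels)
```

Under this sketch, only the fusion weights (and any task head) need gradient updates; the backbone feature maps can come frozen from the pre-trained models.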


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0c4a/11511390/c2a974c6ba95/sensors-24-06680-g001.jpg
