

Object and spatial discrimination makes weakly supervised local feature better.

Affiliations

School of Computer and Electronic Information, Guangxi University, Nanning, China.

School of Computer and Electronic Information, Guangxi University, Nanning, China; Guangxi Key Laboratory of Multimedia Communications and Network Technology, Guangxi University, Nanning, China.

Publication Information

Neural Netw. 2024 Dec;180:106697. doi: 10.1016/j.neunet.2024.106697. Epub 2024 Sep 12.

DOI: 10.1016/j.neunet.2024.106697
PMID: 39305784
Abstract

Local feature extraction plays a crucial role in numerous critical visual tasks. However, there remains room for improvement in both descriptors and keypoints, particularly regarding the discriminative power of descriptors and the localization precision of keypoints. To address these challenges, this study introduces a novel local feature extraction pipeline named OSDFeat (Object and Spatial Discrimination Feature). OSDFeat employs a decoupling strategy, training descriptor and detection networks independently. Inspired by semantic correspondence, we propose an Object and Spatial Discrimination ResUNet (OSD-ResUNet). OSD-ResUNet captures features from the feature map that differentiate object appearance and spatial context, thus enhancing descriptor performance. To further improve the discriminative capability of descriptors, we propose a Discrimination Information Retained Normalization module (DIRN). DIRN complementarily integrates spatial-wise normalization and channel-wise normalization, yielding descriptors that are more distinguishable and informative. In the detection network, we propose a Cross Saliency Pooling module (CSP). CSP employs a cross-shaped kernel to aggregate long-range context in both vertical and horizontal dimensions. By enhancing the saliency of keypoints, CSP enables the detection network to effectively utilize descriptor information and achieve more precise localization of keypoints. Compared to the previous best local feature extraction methods, OSDFeat achieves a Mean Matching Accuracy of 79.4% on the local feature matching task, improving by 1.9% and achieving state-of-the-art results. Additionally, OSDFeat achieves competitive results in Visual Localization and 3D Reconstruction. The results of this study indicate that object and spatial discrimination can improve the accuracy and robustness of local features, even in challenging environments. The code is available at https://github.com/pandaandyy/OSDFeat.
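The abstract describes DIRN as complementarily integrating spatial-wise and channel-wise normalization of descriptors. The paper's exact formulation is not given here; the following NumPy sketch shows one plausible way to combine the two normalizations (the function name, the standardization choice, and the 50/50 averaging are my own assumptions, not the authors' method):

```python
import numpy as np

def dual_normalize(desc, eps=1e-8):
    """Hypothetical sketch of DIRN-style complementary normalization.

    desc: descriptor map of shape (C, H, W).
    Channel-wise step: L2-normalize the C-dim vector at each pixel.
    Spatial-wise step: standardize each channel over its H*W values.
    The two views are averaged as one simple way to combine them.
    """
    # channel-wise L2 normalization per spatial location
    ch = desc / (np.linalg.norm(desc, axis=0, keepdims=True) + eps)
    # spatial-wise standardization per channel
    mu = desc.mean(axis=(1, 2), keepdims=True)
    sd = desc.std(axis=(1, 2), keepdims=True)
    sp = (desc - mu) / (sd + eps)
    # complementary combination (assumed: equal-weight average)
    return 0.5 * (ch + sp)
```

The intuition is that channel-wise normalization makes descriptors comparable across locations, while spatial-wise normalization keeps per-channel contrast information that pure L2 normalization discards.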
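CSP is described as using a cross-shaped kernel to aggregate long-range context vertically and horizontally. As a rough illustration only (the actual module operates on learned feature maps; the max aggregation and averaging below are assumptions), each position can pool over its entire row and column:

```python
import numpy as np

def cross_saliency_pool(score):
    """Toy sketch of cross-shaped pooling on a 2D saliency map.

    Each position aggregates long-range context from its whole row
    (horizontal) and whole column (vertical), then the two are averaged.
    """
    row = score.max(axis=1, keepdims=True)  # (H, 1): per-row context
    col = score.max(axis=0, keepdims=True)  # (1, W): per-column context
    # broadcast back so every pixel sees its row and column aggregates
    return 0.5 * (np.broadcast_to(row, score.shape) +
                  np.broadcast_to(col, score.shape))
```

For example, on `[[1, 2], [3, 4]]` the top-left position receives `0.5 * (2 + 3) = 2.5`: its row maximum plus its column maximum, so a peak anywhere in the cross raises its saliency.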


Similar Articles

1. Object and spatial discrimination makes weakly supervised local feature better.
   Neural Netw. 2024 Dec;180:106697. doi: 10.1016/j.neunet.2024.106697. Epub 2024 Sep 12.
2. Accuracy and efficiency stereo matching network with adaptive feature modulation.
   PLoS One. 2024 Apr 25;19(4):e0301093. doi: 10.1371/journal.pone.0301093. eCollection 2024.
3. Learning Semantic-Aware Local Features for Long Term Visual Localization.
   IEEE Trans Image Process. 2022;31:4842-4855. doi: 10.1109/TIP.2022.3187565. Epub 2022 Jul 20.
4. Attention Weighted Local Descriptors.
   IEEE Trans Pattern Anal Mach Intell. 2023 Sep;45(9):10632-10649. doi: 10.1109/TPAMI.2023.3266728. Epub 2023 Aug 7.
5. Decoupled Unbiased Teacher for Source-Free Domain Adaptive Medical Object Detection.
   IEEE Trans Neural Netw Learn Syst. 2024 Jun;35(6):7287-7298. doi: 10.1109/TNNLS.2023.3272389. Epub 2024 Jun 3.
6. USB: ultrashort binary descriptor for fast visual matching and retrieval.
   IEEE Trans Image Process. 2014 Aug;23(8):3671-83. doi: 10.1109/TIP.2014.2330794. Epub 2014 Jun 12.
7. Centralized contrastive loss with weakly supervised progressive feature extraction for fine-grained common thorax disease retrieval in chest x-ray.
   Med Phys. 2023 Jun;50(6):3560-3572. doi: 10.1002/mp.16144. Epub 2023 Jan 11.
8. Small object detection algorithm incorporating swin transformer for tea buds.
   PLoS One. 2024 Mar 21;19(3):e0299902. doi: 10.1371/journal.pone.0299902. eCollection 2024.
9. Performance Evaluation of State-of-the-Art Local Feature Detectors and Descriptors in the Context of Longitudinal Registration of Retinal Images.
   J Med Syst. 2018 Feb 17;42(4):57. doi: 10.1007/s10916-018-0911-z.
10. An Appearance-Semantic Descriptor with Coarse-to-Fine Matching for Robust VPR.
   Sensors (Basel). 2024 Mar 29;24(7):2203. doi: 10.3390/s24072203.