

Synthetic feature pairs dataset and siamese convolutional model for image matching.

Authors

Halmaoui Houssam, Haqiq Abdelkrim

Affiliations

ISMAC - Higher Institute of Audiovisual and Film Professions, Rabat, Morocco.

Hassan First University of Settat, Faculty of Sciences and Techniques, Computer, Networks, Mobility and Modeling laboratory: IR2M, Settat 26000, Morocco.

Publication

Data Brief. 2022 Feb 15;41:107965. doi: 10.1016/j.dib.2022.107965. eCollection 2022 Apr.

DOI: 10.1016/j.dib.2022.107965
PMID: 35242945
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC8873551/
Abstract

In a previous publication [1], we created a dataset of feature patches for detection model training. In this paper, we use the same patches to create a new large synthetic dataset of feature pairs, similar and different, in order to perform, thanks to a siamese convolutional model, the description and matching of the detected features. We thus complete the entire matching pipeline. The accurate manual labeling of image features being very difficult because of their large number and the various associated parameters of position, scale and rotation, recent deep learning models use the result of handcrafted methods for training. Compared to existing datasets, ours avoids model training with false detections of the extraction of feature patches by other algorithms, or with inaccuracy errors of manual labeling. The other advantage of synthetic patches is that we can control their content (corners, edges, etc.), as well as their geometric and photometric parameters, and therefore we control the invariance of the model. The proposed datasets thus allow a new approach to train the different matching modules without using traditional methods. To our knowledge, these are the first feature datasets based on generated synthetic patches for image matching.
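The abstract's key point is that synthetic patches let the authors control both the content (corners, edges, etc.) and the geometric and photometric parameters of each labeled pair. The paper's actual generator is not reproduced here; as a loose illustration only, here is a minimal numpy sketch of how such labeled "similar"/"different" pairs might be synthesized (all function names and parameters are hypothetical):

```python
import numpy as np

def corner_patch(size=16, quarter_turns=0, brightness=0.0):
    """Synthetic corner patch: bright upper-left quadrant on a dark
    background, with a controlled rotation (90-degree steps here, for
    simplicity) and a controlled photometric (brightness) shift."""
    p = np.zeros((size, size))
    p[: size // 2, : size // 2] = 1.0
    return np.rot90(p, k=quarter_turns % 4) + brightness

def edge_patch(size=16):
    """Synthetic vertical-edge patch: bright left half."""
    p = np.zeros((size, size))
    p[:, : size // 2] = 1.0
    return p

# Positive ("similar") pair: the same corner under a photometric change.
similar_pair = (corner_patch(), corner_patch(brightness=0.1))
# Negative ("different") pair: a corner versus an edge.
different_pair = (corner_patch(), edge_patch())
```

Because the generator fixes every parameter, each pair's label is known by construction, which is the advantage the abstract claims over handcrafted detections or manual labeling.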

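The matching step described in the abstract pairs a shared ("siamese") embedding branch with a distance comparison. The following toy numpy sketch shows that idea only, not the paper's trained model: the fixed random filters stand in for learned convolutional weights, and the decision threshold is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
# Fixed random 3x3 filters standing in for learned conv weights.
FILTERS = rng.standard_normal((4, 3, 3))

def embed(patch):
    """Shared branch: valid 3x3 convolution with each filter,
    then global average pooling -> a 4-dim descriptor."""
    h, w = patch.shape
    desc = np.empty(len(FILTERS))
    for k, f in enumerate(FILTERS):
        responses = [
            np.sum(patch[i:i + 3, j:j + 3] * f)
            for i in range(h - 2) for j in range(w - 2)
        ]
        desc[k] = np.mean(responses)
    return desc

def match(p1, p2, threshold=0.5):
    """Siamese decision: run the SAME branch on both patches, take the
    L2 distance, call them 'similar' below an (arbitrary) threshold."""
    d = float(np.linalg.norm(embed(p1) - embed(p2)))
    return d, d < threshold

patch = np.arange(64, dtype=float).reshape(8, 8)
d_same, is_same = match(patch, patch)  # identical patches -> distance 0
```

Weight sharing is the defining design choice: because both patches pass through the identical branch, the model learns a descriptor space where distance directly encodes similarity, which is what the synthetic similar/different pairs supervise.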

Figures (from the PMC full text):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0286/8873551/3bc26eaf08d7/gr1.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0286/8873551/41366b44d730/gr2.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0286/8873551/8bfb86d74e45/gr3.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0286/8873551/aaef1b41f03f/gr4.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0286/8873551/2cd797b39265/gr5.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0286/8873551/106cf02552ef/gr6.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0286/8873551/8fc48b4d4024/gr7.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0286/8873551/a5260a69b9ec/gr8.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0286/8873551/a7be203425fe/gr9.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0286/8873551/606a143dcb09/gr10.jpg

Similar Articles

1
Synthetic feature pairs dataset and siamese convolutional model for image matching.
Data Brief. 2022 Feb 15;41:107965. doi: 10.1016/j.dib.2022.107965. eCollection 2022 Apr.
2
Feature Interaction Learning Network for Cross-Spectral Image Patch Matching.
IEEE Trans Image Process. 2023;32:5564-5579. doi: 10.1109/TIP.2023.3313488. Epub 2023 Oct 10.
3
Robust Keypoint Detection and Matching on Fisheye Images by Self-Supervised Learning.
Comput Intell Neurosci. 2022 Dec 22;2022:4024774. doi: 10.1155/2022/4024774. eCollection 2022.
4
A method to detect landmark pairs accurately between intra-patient volumetric medical images.
Med Phys. 2017 Nov;44(11):5859-5872. doi: 10.1002/mp.12526. Epub 2017 Sep 13.
5
Remote Sensing Image Ship Matching Utilising Line Features for Resource-Limited Satellites.
Sensors (Basel). 2023 Nov 28;23(23):9479. doi: 10.3390/s23239479.
6
Joint Detection and Matching of Feature Points in Multimodal Images.
IEEE Trans Pattern Anal Mach Intell. 2022 Oct;44(10):6585-6593. doi: 10.1109/TPAMI.2021.3092289. Epub 2022 Sep 14.
7
A novel end-to-end classifier using domain transferred deep convolutional neural networks for biomedical images.
Comput Methods Programs Biomed. 2017 Mar;140:283-293. doi: 10.1016/j.cmpb.2016.12.019. Epub 2017 Jan 6.
8
DV-DCNN: Dual-view deep convolutional neural network for matching detected masses in mammograms.
Comput Methods Programs Biomed. 2021 Aug;207:106152. doi: 10.1016/j.cmpb.2021.106152. Epub 2021 May 11.
9
Element-Wise Feature Relation Learning Network for Cross-Spectral Image Patch Matching.
IEEE Trans Neural Netw Learn Syst. 2022 Aug;33(8):3372-3386. doi: 10.1109/TNNLS.2021.3052756. Epub 2022 Aug 3.
10
SPA-Net: A Deep Learning Approach Enhanced Using a Span-Partial Structure and Attention Mechanism for Image Copy-Move Forgery Detection.
Sensors (Basel). 2023 Jul 15;23(14):6430. doi: 10.3390/s23146430.

References Cited in This Article

1
Discriminative learning of local image descriptors.
IEEE Trans Pattern Anal Mach Intell. 2011 Jan;33(1):43-57. doi: 10.1109/TPAMI.2010.54.