Halmaoui Houssam, Haqiq Abdelkrim
ISMAC - Higher Institute of Audiovisual and Film Professions, Rabat, Morocco.
Hassan First University of Settat, Faculty of Sciences and Techniques, Computer, Networks, Mobility and Modeling laboratory: IR2M, Settat 26000, Morocco.
Data Brief. 2022 Feb 15;41:107965. doi: 10.1016/j.dib.2022.107965. eCollection 2022 Apr.
In a previous publication [1], we created a dataset of synthetic feature patches for training a detection model. In this paper, we use the same patches to build a new large synthetic dataset of feature pairs, similar and dissimilar, in order to perform description and matching of the detected features with a Siamese convolutional model, thereby completing the full matching pipeline. Because accurate manual labeling of image features is very difficult, given their large number and the associated position, scale, and rotation parameters, recent deep learning models are trained on the output of handcrafted methods. Compared to existing datasets, ours avoids training the model on false detections produced by the patch-extraction algorithms of other methods, or on the inaccuracies of manual labeling. Another advantage of synthetic patches is that we can control their content (corners, edges, etc.) as well as their geometric and photometric parameters, and therefore the invariances of the model. The proposed datasets thus enable a new approach to training the different matching modules without relying on traditional methods. To our knowledge, these are the first feature datasets based on generated synthetic patches for image matching.
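The abstract describes building labeled pairs of synthetic patches (similar vs. dissimilar) whose content and geometric/photometric parameters are fully controlled. As a minimal illustration of that idea, not the authors' actual generation code, the sketch below (all function names hypothetical, using NumPy) creates a synthetic corner patch, a "similar" counterpart obtained by a controlled rotation and brightness shift, and a "dissimilar" edge patch, then packs them into labeled pairs of the kind a Siamese model could be trained on:

```python
import numpy as np

def corner_patch(size=32, angle_deg=0.0, brightness=0.0):
    # Hypothetical generator: a bright quadrant on a dark background
    # forms a corner feature; angle_deg and brightness are the
    # controlled geometric and photometric parameters.
    y, x = np.mgrid[0:size, 0:size] - (size - 1) / 2.0
    t = np.deg2rad(angle_deg)
    xr = np.cos(t) * x - np.sin(t) * y   # rotate coordinates
    yr = np.sin(t) * x + np.cos(t) * y
    patch = ((xr > 0) & (yr > 0)).astype(np.float32)
    return np.clip(patch + brightness, 0.0, 1.0)

def edge_patch(size=32):
    # A vertical step edge: a different feature type (content control).
    _, x = np.mgrid[0:size, 0:size]
    return (x >= size // 2).astype(np.float32)

def make_pairs():
    # Label 1 = similar pair (same corner, transformed),
    # label 0 = dissimilar pair (corner vs. edge).
    a = corner_patch()
    similar = (a, corner_patch(angle_deg=15.0, brightness=0.1), 1)
    dissimilar = (a, edge_patch(), 0)
    return [similar, dissimilar]
```

Because the transformation parameters are known exactly, labels are noise-free by construction, which is the key advantage the abstract claims over datasets derived from handcrafted detectors or manual annotation.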