School of Convergence & Fusion System Engineering, Kyungpook National University, Sangju 37224, Korea.
Department of Civil Engineering, Korea Maritime and Ocean University, Busan 49112, Korea.
Sensors (Basel). 2018 May 17;18(5):1599. doi: 10.3390/s18051599.
For time-series analysis using very-high-resolution (VHR) multi-temporal satellite images, both accurate georegistration to map coordinates and subpixel-level co-registration among the images are required. However, well-known matching methods such as the scale-invariant feature transform and speeded-up robust features have limitations for VHR multi-temporal images. First, they cannot match an optical image to heterogeneous non-optical data for georegistration. Second, they introduce local misalignment induced by differences in acquisition conditions, such as acquisition-platform stability, the sensor's off-nadir angle, and relief displacement in the considered scene. This study therefore addresses the problem by proposing an automated geo/co-registration framework for full-scene multi-temporal images acquired by a VHR optical satellite sensor. The proposed method comprises two primary steps: (1) a global georegistration process, followed by (2) a fine co-registration process. In the first step, two-dimensional multi-temporal satellite images are matched to three-dimensional topographic maps to assign map coordinates. In the second step, registration-noise pixels extracted between the georegistered multi-temporal images are analyzed locally to extract a large number of well-distributed corresponding points (CPs). The CPs are finally used to construct a non-rigid transformation function that minimizes the local misalignment remaining among the images. Experiments on five Kompsat-3 full scenes confirmed the effectiveness of the proposed framework: georegistration achieved approximately pixel-level accuracy for most scenes, and co-registration further improved the results for all combinations of georegistered Kompsat-3 image pairs, increasing the computed cross-correlation values.
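The co-registration gain reported above is quantified through cross-correlation between image pairs. The abstract does not give the exact formula the authors used, but a minimal zero-mean normalized cross-correlation (NCC) sketch in NumPy, assuming same-size grayscale patches, illustrates the metric: scores near 1.0 indicate well-aligned patches, and residual misalignment between acquisitions lowers the score.

```python
import numpy as np

def normalized_cross_correlation(patch_a, patch_b):
    """Zero-mean normalized cross-correlation between two same-size patches.

    Returns a value in [-1, 1]; values near 1.0 indicate well-aligned
    patches, while local misalignment between acquisitions lowers it.
    """
    a = patch_a.astype(np.float64) - patch_a.mean()
    b = patch_b.astype(np.float64) - patch_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    if denom == 0:
        return 0.0  # constant patch: correlation undefined, report 0
    return float((a * b).sum() / denom)

# A patch correlates near-perfectly with itself; a horizontally shifted
# copy (simulating a 3-pixel residual misalignment) scores much lower.
rng = np.random.default_rng(0)
patch = rng.random((64, 64))
shifted = np.roll(patch, 3, axis=1)
print(normalized_cross_correlation(patch, patch))    # ≈ 1.0
print(normalized_cross_correlation(patch, shifted))
```

In practice such scores would be computed over many local windows across the full scene, before and after applying the CP-based non-rigid transformation, to verify that the fine co-registration step raised the correlation values.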