Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing 100081, China.
Comput Methods Programs Biomed. 2020 Jul;190:105370. doi: 10.1016/j.cmpb.2020.105370. Epub 2020 Jan 29.
Feature matching of endoscopic images is of crucial importance in many clinical applications, such as object tracking and surface reconstruction. However, in the presence of low texture, specular reflections, and deformation, feature matching methods designed for natural scenes face great challenges in minimally invasive surgery (MIS) scenarios. We propose a novel motion consensus-based method for endoscopic image feature matching to address these problems.
Our method starts by correcting the radial distortion with a spherical projection model and removing the specular reflection regions with an adaptive detection method, which eliminates image distortion and reduces the number of outliers. We solve the matching problem with a two-stage strategy that progressively estimates a consensus of inliers, yielding a precisely smoothed motion field. First, we construct a spatial motion field from candidate feature matches and estimate its maximum a posteriori solution with the expectation-maximization algorithm, which is computationally efficient and quickly yields a smoothed motion field. Second, we extend the smoothed motion field to the affine domain and refine it with bilateral regression to preserve locally subtle motions. True matches are then identified by checking the difference between each feature's motion and the estimated field.
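The abstract does not give the estimator's exact form, so the following is only a minimal sketch of the first stage, assuming a common Gaussian-inlier / uniform-outlier mixture and collapsing the smoothed field to a single weighted-mean motion for brevity. All names and parameters here (em_motion_consensus, gamma0, outlier_span, the 0.5 threshold) are hypothetical illustrations, not taken from the paper.

```python
import numpy as np

def em_motion_consensus(x, y, n_iter=50, gamma0=0.9, outlier_span=10.0):
    """Hypothetical sketch: estimate a smoothed motion field from candidate
    matches with EM, modelling inlier residuals as Gaussian around the field
    and outliers as uniform over an outlier_span x outlier_span region.

    x, y : (N, 2) arrays of matched keypoint coordinates in the two images.
    Returns posterior inlier probabilities and the fitted mean motion.
    """
    motion = y - x                                   # displacement of each candidate match
    field = np.zeros(2)                              # smoothed field, reduced to a mean motion here
    gamma = gamma0                                   # prior inlier ratio
    sigma2 = np.mean(np.sum(motion ** 2, axis=1))    # initial residual variance
    for _ in range(n_iter):
        # E-step: posterior probability that each match agrees with the field
        r2 = np.sum((motion - field) ** 2, axis=1)
        p_in = gamma * np.exp(-r2 / (2.0 * sigma2)) / (2.0 * np.pi * sigma2)
        p_out = (1.0 - gamma) / outlier_span ** 2
        p = p_in / (p_in + p_out)
        # M-step: refit the field, variance, and inlier ratio from the posteriors
        w = p / (p.sum() + 1e-12)
        field = (w[:, None] * motion).sum(axis=0)
        sigma2 = max(float((p * r2).sum() / (2.0 * p.sum() + 1e-12)), 1e-6)
        gamma = float(p.mean())
    return p, field

# True matches are the candidates whose motion stays close to the estimated
# field, e.g. inliers = p > 0.5 (the threshold is an assumption, not the paper's).
```

The paper's second stage, which lifts the field to the affine domain and refines it with bilateral regression, is not reproduced here; the sketch only illustrates the consensus idea of scoring each candidate match against a smoothed motion field.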
Evaluations are performed on two simulated deformation datasets (218 images) and four different types of endoscopic datasets (1032 images). Our method is compared with three other state-of-the-art methods and achieves the best performance on the affine transformation and nonrigid deformation simulations, with inlier ratios of 86.7% and 94.3%, sensitivities of 90.0% and 96.2%, precisions of 88.2% and 93.9%, and F1-scores of 89.1% and 95.0%, respectively. On the clinical datasets, the proposed method achieves an average reprojection error of 3.7 pixels and consistent performance in multi-image correspondence across sequential images. Furthermore, we present a surface reconstruction result from rhinoscopic images, which demonstrates the high quality of the resulting feature matches and validates the reliability of our method.
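For context, and assuming the standard definition, the reported F1-scores are the harmonic mean of precision and sensitivity, which is consistent with the figures above:

$$F_1 = \frac{2PR}{P + R}, \qquad \frac{2 \times 0.882 \times 0.900}{0.882 + 0.900} \approx 0.891, \qquad \frac{2 \times 0.939 \times 0.962}{0.939 + 0.962} \approx 0.950.$$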
The proposed motion consensus-based feature matching method proves effective and robust for endoscopic image correspondence, demonstrating its capability to generate reliable feature matches for surface reconstruction and other meaningful applications in MIS scenarios.