State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan 430079, China.
Collaborative Innovation Center of Geospatial Technology, Wuhan University, Wuhan 430079, China.
Sensors (Basel). 2018 Jun 20;18(6):1974. doi: 10.3390/s18061974.
Indoor positioning is in high demand for a variety of applications, and the indoor environment is a challenging scene for visual positioning. This paper proposes an accurate visual positioning method for smartphones. The proposed method consists of three procedures. First, an indoor high-precision 3D photorealistic map is produced using a mobile mapping system, and the intrinsic and extrinsic parameters of the images are obtained from the mapping result. A point cloud is calculated using feature matching and multi-view forward intersection. Second, the top-K similar images are queried using Hamming embedding with SIFT feature description. Feature matching and pose voting are used to select correctly matched images, and the relationship between image points and 3D points is obtained. Finally, outlier points are removed using perspective-three-point (P3P) with a coarse focal length. Perspective-four-point (P4P) with unknown focal length and random sample consensus (RANSAC) are used to calculate the intrinsic and extrinsic parameters of the query image and thus to obtain the position of the smartphone. Compared with established baseline methods, the proposed method is more accurate and reliable. The experimental results show that 70 percent of the images achieve a location error smaller than 0.9 m in a 10 m × 15.8 m room, and prospects for improvement are discussed.
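To illustrate the final pose-estimation stage described in the abstract, the following is a minimal sketch (not the authors' implementation) of outlier rejection and pose recovery from 2D-3D correspondences using P3P inside RANSAC with a coarse focal length guess. The paper additionally solves a P4P problem with unknown focal length; this sketch assumes a fixed coarse focal length instead, and all function names, array shapes, and parameter values are illustrative assumptions.

```python
import numpy as np
import cv2


def estimate_pose(points_3d, points_2d, image_size, coarse_focal):
    """points_3d: (N, 3) map points; points_2d: (N, 2) query image points.

    Hypothetical helper: rejects mismatched 2D-3D pairs with P3P + RANSAC
    and returns the camera position in the map frame.
    """
    w, h = image_size
    # Intrinsic matrix assembled from the coarse focal length and image centre.
    K = np.array([[coarse_focal, 0.0, w / 2.0],
                  [0.0, coarse_focal, h / 2.0],
                  [0.0, 0.0, 1.0]], dtype=np.float64)

    # RANSAC with a P3P minimal solver removes outlier correspondences.
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        points_3d.astype(np.float64), points_2d.astype(np.float64),
        K, None, flags=cv2.SOLVEPNP_P3P,
        reprojectionError=8.0, iterationsCount=1000)
    if not ok:
        return None

    R, _ = cv2.Rodrigues(rvec)              # rotation matrix from Rodrigues vector
    camera_position = -R.T @ tvec.ravel()   # camera centre in map coordinates
    return camera_position, R, inliers
```

In practice, the inlier set returned by the RANSAC step would then feed a refinement that also estimates the focal length, as the paper's P4P-with-unknown-focal-length stage does.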