Brata Komang Candra, Funabiki Nobuo, Panduman Yohanes Yohanie Fridelin, Fajrianti Evianita Dewi
Graduate School of Natural Science and Technology, Okayama University, Okayama 700-8530, Japan.
Department of Informatics Engineering, Universitas Brawijaya, Malang 65145, Indonesia.
Sensors (Basel). 2024 Feb 9;24(4):1161. doi: 10.3390/s24041161.
Outdoor location-based augmented reality (LAR) applications require precise positioning for the seamless integration of virtual content into immersive experiences. However, common outdoor LAR solutions rely on traditional smartphone sensor fusion methods, such as accelerometers and compasses, which often lack the accuracy needed to align AR content precisely. In this paper, we introduce an innovative approach to enhance positioning anchor precision in outdoor environments. We leverage computer vision technology, in combination with innovative cloud-based methodologies, and harness the extensive visual reference database of Google Street View to address this accuracy limitation. For the evaluation, 10 geographic locations were used as anchor point coordinates in the experiments. We comprehensively compared the accuracy of our approach with that of a common sensor fusion LAR solution, covering both accuracy benchmarking and running-load performance testing. The results demonstrate substantial improvements in overall positioning accuracy, compared to conventional GPS-based approaches, when aligning AR anchor content with the real world.
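The accuracy benchmarking mentioned above amounts to measuring how far each estimated anchor coordinate lands from its surveyed ground-truth location. The short sketch below is not taken from the paper; it only illustrates one common way to compute such positioning errors, using the haversine great-circle distance, and the sample coordinates are hypothetical placeholders.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two WGS-84 coordinates."""
    R = 6371000.0  # mean Earth radius in meters
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * R * asin(sqrt(a))

# Hypothetical (ground-truth, estimated) anchor coordinates for a few test points.
samples = [
    ((34.6890, 133.9220), (34.6891, 133.9223)),
    ((34.6872, 133.9205), (34.6874, 133.9201)),
]

errors = [haversine_m(gt[0], gt[1], est[0], est[1]) for gt, est in samples]
print(f"mean positioning error: {sum(errors) / len(errors):.2f} m")
```

In an evaluation like the one described, the same error metric would be computed per anchor location for both the proposed method and the GPS/sensor-fusion baseline, then averaged across the test sites.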