Ferrão José, Dias Paulo, Santos Beatriz Sousa, Oliveira Miguel
Department of Electronics, Telecommunications, and Informatics (DETI), University of Aveiro, 3810-193 Aveiro, Portugal.
Intelligent Systems Associate Laboratory (LASI), Institute of Electronics and Informatics Engineering of Aveiro (IEETA), University of Aveiro, 3810-193 Aveiro, Portugal.
J Imaging. 2023 Mar 8;9(3):63. doi: 10.3390/jimaging9030063.
This work presents a novel framework for web-based, environment-aware rendering and interaction in augmented reality, built on WebXR and three.js. It aims to accelerate the development of device-agnostic Augmented Reality (AR) applications. The solution allows for realistic rendering of 3D elements, handles geometry occlusion, casts shadows of virtual objects onto real surfaces, and provides physics-based interaction with real-world objects. Unlike most existing state-of-the-art systems, which are built to run on a specific hardware configuration, the proposed solution targets the web environment and is designed to work on a wide range of devices and configurations. Our solution can use monocular camera setups with depth data estimated by deep neural networks or, when available, higher-quality depth sensors (e.g., LIDAR, structured light) that provide a more accurate perception of the environment. To ensure consistency in the rendering of the virtual scene, a physically based rendering pipeline is used in which physically correct attributes are associated with each 3D object; combined with lighting information captured by the device, this enables the rendering of AR content that matches the environment illumination. All these concepts are integrated and optimized into a pipeline capable of providing a fluid user experience even on mid-range devices. The solution is distributed as an open-source library that can be integrated into existing and new web-based AR projects. The proposed framework was evaluated and compared with two state-of-the-art alternatives in terms of performance and visual features.
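As context for the WebXR/three.js stack the abstract refers to, the sketch below shows how a generic immersive AR session with optional depth sensing and light estimation can be requested through three.js. It is not the authors' library: the session options, placeholder scene content, and material parameters are illustrative assumptions; only the WebXR feature identifiers ('depth-sensing', 'light-estimation', 'hit-test') and the three.js calls are standard.

```ts
// Minimal sketch of a WebXR + three.js AR setup (assumed example, not the paper's library).
import * as THREE from 'three';
import { ARButton } from 'three/examples/jsm/webxr/ARButton.js';

const renderer = new THREE.WebGLRenderer({ antialias: true, alpha: true });
renderer.setSize(window.innerWidth, window.innerHeight);
renderer.xr.enabled = true;                      // let three.js drive the WebXR session
document.body.appendChild(renderer.domElement);

// Request an immersive AR session; depth sensing and light estimation are listed as
// optional WebXR features, so devices without them can still start the session.
document.body.appendChild(
  ARButton.createButton(renderer, {
    optionalFeatures: ['depth-sensing', 'light-estimation', 'hit-test'],
    depthSensing: {
      usagePreference: ['cpu-optimized'],
      dataFormatPreference: ['luminance-alpha'],
    },
  })
);

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(
  70, window.innerWidth / window.innerHeight, 0.01, 20
);

// Placeholder lighting; an environment-aware setup would instead feed estimated
// scene lighting (e.g., via XREstimatedLight) into the physically based materials.
scene.add(new THREE.HemisphereLight(0xffffff, 0x444444, 1));

// A physically based material so the virtual object can react to scene lighting.
const cube = new THREE.Mesh(
  new THREE.BoxGeometry(0.1, 0.1, 0.1),
  new THREE.MeshStandardMaterial({ color: 0x6699ff, roughness: 0.4, metalness: 0.1 })
);
cube.position.set(0, 0, -0.5);                   // half a metre in front of the viewer
scene.add(cube);

renderer.setAnimationLoop(() => renderer.render(scene, camera));
```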