Yin Xiaoting, Shi Hao, Bao Yuhan, Bing Zhenshan, Liao Yiyi, Yang Kailun, Wang Kaiwei
Appl Opt. 2025 May 10;64(14):3897-3908. doi: 10.1364/AO.557565.
Achieving 3D reconstruction from images captured under optimal conditions has been extensively studied in the vision and imaging fields. In real-world scenarios, however, challenges such as motion blur and insufficient illumination often prevent standard frame-based cameras from delivering high-quality images. To address these limitations, we incorporate a transmittance adjustment device at the hardware level, enabling event cameras to capture both motion and exposure events for diverse 3D reconstruction scenarios. Motion events (triggered by camera or object movement) are collected in fast-motion scenarios while the device is inactive, whereas exposure events (generated through controlled camera exposure) are captured during slower motion to reconstruct grayscale images for high-quality training and optimization of event-based 3D Gaussian splatting (3DGS). Our framework supports three modes: high-quality reconstruction using exposure events, fast reconstruction relying on motion events, and a balanced hybrid mode that optimizes first with exposure events and then with high-speed motion events. On the EventNeRF dataset, we demonstrate that exposure events significantly improve fine-detail reconstruction compared with motion events and outperform frame-based cameras under challenging conditions such as low illumination and overexposure. Furthermore, we introduce EME-3D, a real-world 3D dataset comprising exposure events, motion events, camera calibration parameters, and sparse point clouds. Our method achieves faster, higher-quality reconstruction than event-based NeRF and is more cost-effective than methods that combine event and RGB data. By combining motion and exposure events, our E-3DGS framework sets a new benchmark for event-based 3D reconstruction, offering robust performance in challenging conditions and lower hardware demands. The source code and dataset are available on GitHub.
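The exposure-event pathway described in the abstract rests on a standard idea: events recorded during a controlled exposure can be integrated into a grayscale image that then supervises 3DGS training. The sketch below illustrates that integration step in Python under stated assumptions; the event array layout (x, y, timestamp, polarity), the function name `integrate_exposure_events`, and the contrast-threshold value are illustrative choices, not the paper's actual pipeline.

```python
import numpy as np

def integrate_exposure_events(events, height, width, contrast_threshold=0.2):
    """Integrate a window of exposure events into a grayscale image.

    Minimal sketch, assuming `events` is an (N, 4) float array of
    (x, y, timestamp, polarity) with polarity in {-1, +1}. Each event
    contributes one signed contrast step to the log-intensity at its
    pixel; exponentiating and normalizing yields a grayscale frame
    usable as a training target.
    """
    log_intensity = np.zeros((height, width), dtype=np.float64)
    xs = events[:, 0].astype(np.int64)
    ys = events[:, 1].astype(np.int64)
    pols = events[:, 3]
    # Accumulate signed contrast steps per pixel (unbuffered, so
    # repeated events at the same pixel all count).
    np.add.at(log_intensity, (ys, xs), pols * contrast_threshold)
    intensity = np.exp(log_intensity)
    # Normalize to [0, 1] for use as a reconstruction target.
    intensity -= intensity.min()
    if intensity.max() > 0:
        intensity /= intensity.max()
    return intensity

# Example: five synthetic events on a 4x4 sensor (hypothetical data).
ev = np.array([[1, 1, 0.00, +1],
               [1, 1, 0.01, +1],
               [2, 3, 0.02, -1],
               [0, 0, 0.03, +1],
               [3, 2, 0.04, -1]], dtype=np.float64)
img = integrate_exposure_events(ev, height=4, width=4)
```

A direct per-pixel sum like this is order-independent and cheap, which fits the abstract's framing of exposure events as a slow-motion, high-quality supervision signal; any motion compensation or denoising the actual method applies would sit on top of a step of this kind.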