Sangay Tenzin, Alexander Rassau, Douglas Chai
School of Engineering, Edith Cowan University, Perth, WA 6027, Australia.
Biomimetics (Basel). 2024 Jul 20;9(7):444. doi: 10.3390/biomimetics9070444.
Simultaneous Localization and Mapping (SLAM) is a crucial function for most autonomous systems, allowing them to both navigate through and create maps of unfamiliar surroundings. Traditional Visual SLAM, also commonly known as VSLAM, relies on frame-based cameras and structured processing pipelines, which face challenges in dynamic or low-light environments. However, recent advancements in event camera technology and neuromorphic processing offer promising opportunities to overcome these limitations. Event cameras, inspired by biological vision systems, capture scenes asynchronously, consuming minimal power while providing very high temporal resolution. Neuromorphic processors, which are designed to mimic the parallel processing capabilities of the human brain, offer efficient computation for real-time processing of event-based data streams. This paper provides a comprehensive overview of recent research efforts to integrate event cameras and neuromorphic processors into VSLAM systems. It discusses the principles behind event cameras and neuromorphic processors, highlighting their advantages over traditional sensing and processing methods. Furthermore, an in-depth survey of state-of-the-art approaches in event-based SLAM is presented, covering feature extraction, motion estimation, and map reconstruction techniques. Additionally, the integration of event cameras with neuromorphic processors is explored, focusing on their synergistic benefits in terms of energy efficiency, robustness, and real-time performance. The paper also discusses the challenges and open research questions in this emerging field, such as sensor calibration, data fusion, and algorithmic development. Finally, the potential applications and future directions for event-based SLAM systems are outlined, ranging from robotics and autonomous vehicles to augmented reality.
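The abstract describes event cameras as emitting asynchronous, per-pixel brightness-change events rather than full frames. The sketch below is purely illustrative and not taken from the paper: it emulates that output by thresholding log-intensity changes between two grayscale frames, with the contrast threshold `C` and the frame-based emulation being simplifying assumptions.

```python
# Illustrative sketch only: emulate event-camera output (x, y, t, polarity) from two frames.
# Real event sensors (e.g. DVS) detect log-intensity changes per pixel, asynchronously and
# with microsecond timestamps; the threshold C and shared timestamp here are assumptions.
import numpy as np

def frames_to_events(prev_frame, curr_frame, t_prev, t_curr, C=0.2):
    """Emit an event wherever the log-intensity change between frames exceeds C."""
    eps = 1e-6  # avoid log(0)
    d_log = np.log(curr_frame + eps) - np.log(prev_frame + eps)
    ys, xs = np.nonzero(np.abs(d_log) >= C)
    polarities = np.sign(d_log[ys, xs])          # +1 brighter, -1 darker
    t = np.full(xs.shape, 0.5 * (t_prev + t_curr))  # shared midpoint timestamp (simplification)
    return np.stack([xs, ys, t, polarities], axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    f0 = rng.uniform(0.1, 1.0, size=(4, 6))  # previous grayscale frame
    f1 = f0.copy()
    f1[1, 2] *= 1.5                           # one pixel gets brighter
    f1[3, 4] *= 0.6                           # one pixel gets darker
    print(frames_to_events(f0, f1, t_prev=0.0, t_curr=0.001))  # rows of (x, y, t, polarity)
```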