Ren Hongwei, Zhou Yue, Zhu Jiadong, Lin Xiaopeng, Fu Haotian, Huang Yulong, Fang Yuetong, Ma Fei, Yu Hao, Cheng Bojun
IEEE Trans Pattern Anal Mach Intell. 2025 Aug;47(8):6228-6241. doi: 10.1109/TPAMI.2025.3556561.
Event cameras draw inspiration from biological systems, offering low latency and high dynamic range while consuming minimal power. Most current approaches to processing the Event Cloud convert it into frame-based representations, which neglects the sparsity of events, loses fine-grained temporal information, and increases the computational burden. In contrast, the Point Cloud is a popular representation for processing 3-dimensional data and offers an alternative way to exploit local and global spatial features. Nevertheless, previous point-based methods show unsatisfactory performance compared with frame-based methods when dealing with spatio-temporal event streams. To bridge this gap, we propose EventMamba, an efficient and effective framework based on the Point Cloud representation that rethinks the distinction between Event Cloud and Point Cloud, emphasizing vital temporal information. The Event Cloud is fed into a hierarchical structure with staged modules that process both implicit and explicit temporal features. Specifically, we redesign the global extractor to enhance explicit temporal extraction over long event sequences using temporal aggregation and a State Space Model (SSM)-based Mamba. Our model consumes minimal computational resources in the experiments, yet still achieves state-of-the-art point-based performance on six action recognition datasets of different scales. It even outperforms all frame-based methods on both the Camera Pose Relocalization (CPR) and eye-tracking regression tasks.
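To make the two ingredients of the abstract concrete, the following is a minimal sketch (not the authors' implementation; all function names and parameters are illustrative assumptions) of (a) treating an event stream as a point cloud by using the normalized timestamp as a third spatial-like coordinate, and (b) the plain linear state-space recurrence that underlies Mamba-style global temporal extraction:

```python
import numpy as np

def events_to_point_cloud(events, t_scale=1.0):
    """Treat an event stream as a 3-D point cloud.

    events: (N, 4) array of (x, y, t, polarity). The timestamp is
    min-max normalized and rescaled to act as a third spatial-like
    coordinate, which is how point-based methods retain the
    fine-grained temporal information that frame conversion discards.
    """
    xy = events[:, :2].astype(np.float64)
    t = events[:, 2].astype(np.float64)
    t = (t - t.min()) / max(t.max() - t.min(), 1e-9) * t_scale
    return np.column_stack([xy, t])  # (N, 3); polarity dropped in this sketch

def ssm_scan(u, A, B, C):
    """Minimal linear state-space recurrence over a feature sequence.

    h_k = A @ h_{k-1} + B @ u_k ;  y_k = C @ h_k

    This is only the basic SSM recursion; Mamba's selective scan makes
    A, B, C input-dependent and uses a hardware-aware parallel scan.
    """
    h = np.zeros(A.shape[0])
    ys = []
    for u_k in u:           # sequential scan over time steps
        h = A @ h + B @ u_k
        ys.append(C @ h)
    return np.stack(ys)     # (T, d_out)
```

Because the recurrence carries a hidden state across all time steps, its receptive field spans the entire event sequence at linear cost, which is the property exploited for explicit temporal extraction over long event streams.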