Tang Sichao, Zhao Yuchen, Lv Hengyi, Sun Ming, Feng Yang, Zhang Zeshu
Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China.
University of Chinese Academy of Sciences, Beijing 100049, China.
Sensors (Basel). 2024 Nov 21;24(23):7430. doi: 10.3390/s24237430.
Event cameras, as bio-inspired visual sensors, offer significant advantages for visual tasks through their high dynamic range and high temporal resolution. These capabilities enable efficient and reliable motion estimation even in highly complex scenes. However, these advantages come with trade-offs. For instance, current event-based vision sensors have low spatial resolution, and the event representation process can introduce varying degrees of data redundancy and incompleteness. Additionally, owing to the inherent characteristics of event-stream data, the raw stream cannot be used directly; pre-processing steps such as slicing and frame compression are required. Various pre-processing algorithms already exist for slicing and compressing event streams. However, these methods fall short when multiple subjects move at different, time-varying speeds within the event stream, potentially exacerbating the inherent deficiencies of the event information flow. To address this longstanding issue, we propose a novel and efficient Asynchronous Spike Dynamic Metric and Slicing algorithm (ASDMS). ASDMS adaptively segments the event stream into fragments of varying lengths based on the spatiotemporal structure and polarity attributes of the events. Moreover, we introduce a new Adaptive Spatiotemporal Subject Surface Compensation algorithm (ASSSC). ASSSC compensates for missing motion information in the event stream and removes redundant information, achieving better performance in event-stream segmentation than existing event representation algorithms. Furthermore, when the processed results are compressed into frame images, imaging quality improves significantly. Finally, we propose a new evaluation metric, the Actual Performance Efficiency Discrepancy (APED), which combines the actual distortion rate with event information entropy to quantify and compare the effectiveness of our method against existing event representation methods. The experimental results demonstrate that our event representation method outperforms existing approaches and resolves the shortcomings of current methods in handling event streams with multiple entities moving simultaneously at varying speeds.
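The abstract does not specify the ASDMS metric itself, but the slicing idea it describes, cutting the stream where the spatiotemporal and polarity statistics of the events change rather than at fixed intervals, can be illustrated with a minimal sketch. Everything below (the per-window statistic, the threshold, the assumed 346x260 sensor size) is a hypothetical stand-in for illustration, not the paper's algorithm.

```python
# Minimal sketch of adaptive event-stream slicing in the spirit of ASDMS.
# The trigger criterion (change in polarity balance and spatial spread per
# scan window) is an assumption; the paper's asynchronous spike dynamic
# metric is not given in the abstract.
import numpy as np

def adaptive_slice(events, base_window=1e4, threshold=0.3):
    """Split a time-sorted event stream into variable-length slices.

    events: structured array with fields 't' (us), 'x', 'y', 'p' (+1/-1).
    base_window: granularity (us) at which the stream is scanned.
    threshold: relative change in the window statistic that closes a slice.
    """
    slices, start = [], 0
    prev_stat = None
    t0, t_end = events['t'][0], events['t'][-1]
    edges = np.arange(t0, t_end + base_window, base_window)
    for lo, hi in zip(edges[:-1], edges[1:]):
        win = events[(events['t'] >= lo) & (events['t'] < hi)]
        if win.size == 0:
            continue
        # Window statistic: polarity balance plus normalized spatial spread.
        stat = np.array([
            win['p'].mean(),
            win['x'].std() / 346.0,  # assumed sensor width
            win['y'].std() / 260.0,  # assumed sensor height
        ])
        if prev_stat is not None and np.abs(stat - prev_stat).sum() > threshold:
            # Motion statistics changed: close the current slice at lo.
            end = np.searchsorted(events['t'], lo)
            slices.append(events[start:end])
            start = end
        prev_stat = stat
    slices.append(events[start:])
    return slices

# Usage on synthetic events:
rng = np.random.default_rng(0)
n = 5000
ev = np.zeros(n, dtype=[('t', 'f8'), ('x', 'f4'), ('y', 'f4'), ('p', 'i1')])
ev['t'] = np.sort(rng.uniform(0, 1e5, n))
ev['x'] = rng.uniform(0, 346, n)
ev['y'] = rng.uniform(0, 260, n)
ev['p'] = rng.choice([-1, 1], n)
print([len(s) for s in adaptive_slice(ev)])
```

A fixed-rate slicer would cut this stream into equal windows regardless of content; the sketch instead lets slice length follow changes in the event statistics, which is the behavior the abstract attributes to ASDMS.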
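The abstract states only that APED combines an actual distortion rate with event information entropy; it does not define either term or how they are combined. The sketch below is therefore a hedged illustration: the histogram-based entropy, the event-count distortion rate, and the weighted combination are all assumptions, not the paper's definition.

```python
# Hedged sketch of an APED-style score (lower is better): distortion is
# penalized and retained information entropy is rewarded. All formulas here
# are illustrative assumptions.
import numpy as np

def event_entropy(events, shape=(260, 346)):
    """Shannon entropy (bits) of the per-pixel event-count distribution."""
    counts, _, _ = np.histogram2d(
        events['y'], events['x'],
        bins=shape, range=[(0, shape[0]), (0, shape[1])])
    p = counts.ravel() / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def aped_score(original, represented, alpha=0.5):
    """Combine an assumed distortion rate with relative entropy loss."""
    distortion = 1.0 - represented.size / original.size  # assumed distortion rate
    h_orig = event_entropy(original)
    h_repr = event_entropy(represented)
    entropy_loss = (h_orig - h_repr) / max(h_orig, 1e-9)
    return alpha * distortion + (1 - alpha) * entropy_loss
```

With a metric of this shape, a representation that discards many events or flattens the spatial event distribution scores worse, which matches the abstract's stated goal of jointly quantifying distortion and information content.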