Reverter Valeiras David, Clady Xavier, Ieng Sio-Hoi, Benosman Ryad
IEEE Trans Neural Netw Learn Syst. 2018 Sep 12. doi: 10.1109/TNNLS.2018.2807983.
This paper introduces an event-based, luminance-free algorithm for line and segment detection from the output of asynchronous event-based neuromorphic retinas. These recent biomimetic vision sensors are composed of autonomous pixels, each of them asynchronously generating visual events that encode relative changes in the pixel's illumination at high temporal resolution. This frame-free approach results in increased energy efficiency and real-time operation, making these sensors especially suitable for applications such as autonomous robotics. The proposed algorithm is based on an iterative event-based weighted least-squares fitting, and it is consequently well suited to the high temporal resolution and asynchronous acquisition of neuromorphic cameras: the parameters of a current line are updated for each event attributed (i.e., spatio-temporally close) to it, while the contribution of older events is implicitly forgotten according to a speed-tuned exponentially decaying function. A detection occurs if a measure of activity, i.e., an implicit measure of the number of contributing events using the same decay function, exceeds a given threshold. The speed-tuned decay function is based on a measure of the apparent motion, i.e., the optical flow computed around each event. The latter ensures that the algorithm behaves independently of the edges' dynamics. Line segments are then extracted from the lines, allowing for the tracking of the corresponding endpoints. We provide experiments showing the accuracy of our algorithm and study the influence of the apparent velocity and relative orientation of the observed edges. Finally, evaluations of its computational efficiency show that this algorithm can be envisioned for high-speed applications, such as vision-based robotic navigation.
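The event-driven update described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it maintains decayed weighted sums for a least-squares line fit, scaling them by an exponential factor at each incoming event so that older contributions are implicitly forgotten, and uses the decayed sum of weights as the activity measure. The paper tunes the decay time constant from the optical flow around each event; here `tau` is a fixed constant, a simplifying assumption, and the class and parameter names are hypothetical.

```python
import math

class EventLineFitter:
    """Sketch of event-based weighted least-squares line fitting with
    exponential forgetting (fixed decay constant, unlike the paper's
    speed-tuned decay)."""

    def __init__(self, tau=0.1, activity_threshold=5.0):
        self.tau = tau                      # decay time constant (s)
        self.threshold = activity_threshold # activity needed for detection
        self.t_last = None
        # Decayed sums of weights and first/second moments.
        self.Sw = self.Sx = self.Sy = self.Sxx = self.Sxy = 0.0

    def update(self, x, y, t):
        """Process one event (pixel coordinates x, y at timestamp t)."""
        # Implicitly forget older events: scale all sums by the decay factor.
        if self.t_last is not None:
            decay = math.exp(-(t - self.t_last) / self.tau)
            self.Sw *= decay; self.Sx *= decay; self.Sy *= decay
            self.Sxx *= decay; self.Sxy *= decay
        self.t_last = t
        # Add the new event with unit weight.
        self.Sw += 1.0; self.Sx += x; self.Sy += y
        self.Sxx += x * x; self.Sxy += x * y

    def line(self):
        """Current weighted least-squares solution y = a*x + b, or None."""
        denom = self.Sw * self.Sxx - self.Sx ** 2
        if abs(denom) < 1e-12:
            return None
        a = (self.Sw * self.Sxy - self.Sx * self.Sy) / denom
        b = (self.Sy - a * self.Sx) / self.Sw
        return a, b

    def detected(self):
        """Activity is the decayed event count; detect when it exceeds
        the threshold."""
        return self.Sw > self.threshold
```

Feeding events that lie along an edge (e.g. points on y = 2x + 1 at microsecond-scale intervals) keeps the fitted parameters current while the decay suppresses stale events, mirroring the per-event update described above.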