Zhang Yisa, Lv Hengyi, Zhao Yuchen, Feng Yang, Liu Hailong, Bi Guoling
Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China.
College of Materials Science and Opto-Electronic Technology, University of Chinese Academy of Sciences, Beijing 100049, China.
Micromachines (Basel). 2023 Jan 13;14(1):203. doi: 10.3390/mi14010203.
The advantages of event cameras, such as low power consumption, high dynamic range, and low data redundancy, allow them to excel in extreme environments where traditional image sensors fall short, especially when capturing high-speed moving targets and under extreme lighting conditions. Optical flow reflects a target's motion, and detailed motion information can be obtained from the optical flow computed from event camera data. However, existing neural network methods for event-camera optical flow prediction suffer from heavy computation and high energy consumption when implemented in hardware. Spiking neural networks have spatiotemporal coding characteristics, making them naturally compatible with the spatiotemporal data of an event camera. Moreover, their sparse coding allows them to run with ultra-low power consumption on neuromorphic hardware. However, because of algorithmic and training complexity, spiking neural networks have not yet been applied to event-camera optical flow prediction. To address this, this paper proposes an end-to-end spiking neural network that predicts optical flow from the discrete spatiotemporal event stream of an event camera. The network is trained with the spatiotemporal backpropagation method in a self-supervised manner, fully exploiting the spatiotemporal characteristics of the event camera while improving network performance. Experimental results on a public dataset show that the proposed method matches the best existing methods in optical flow prediction accuracy while reducing power consumption by more than 99% compared with existing algorithms, laying the groundwork for future low-power hardware implementation of optical flow prediction for event cameras.
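To make the training approach concrete, the sketch below illustrates the general idea behind spatiotemporal backpropagation for a spiking layer: a leaky integrate-and-fire neuron is unrolled over time, the non-differentiable spike is replaced by a surrogate gradient in the backward pass, and gradients flow through both the spatial (layer) and temporal (time-step) dimensions. This is a minimal illustrative example, not the authors' network; all layer sizes, constants, and names (e.g. LIFConv, decay, threshold) are assumptions.

```python
# Minimal sketch of a spiking layer trained by spatiotemporal backpropagation
# with a surrogate gradient. Illustrative only; not the paper's architecture.
import torch
import torch.nn as nn


class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass; smooth surrogate in the backward pass."""

    @staticmethod
    def forward(ctx, membrane_potential):
        ctx.save_for_backward(membrane_potential)
        return (membrane_potential > 0.0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (membrane_potential,) = ctx.saved_tensors
        # Rectangular surrogate gradient around the firing threshold.
        surrogate = (membrane_potential.abs() < 0.5).float()
        return grad_output * surrogate


class LIFConv(nn.Module):
    """Convolutional leaky integrate-and-fire layer unrolled over time."""

    def __init__(self, in_ch, out_ch, decay=0.8, threshold=1.0):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.decay, self.threshold = decay, threshold

    def forward(self, spikes):                  # spikes: (T, B, C, H, W)
        mem, outputs = None, []
        for x_t in spikes:                      # iterate over time steps
            current = self.conv(x_t)
            mem = current if mem is None else self.decay * mem + current
            out = SurrogateSpike.apply(mem - self.threshold)
            mem = mem * (1.0 - out)             # reset membrane where a spike fired
            outputs.append(out)
        return torch.stack(outputs)             # (T, B, out_ch, H, W)


# Toy usage: events binned into a spike tensor of shape (T, B, 2, H, W),
# where the two channels hold positive and negative polarity events.
events = (torch.rand(5, 1, 2, 64, 64) < 0.05).float()
layer = LIFConv(in_ch=2, out_ch=16)
spike_out = layer(events)
print(spike_out.shape)                          # torch.Size([5, 1, 16, 64, 64])
```

In a full optical flow pipeline of this kind, several such layers would feed a decoder that outputs a flow field, and a self-supervised loss (e.g. warping-based consistency on the event data) would be backpropagated through every time step and layer.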