
Event-Based Optical Flow Estimation with Spatio-Temporal Backpropagation Trained Spiking Neural Network.

Authors

Zhang Yisa, Lv Hengyi, Zhao Yuchen, Feng Yang, Liu Hailong, Bi Guoling

Affiliations

Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China.

College of Materials Science and Opto-Electronic Technology, University of Chinese Academy of Sciences, Beijing 100049, China.

Publication

Micromachines (Basel). 2023 Jan 13;14(1):203. doi: 10.3390/mi14010203.

DOI: 10.3390/mi14010203
PMID: 36677264
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9867051/
Abstract

Event cameras offer low power consumption, a large dynamic range, and low data redundancy, which let them excel in extreme environments where traditional image sensors fall short, especially in capturing high-speed moving targets and under extreme lighting conditions. Optical flow reflects a target's motion, and an event camera's optical flow can reveal the target's detailed movement. However, existing neural network methods for event-camera optical flow prediction suffer from extensive computation and high energy consumption in hardware implementations. Spiking neural networks have spatiotemporal coding characteristics, making them compatible with the spatiotemporal data of an event camera, and their sparse coding lets them run with ultra-low power consumption on neuromorphic hardware. However, because of algorithmic and training complexity, spiking neural networks have not previously been applied to event-camera optical flow prediction. This paper therefore proposes an end-to-end spiking neural network that predicts optical flow from the event camera's discrete spatiotemporal data stream. The network is trained with the spatio-temporal backpropagation method in a self-supervised way, which fully exploits the spatiotemporal characteristics of the event camera while improving network performance. Experimental results on a public dataset show that the proposed method matches the best existing methods in optical flow prediction accuracy while consuming over 99% less power than the existing algorithm, laying the groundwork for future low-power hardware implementation of optical flow prediction for event cameras.
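The "discrete spatiotemporal data stream" the abstract refers to can be made concrete with a small sketch: each event is a tuple (x, y, t, polarity), and a common way to feed such a stream to a network is to accumulate events into per-pixel time bins. This is an illustrative representation only, not the paper's exact encoding; the function name, field order, and bin count are assumptions.

```python
# Hypothetical sketch: bin an event-camera stream into T time slices per
# pixel, accumulating signed polarity, to form a discrete spatio-temporal
# grid an SNN could consume.

def events_to_bins(events, width, height, t_bins, t_start, t_end):
    """events: iterable of (x, y, t, polarity) with polarity in {-1, +1}.

    Returns a nested list grid[bin][y][x] of accumulated signed polarity.
    """
    grid = [[[0 for _ in range(width)] for _ in range(height)]
            for _ in range(t_bins)]
    span = (t_end - t_start) / t_bins
    for x, y, t, p in events:
        b = min(int((t - t_start) / span), t_bins - 1)  # clamp the last edge
        grid[b][y][x] += p
    return grid

# Three events on a 2x1 sensor, split into two time bins over [0, 1).
events = [(0, 0, 0.0, +1), (1, 0, 0.4, -1), (1, 0, 0.9, +1)]
grid = events_to_bins(events, width=2, height=1, t_bins=2,
                      t_start=0.0, t_end=1.0)
# grid[0] holds the first two events; grid[1] holds the late event.
```

Real pipelines typically use a dense tensor library and may weight events by their position inside a bin, but the binning logic is the same.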

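The spatio-temporal backpropagation (STBP) training mentioned above hinges on making the non-differentiable spike function trainable: the network unrolls leaky integrate-and-fire (LIF) dynamics over time and substitutes a surrogate derivative for the Heaviside spike during the backward pass. A minimal sketch follows, assuming a hard-reset LIF neuron and the rectangular surrogate commonly used with STBP; the constants and function names are illustrative, not taken from the paper.

```python
# Minimal LIF neuron unrolled over discrete timesteps, plus the rectangular
# surrogate gradient that STBP-style training uses in place of the
# non-differentiable spike function. Parameter values are assumptions.

TAU = 0.5    # membrane decay factor (assumed)
V_TH = 1.0   # firing threshold (assumed)

def lif_forward(inputs):
    """Run one LIF neuron over a list of input currents.

    Returns (spike train, membrane potential trace). The membrane leaks by
    TAU each step and is hard-reset to 0 after a spike.
    """
    v, spikes, trace = 0.0, [], []
    for x in inputs:
        prev_spike = spikes[-1] if spikes else 0.0
        v = TAU * v * (1.0 - prev_spike) + x       # leak + hard reset
        s = 1.0 if v >= V_TH else 0.0              # Heaviside spike
        spikes.append(s)
        trace.append(v)
    return spikes, trace

def surrogate_grad(v, width=1.0):
    """Rectangular surrogate for d(spike)/d(v): constant inside a window
    around the threshold, zero elsewhere. This stand-in derivative is what
    lets backpropagation flow through the spike nonlinearity."""
    return (1.0 / width) if abs(v - V_TH) < width / 2 else 0.0

# Constant drive: the membrane charges for two steps, fires on the third,
# then resets.
spikes, trace = lif_forward([0.6, 0.6, 0.6, 0.0])
```

Backpropagating through time with `surrogate_grad` applied at each timestep is the "spatio-temporal" part: gradients flow both across layers (space) and across timesteps (time).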

Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b1e2/9867051/40a40f32098f/micromachines-14-00203-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b1e2/9867051/fe80baa0eeac/micromachines-14-00203-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b1e2/9867051/1b4c556b938a/micromachines-14-00203-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b1e2/9867051/c558d55e3529/micromachines-14-00203-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b1e2/9867051/45721e6cc456/micromachines-14-00203-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b1e2/9867051/306e48a4162a/micromachines-14-00203-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b1e2/9867051/779fd2906e11/micromachines-14-00203-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b1e2/9867051/bb49e384eabc/micromachines-14-00203-g008.jpg

Similar Articles

1. Event-Based Optical Flow Estimation with Spatio-Temporal Backpropagation Trained Spiking Neural Network. Micromachines (Basel). 2023 Jan 13;14(1):203. doi: 10.3390/mi14010203.
2. Memristor-CMOS Hybrid Circuits Implementing Event-Driven Neural Networks for Dynamic Vision Sensor Camera. Micromachines (Basel). 2024 Mar 22;15(4):426. doi: 10.3390/mi15040426.
3. SSTDP: Supervised Spike Timing Dependent Plasticity for Efficient Spiking Neural Network Training. Front Neurosci. 2021 Nov 4;15:756876. doi: 10.3389/fnins.2021.756876. eCollection 2021.
4. Braille letter reading: A benchmark for spatio-temporal pattern recognition on neuromorphic hardware. Front Neurosci. 2022 Nov 11;16:951164. doi: 10.3389/fnins.2022.951164. eCollection 2022.
5. Optical flow estimation from event-based cameras and spiking neural networks. Front Neurosci. 2023 May 11;17:1160034. doi: 10.3389/fnins.2023.1160034. eCollection 2023.
6. Spike-Train Level Direct Feedback Alignment: Sidestepping Backpropagation for On-Chip Training of Spiking Neural Nets. Front Neurosci. 2020 Mar 13;14:143. doi: 10.3389/fnins.2020.00143. eCollection 2020.
7. Spatio-Temporal Backpropagation for Training High-Performance Spiking Neural Networks. Front Neurosci. 2018 May 23;12:331. doi: 10.3389/fnins.2018.00331. eCollection 2018.
8. Synthesizing Images From Spatio-Temporal Representations Using Spike-Based Backpropagation. Front Neurosci. 2019 Jun 18;13:621. doi: 10.3389/fnins.2019.00621. eCollection 2019.
9. Rectified Linear Postsynaptic Potential Function for Backpropagation in Deep Spiking Neural Networks. IEEE Trans Neural Netw Learn Syst. 2022 May;33(5):1947-1958. doi: 10.1109/TNNLS.2021.3110991. Epub 2022 May 2.
10. Bio-mimetic high-speed target localization with fused frame and event vision for edge application. Front Neurosci. 2022 Nov 25;16:1010302. doi: 10.3389/fnins.2022.1010302. eCollection 2022.

Cited By

1. Energy-Efficient Spiking Segmenter for Frame and Event-Based Images. Biomimetics (Basel). 2023 Aug 10;8(4):356. doi: 10.3390/biomimetics8040356.
2. Optical flow estimation from event-based cameras and spiking neural networks. Front Neurosci. 2023 May 11;17:1160034. doi: 10.3389/fnins.2023.1160034. eCollection 2023.

References

1. Adaptive Slicing Method of the Spatiotemporal Event Stream Obtained from a Dynamic Vision Sensor. Sensors (Basel). 2022 Mar 29;22(7):2614. doi: 10.3390/s22072614.
2. ESPEE: Event-Based Sensor Pose Estimation Using an Extended Kalman Filter. Sensors (Basel). 2021 Nov 25;21(23):7840. doi: 10.3390/s21237840.
3. Event-Based Vision: A Survey. IEEE Trans Pattern Anal Mach Intell. 2022 Jan;44(1):154-180. doi: 10.1109/TPAMI.2020.3008413. Epub 2021 Dec 7.
4. FLGR: Fixed Length Gists Representation Learning for RNN-HMM Hybrid-Based Neuromorphic Continuous Gesture Recognition. Front Neurosci. 2019 Feb 12;13:73. doi: 10.3389/fnins.2019.00073. eCollection 2019.
5. Spatio-Temporal Backpropagation for Training High-Performance Spiking Neural Networks. Front Neurosci. 2018 May 23;12:331. doi: 10.3389/fnins.2018.00331. eCollection 2018.
6. On event-based optical flow detection. Front Neurosci. 2015 Apr 20;9:137. doi: 10.3389/fnins.2015.00137. eCollection 2015.
7. Event-based visual flow. IEEE Trans Neural Netw Learn Syst. 2014 Feb;25(2):407-17. doi: 10.1109/TNNLS.2013.2273537.
8. Asynchronous frameless event-based optical flow. Neural Netw. 2012 Mar;27:32-7. doi: 10.1016/j.neunet.2011.11.001. Epub 2011 Nov 25.