High-efficiency sparse convolution operator for event-based cameras

Authors

Zhang Sen, Zha Fusheng, Wang Xiangji, Li Mantian, Guo Wei, Wang Pengfei, Li Xiaolin, Sun Lining

Affiliations

State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin, China.

Lanzhou University of Technology, Lanzhou, China.

Publication

Front Neurorobot. 2025 Mar 12;19:1537673. doi: 10.3389/fnbot.2025.1537673. eCollection 2025.

DOI: 10.3389/fnbot.2025.1537673
PMID: 40144017
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11936924/
Abstract

Event-based cameras are bio-inspired vision sensors that mimic the sparse and asynchronous activation of the animal retina, offering advantages such as low latency and low computational load in various robotic applications. However, despite their inherent sparsity, most existing visual processing algorithms are optimized for conventional standard cameras and dense images captured from them, resulting in computational redundancy and high latency when applied to event-based cameras. To address this gap, we propose a sparse convolution operator tailored for event-based cameras. By selectively skipping invalid sub-convolutions and efficiently reorganizing valid computations, our operator reduces computational workload by nearly 90% and achieves almost 2× acceleration in processing speed, while maintaining the same accuracy as dense convolution operators. This innovation unlocks the potential of event-based cameras in applications such as autonomous navigation, real-time object tracking, and industrial inspection, enabling low-latency and high-efficiency perception in resource-constrained robotic systems.
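The abstract's central idea — skip sub-convolutions whose receptive field contains no events, and evaluate the kernel only at output positions touched by at least one event — can be sketched as follows. This is a minimal single-channel NumPy illustration of the concept, not the authors' operator; the function names and the event-frame representation are assumptions for clarity.

```python
import numpy as np

def dense_conv2d(frame, kernel):
    """Reference dense convolution (valid padding): every output
    position is computed, including those whose window is all zeros."""
    H, W = frame.shape
    k = kernel.shape[0]
    out = np.zeros((H - k + 1, W - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(frame[i:i + k, j:j + k] * kernel)
    return out

def sparse_conv2d(frame, kernel):
    """Event-driven convolution: enumerate active (event) pixels and
    compute only the output positions whose k x k window covers one.
    All-zero sub-convolutions are skipped entirely, which is where the
    workload reduction on sparse event frames comes from."""
    H, W = frame.shape
    k = kernel.shape[0]
    out = np.zeros((H - k + 1, W - k + 1))
    ys, xs = np.nonzero(frame)          # coordinates of active pixels
    visited = set()                     # output positions already computed
    for y, x in zip(ys, xs):
        # output rows/cols whose window covers event (y, x), clipped to bounds
        for i in range(max(0, y - k + 1), min(H - k + 1, y + 1)):
            for j in range(max(0, x - k + 1), min(W - k + 1, x + 1)):
                if (i, j) not in visited:
                    visited.add((i, j))
                    out[i, j] = np.sum(frame[i:i + k, j:j + k] * kernel)
    return out, len(visited)
```

On a frame with few events, `len(visited)` is far below the total number of output positions while the result matches the dense operator exactly, mirroring the paper's claim of reduced workload at identical accuracy (the real operator additionally reorganizes the valid computations for hardware efficiency, which this sketch does not model).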


Figures (PMC11936924):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6e7a/11936924/e58652274ec7/fnbot-19-1537673-g0001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6e7a/11936924/9aa49c469bb8/fnbot-19-1537673-g0002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6e7a/11936924/3f4a81cef636/fnbot-19-1537673-g0003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6e7a/11936924/ff15334097b8/fnbot-19-1537673-g0004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6e7a/11936924/bf93a50977f0/fnbot-19-1537673-g0005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6e7a/11936924/12b7332bd95e/fnbot-19-1537673-g0006.jpg

Similar articles

1. High-efficiency sparse convolution operator for event-based cameras. Front Neurorobot. 2025 Mar 12;19:1537673. doi: 10.3389/fnbot.2025.1537673. eCollection 2025.
2. Low-latency automotive vision with event cameras. Nature. 2024 May;629(8014):1034-1040. doi: 10.1038/s41586-024-07409-w. Epub 2024 May 29.
3. EVtracker: An Event-Driven Spatiotemporal Method for Dynamic Object Tracking. Sensors (Basel). 2022 Aug 15;22(16):6090. doi: 10.3390/s22166090.
4. An Asynchronous Real-Time Corner Extraction and Tracking Algorithm for Event Camera. Sensors (Basel). 2021 Feb 20;21(4):1475. doi: 10.3390/s21041475.
5. Event-Based, 6-DOF Camera Tracking from Photometric Depth Maps. IEEE Trans Pattern Anal Mach Intell. 2018 Oct;40(10):2402-2412. doi: 10.1109/TPAMI.2017.2769655. Epub 2017 Nov 3.
6. EvAn: Neuromorphic Event-Based Sparse Anomaly Detection. Front Neurosci. 2021 Jul 29;15:699003. doi: 10.3389/fnins.2021.699003. eCollection 2021.
7. Bio-mimetic high-speed target localization with fused frame and event vision for edge application. Front Neurosci. 2022 Nov 25;16:1010302. doi: 10.3389/fnins.2022.1010302. eCollection 2022.
8. EX-Gaze: High-Frequency and Low-Latency Gaze Tracking with Hybrid Event-Frame Cameras for On-Device Extended Reality. IEEE Trans Vis Comput Graph. 2025 May;31(5):2299-2309. doi: 10.1109/TVCG.2025.3549565. Epub 2025 Apr 25.
9. A recurrent YOLOv8-based framework for event-based object detection. Front Neurosci. 2025 Jan 22;18:1477979. doi: 10.3389/fnins.2024.1477979. eCollection 2024.
10. Low-Latency Line Tracking Using Event-Based Dynamic Vision Sensors. Front Neurorobot. 2018 Feb 19;12:4. doi: 10.3389/fnbot.2018.00004. eCollection 2018.
