A Saccade Based Framework for Real-Time Motion Segmentation Using Event Based Vision Sensors.

Authors

Mishra Abhishek, Ghosh Rohan, Principe Jose C, Thakor Nitish V, Kukreja Sunil L

Affiliations

Singapore Institute for Neurotechnology, National University of Singapore, Singapore, Singapore.

Department of Electrical and Computer Engineering, University of Florida, Gainesville, FL, USA.

Publication Information

Front Neurosci. 2017 Mar 3;11:83. doi: 10.3389/fnins.2017.00083. eCollection 2017.

DOI: 10.3389/fnins.2017.00083
PMID: 28316563
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC5334512/
Abstract

Motion segmentation is a critical pre-processing step for autonomous robotic systems to facilitate tracking of moving objects in cluttered environments. Event based sensors are low power analog devices that represent a scene by means of asynchronous information updates of only the dynamic details at high temporal resolution and, hence, require significantly less calculations. However, motion segmentation using spatiotemporal data is a challenging task due to data asynchrony. Prior approaches for object tracking using neuromorphic sensors perform well while the sensor is static or a known model of the object to be followed is available. To address these limitations, in this paper we develop a technique for generalized motion segmentation based on spatial statistics across time frames. First, we create micromotion on the platform to facilitate the separation of static and dynamic elements of a scene, inspired by human saccadic eye movements. Second, we introduce the concept of as a methodology to partition spatio-temporal event groups, which facilitates computation of scene statistics and characterize objects in it. Experimental results show that our algorithm is able to classify dynamic objects with a moving camera with maximum accuracy of 92%.
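The abstract only sketches the pipeline, so the following is a minimal illustrative sketch (not the authors' published implementation) of the general idea: accumulate asynchronous events into short temporal windows and use simple spatial statistics of per-pixel event density to separate the roughly uniform, saccade-induced background response from independently moving objects. The function name segment_events, the window length, the z-score statistic, and the threshold are all assumptions made for this example.

```python
# Illustrative sketch only -- NOT the authors' published implementation.
import numpy as np

def segment_events(events, sensor_shape=(180, 240),
                   window_us=10_000, z_thresh=2.0):
    """events: array-like of (t_us, x, y, polarity) rows.
    Returns one boolean mask per time window; True marks "dynamic" pixels."""
    events = np.asarray(events, dtype=float)
    if events.size == 0:
        return []

    # Partition the asynchronous event stream into consecutive time windows.
    t0 = events[:, 0].min()
    window_ids = ((events[:, 0] - t0) // window_us).astype(int)

    masks = []
    for w in range(window_ids.max() + 1):
        win = events[window_ids == w]
        mask = np.zeros(sensor_shape, dtype=bool)
        if win.shape[0] == 0:
            masks.append(mask)
            continue

        # Per-pixel event counts for this window (y indexes rows, x columns).
        counts = np.zeros(sensor_shape, dtype=float)
        np.add.at(counts, (win[:, 2].astype(int), win[:, 1].astype(int)), 1.0)

        # Spatial statistics over the active pixels: background activity induced
        # by the platform's micro-motion should be roughly uniform, so pixels
        # whose count sits far above the window mean are labelled as moving objects.
        active = counts[counts > 0]
        mu, sigma = active.mean(), active.std() + 1e-9
        masks.append(((counts - mu) / sigma) > z_thresh)
    return masks
```

The paper's actual partitioning of spatio-temporal event groups and its scene statistics are more involved than this per-window z-score; the sketch is only meant to make the data flow (events → temporal windows → spatial statistics → static/dynamic labels) concrete.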

Article figures (PMC):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/da3f/5334512/e9fddbf366db/fnins-11-00083-g0001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/da3f/5334512/f7517d52fbcd/fnins-11-00083-g0002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/da3f/5334512/adb642695a8d/fnins-11-00083-g0003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/da3f/5334512/0fcdae7a7e92/fnins-11-00083-g0004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/da3f/5334512/b98dbab596ad/fnins-11-00083-g0005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/da3f/5334512/fb08cd02a045/fnins-11-00083-g0006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/da3f/5334512/700c84fc2e3d/fnins-11-00083-g0007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/da3f/5334512/9fa37cc4305f/fnins-11-00083-g0008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/da3f/5334512/7029ebbdce1f/fnins-11-00083-g0009.jpg

Similar Articles

1. A Saccade Based Framework for Real-Time Motion Segmentation Using Event Based Vision Sensors. Front Neurosci. 2017 Mar 3;11:83. doi: 10.3389/fnins.2017.00083. eCollection 2017.
2. Event-Based Color Segmentation With a High Dynamic Range Sensor. Front Neurosci. 2018 Apr 11;12:135. doi: 10.3389/fnins.2018.00135. eCollection 2018.
3. Spike-Based Motion Estimation for Object Tracking Through Bio-Inspired Unsupervised Learning. IEEE Trans Image Process. 2023;32:335-349. doi: 10.1109/TIP.2022.3228168. Epub 2022 Dec 21.
4. Bio-mimetic high-speed target localization with fused frame and event vision for edge application. Front Neurosci. 2022 Nov 25;16:1010302. doi: 10.3389/fnins.2022.1010302. eCollection 2022.
5. Low-Latency Line Tracking Using Event-Based Dynamic Vision Sensors. Front Neurorobot. 2018 Feb 19;12:4. doi: 10.3389/fnbot.2018.00004. eCollection 2018.
6. Track-Before-Detect Framework-Based Vehicle Monocular Vision Sensors. Sensors (Basel). 2019 Jan 29;19(3):560. doi: 10.3390/s19030560.
7. HOTS: A Hierarchy of Event-Based Time-Surfaces for Pattern Recognition. IEEE Trans Pattern Anal Mach Intell. 2017 Jul 1;39(7):1346-1359. doi: 10.1109/TPAMI.2016.2574707.
8. Event-Based Motion Segmentation With Spatio-Temporal Graph Cuts. IEEE Trans Neural Netw Learn Syst. 2023 Aug;34(8):4868-4880. doi: 10.1109/TNNLS.2021.3124580. Epub 2023 Aug 4.
9. EVtracker: An Event-Driven Spatiotemporal Method for Dynamic Object Tracking. Sensors (Basel). 2022 Aug 15;22(16):6090. doi: 10.3390/s22166090.
10. Event-Based Robotic Grasping Detection With Neuromorphic Vision Sensor and Event-Grasping Dataset. Front Neurorobot. 2020 Oct 8;14:51. doi: 10.3389/fnbot.2020.00051. eCollection 2020.

Cited By

1. A Spatial-Motion-Segmentation Algorithm by Fusing EDPA and Motion Compensation. Sensors (Basel). 2022 Sep 6;22(18):6732. doi: 10.3390/s22186732.
2. Approaching Retinal Ganglion Cell Modeling and FPGA Implementation for Robotics. Entropy (Basel). 2018 Jun 19;20(6):475. doi: 10.3390/e20060475.

References

1. DVS Benchmark Datasets for Object Tracking, Action Recognition, and Object Recognition. Front Neurosci. 2016 Aug 31;10:405. doi: 10.3389/fnins.2016.00405. eCollection 2016.
2. Event-Based Computation of Motion Flow on a Neuromorphic Analog Neural Platform. Front Neurosci. 2016 Feb 16;10:35. doi: 10.3389/fnins.2016.00035. eCollection 2016.
3. Neuromorphic Event-Based 3D Pose Estimation. Front Neurosci. 2016 Jan 22;9:522. doi: 10.3389/fnins.2015.00522. eCollection 2015.
4. An Asynchronous Neuromorphic Event-Driven Visual Part-Based Shape Tracking. IEEE Trans Neural Netw Learn Syst. 2015 Dec;26(12):3045-59. doi: 10.1109/TNNLS.2015.2401834. Epub 2015 Mar 18.
5. Asynchronous Event-Based Multikernel Algorithm for High-Speed Visual Features Tracking. IEEE Trans Neural Netw Learn Syst. 2015 Aug;26(8):1710-20. doi: 10.1109/TNNLS.2014.2352401. Epub 2014 Sep 16.
6. Event-driven visual attention for the humanoid robot iCub. Front Neurosci. 2013 Dec 13;7:234. doi: 10.3389/fnins.2013.00234. eCollection 2013.
7. Semi-supervised video segmentation using tree structured graphical models. IEEE Trans Pattern Anal Mach Intell. 2013 Nov;35(11):2751-64. doi: 10.1109/TPAMI.2013.54.
8. Robust Object Tracking with Online Multiple Instance Learning. IEEE Trans Pattern Anal Mach Intell. 2011 Aug;33(8):1619-32. doi: 10.1109/TPAMI.2010.226. Epub 2010 Dec 23.
9. Microsaccades: small steps on a long way. Vision Res. 2009 Oct;49(20):2415-41. doi: 10.1016/j.visres.2009.08.010. Epub 2009 Aug 13.
10. CAVIAR: a 45k neuron, 5M synapse, 12G connects/s AER hardware sensory-processing-learning-actuating system for high-speed visual object recognition and tracking. IEEE Trans Neural Netw. 2009 Sep;20(9):1417-38. doi: 10.1109/TNN.2009.2023653. Epub 2009 Jul 24.