EPCNet: Implementing an 'Artificial Fovea' for More Efficient Monitoring Using the Sensor Fusion of an Event-Based and a Frame-Based Camera.

Authors

Sealy Phelan Orla, Molloy Dara, George Roshan, Jones Edward, Glavin Martin, Deegan Brian

Affiliations

Department of Electrical and Electronic Engineering, University of Galway, University Road, H91 TK33 Galway, Ireland.

Ryan Institute, University of Galway, University Road, H91 TK33 Galway, Ireland.

Publication

Sensors (Basel). 2025 Jul 22;25(15):4540. doi: 10.3390/s25154540.

DOI: 10.3390/s25154540
PMID: 40807710
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC12349060/
Abstract

Efficient object detection is crucial to real-time monitoring applications such as autonomous driving or security systems. Modern RGB cameras can produce high-resolution images for accurate object detection. However, increased resolution results in increased network latency and power consumption. To minimise this latency, Convolutional Neural Networks (CNNs) often have a resolution limitation, requiring images to be down-sampled before inference, causing significant information loss. Event-based cameras are neuromorphic vision sensors with high temporal resolution, low power consumption, and high dynamic range, making them preferable to regular RGB cameras in many situations. This project proposes the fusion of an event-based camera with an RGB camera to mitigate the trade-off between temporal resolution and accuracy, while minimising power consumption. The cameras are calibrated to create a multi-modal stereo vision system where pixel coordinates can be projected between the event and RGB camera image planes. This calibration is used to project bounding boxes detected by clustering of events into the RGB image plane, thereby cropping each RGB frame instead of down-sampling to meet the requirements of the CNN. Using the Common Objects in Context (COCO) dataset evaluator, the average precision (AP) for the bicycle class in RGB scenes improved from 21.08 to 57.38. Additionally, AP increased across all classes from 37.93 to 46.89. To reduce system latency, a novel object detection approach is proposed where the event camera acts as a region proposal network, and a classification algorithm is run on the proposed regions. This achieved a 78% improvement over baseline.
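The event-driven region proposal step described in the abstract can be sketched in code. The paper reports that bounding boxes are detected by clustering of events; the grid-based connected-component clustering below, along with the cell size and minimum-event threshold, are illustrative assumptions rather than the authors' exact algorithm.

```python
def propose_regions(events, cell=8, min_events=20):
    """Group event (x, y) pixel coordinates, accumulated over a short time
    window, into bounding boxes by binning them into a coarse grid and
    flood-filling 8-connected occupied cells."""
    cells = {}
    for x, y in events:
        cells.setdefault((int(x) // cell, int(y) // cell), []).append((x, y))
    seen, boxes = set(), []
    for start in cells:
        if start in seen:
            continue
        seen.add(start)
        stack, component = [start], []
        while stack:
            cx, cy = stack.pop()
            component.extend(cells[(cx, cy)])
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    nb = (cx + dx, cy + dy)
                    if nb in cells and nb not in seen:
                        seen.add(nb)
                        stack.append(nb)
        if len(component) >= min_events:  # discard sparse clusters as sensor noise
            xs = [p[0] for p in component]
            ys = [p[1] for p in component]
            boxes.append((min(xs), min(ys), max(xs), max(ys)))
    return boxes
```

Because an event camera only fires on brightness changes, dense clusters of events tend to correspond to moving objects, which is what makes this step usable as a cheap region proposal stage.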

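The projection-and-crop step from the abstract can be illustrated similarly. The paper derives the event-to-RGB pixel mapping from a full multi-modal stereo calibration; the planar homography `H` and the fixed 640x640 crop size below are simplifying assumptions for illustration, not the authors' calibration model.

```python
import numpy as np

def project_box(box, H):
    """Map (x_min, y_min, x_max, y_max) through a 3x3 homography by
    projecting all four corners and taking the axis-aligned bounds."""
    x0, y0, x1, y1 = box
    corners = np.array([[x0, y0, 1], [x1, y0, 1],
                        [x0, y1, 1], [x1, y1, 1]], dtype=float).T
    proj = H @ corners
    proj = proj[:2] / proj[2]  # perspective divide
    return (proj[0].min(), proj[1].min(), proj[0].max(), proj[1].max())

def crop_for_cnn(frame, box, size=640):
    """Crop a size x size window centred on the projected box, clamped to
    the frame borders, instead of down-sampling the whole frame."""
    h, w = frame.shape[:2]
    cx = int((box[0] + box[2]) / 2)
    cy = int((box[1] + box[3]) / 2)
    x0 = min(max(cx - size // 2, 0), max(w - size, 0))
    y0 = min(max(cy - size // 2, 0), max(h - size, 0))
    return frame[y0:y0 + size, x0:x0 + size]
```

Cropping rather than down-sampling is what preserves full-resolution detail inside the proposed region, which is the mechanism behind the reported AP gains on small objects such as bicycles.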

[Figures g001–g018 and appendix figures g0A1–g0A5; full-size images available via PMC12349060.]

Similar Articles

1
EPCNet: Implementing an 'Artificial Fovea' for More Efficient Monitoring Using the Sensor Fusion of an Event-Based and a Frame-Based Camera.
Sensors (Basel). 2025 Jul 22;25(15):4540. doi: 10.3390/s25154540.
2
Prescription of Controlled Substances: Benefits and Risks
3
Short-Term Memory Impairment
4
Regional cerebral blood flow single photon emission computed tomography for detection of frontotemporal dementia in people with suspected dementia.
Cochrane Database Syst Rev. 2015 Jun 23;2015(6):CD010896. doi: 10.1002/14651858.CD010896.pub2.
5
Integrating computer vision algorithms and RFID system for identification and tracking of group-housed animals: an example with pigs.
J Anim Sci. 2024 Jan 3;102. doi: 10.1093/jas/skae174.
6
Electrophoresis
7
Comparison of Two Modern Survival Prediction Tools, SORG-MLA and METSSS, in Patients With Symptomatic Long-bone Metastases Who Underwent Local Treatment With Surgery Followed by Radiotherapy and With Radiotherapy Alone.
Clin Orthop Relat Res. 2024 Dec 1;482(12):2193-2208. doi: 10.1097/CORR.0000000000003185. Epub 2024 Jul 23.
8
Sexual Harassment and Prevention Training
9
Integrated neural network framework for multi-object detection and recognition using UAV imagery.
Front Neurorobot. 2025 Jul 30;19:1643011. doi: 10.3389/fnbot.2025.1643011. eCollection 2025.
10
Development and Validation of a Convolutional Neural Network Model to Predict a Pathologic Fracture in the Proximal Femur Using Abdomen and Pelvis CT Images of Patients With Advanced Cancer.
Clin Orthop Relat Res. 2023 Nov 1;481(11):2247-2256. doi: 10.1097/CORR.0000000000002771. Epub 2023 Aug 23.

References Cited in This Article

1
Impact of ISP Tuning on Object Detection.
J Imaging. 2023 Nov 24;9(12):260. doi: 10.3390/jimaging9120260.
2
ESVIO: Event-Based Stereo Visual-Inertial Odometry.
Sensors (Basel). 2023 Feb 10;23(4):1998. doi: 10.3390/s23041998.
3
Real-time face & eye tracking and blink detection using event cameras.
Neural Netw. 2021 Sep;141:87-97. doi: 10.1016/j.neunet.2021.03.019. Epub 2021 Mar 27.
4
A review of interactions between peripheral and foveal vision.
J Vis. 2020 Nov 2;20(12):2. doi: 10.1167/jov.20.12.2.
5
Event-Based Vision: A Survey.
IEEE Trans Pattern Anal Mach Intell. 2022 Jan;44(1):154-180. doi: 10.1109/TPAMI.2020.3008413. Epub 2021 Dec 7.
6
High Speed and High Dynamic Range Video with an Event Camera.
IEEE Trans Pattern Anal Mach Intell. 2021 Jun;43(6):1964-1980. doi: 10.1109/TPAMI.2019.2963386. Epub 2021 May 11.