


SiamEFT: adaptive-time feature extraction hybrid network for RGBE multi-domain object tracking.

Authors

Liu Shuqi, Wang Gang, Song Yong, Huang Jinxiang, Huang Yiqian, Zhou Ya, Wang Shiqiang

Affiliations

School of Optics and Photonics, Beijing Institute of Technology, Beijing, China.

Center of Brain Sciences, Beijing Institute of Basic Medical Sciences, Beijing, China.

Publication

Front Neurosci. 2024 Aug 8;18:1453419. doi: 10.3389/fnins.2024.1453419. eCollection 2024.

DOI: 10.3389/fnins.2024.1453419
PMID: 39176387
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11338902/
Abstract

Integrating RGB and Event (RGBE) multi-domain information obtained by high-dynamic-range, high-temporal-resolution event cameras has been considered an effective scheme for robust object tracking. However, existing RGBE tracking methods have overlooked the unique spatio-temporal features of the different domains, leading to tracking failures and inefficiency, especially for objects against complex backgrounds. To address this problem, we propose a novel tracker based on adaptive-time feature extraction hybrid networks, namely the Siamese Event Frame Tracker (SiamEFT), which focuses on the effective representation and utilization of the diverse spatio-temporal features of RGBE. We first design an adaptive-time attention module that aggregates event data into frames based on adaptive-time weights to enhance information representation. Subsequently, the SiamEF module and a cross-network fusion module, which combine artificial neural networks and spiking neural networks into a hybrid network, are designed to effectively extract and fuse the spatio-temporal features of RGBE. Extensive experiments on two RGBE datasets (VisEvent and COESOT) show that SiamEFT achieves success rates of 0.456 and 0.574, outperforming the state-of-the-art competing methods and exhibiting a 2.3-fold improvement in efficiency. These results validate the superior accuracy and efficiency of SiamEFT in diverse and challenging scenes.
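The abstract describes aggregating asynchronous event data into frames using time-dependent weights before feature extraction. The sketch below illustrates the general idea with a fixed exponential time decay; this is an assumption for illustration only — the paper's adaptive-time attention module learns its weights rather than using a hand-set decay, and the function and parameter names here are hypothetical.

```python
import numpy as np

def events_to_frame(events, height, width, tau=0.05):
    """Aggregate an event stream into a single frame using time-decayed
    weights (illustrative sketch; SiamEFT's adaptive-time attention
    module learns these weights instead of fixing a decay constant).

    events: array of shape (N, 4) with columns (x, y, t, p),
            where t is a timestamp in seconds and p is polarity in {-1, +1}.
    """
    frame = np.zeros((height, width), dtype=np.float64)
    if len(events) == 0:
        return frame
    t_latest = events[:, 2].max()
    # More recent events receive exponentially larger weights:
    # w_i = exp(-(t_latest - t_i) / tau), so the newest event has weight 1.
    weights = np.exp(-(t_latest - events[:, 2]) / tau)
    xs = events[:, 0].astype(int)
    ys = events[:, 1].astype(int)
    # Accumulate signed, weighted polarities at each pixel; np.add.at
    # handles repeated (y, x) indices correctly (unbuffered accumulation).
    np.add.at(frame, (ys, xs), weights * events[:, 3])
    return frame
```

Pixels hit by many recent positive events accumulate large positive values, stale events fade out, and the resulting frame can be fed to a conventional (Siamese) feature extractor alongside the RGB frame.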


Figures (fnins-18-1453419, g0001–g0008):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f8d5/11338902/abf2da94506a/fnins-18-1453419-g0001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f8d5/11338902/c64c33d69be6/fnins-18-1453419-g0002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f8d5/11338902/b71115aae2f6/fnins-18-1453419-g0003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f8d5/11338902/1335d8ed83cd/fnins-18-1453419-g0004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f8d5/11338902/3e7db664479e/fnins-18-1453419-g0005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f8d5/11338902/2a2b56809e99/fnins-18-1453419-g0006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f8d5/11338902/789d1e458dae/fnins-18-1453419-g0007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f8d5/11338902/aac5e8101963/fnins-18-1453419-g0008.jpg

Similar Articles

1. SiamEFT: adaptive-time feature extraction hybrid network for RGBE multi-domain object tracking.
Front Neurosci. 2024 Aug 8;18:1453419. doi: 10.3389/fnins.2024.1453419. eCollection 2024.
2. Reliable object tracking by multimodal hybrid feature extraction and transformer-based fusion.
Neural Netw. 2024 Oct;178:106493. doi: 10.1016/j.neunet.2024.106493. Epub 2024 Jun 27.
3. RGBE-Gaze: A Large-Scale Event-Based Multimodal Dataset for High Frequency Remote Gaze Tracking.
IEEE Trans Pattern Anal Mach Intell. 2025 Jan;47(1):601-615. doi: 10.1109/TPAMI.2024.3474858. Epub 2024 Dec 4.
4. HROM: Learning High-Resolution Representation and Object-Aware Masks for Visual Object Tracking.
Sensors (Basel). 2020 Aug 26;20(17):4807. doi: 10.3390/s20174807.
5. Cross-Modal Object Tracking via Modality-Aware Fusion Network and a Large-Scale Dataset.
IEEE Trans Neural Netw Learn Syst. 2025 Apr;36(4):6981-6994. doi: 10.1109/TNNLS.2024.3406189. Epub 2025 Apr 8.
6. SCTN: Event-based object tracking with energy-efficient deep convolutional spiking neural networks.
Front Neurosci. 2023 Feb 16;17:1123698. doi: 10.3389/fnins.2023.1123698. eCollection 2023.
7. EVtracker: An Event-Driven Spatiotemporal Method for Dynamic Object Tracking.
Sensors (Basel). 2022 Aug 15;22(16):6090. doi: 10.3390/s22166090.
8. Robust RGB-T Tracking via Graph Attention-Based Bilinear Pooling.
IEEE Trans Neural Netw Learn Syst. 2023 Dec;34(12):9900-9911. doi: 10.1109/TNNLS.2022.3161969. Epub 2023 Nov 30.
9. SiamHYPER: Learning a Hyperspectral Object Tracker From an RGB-Based Tracker.
IEEE Trans Image Process. 2022;31:7116-7129. doi: 10.1109/TIP.2022.3216995. Epub 2022 Nov 16.
10. SiamHAS: Siamese Tracker with Hierarchical Attention Strategy for Aerial Tracking.
Micromachines (Basel). 2023 Apr 21;14(4):893. doi: 10.3390/mi14040893.

References Cited in This Article

1. Editorial: Theoretical advances and practical applications of spiking neural networks.
Front Neurosci. 2024 Apr 22;18:1406502. doi: 10.3389/fnins.2024.1406502. eCollection 2024.
2. VisEvent: Reliable Object Tracking via Collaboration of Frame and Event Flows.
IEEE Trans Cybern. 2024 Mar;54(3):1997-2010. doi: 10.1109/TCYB.2023.3318601. Epub 2024 Feb 9.
3. SpikingJelly: An open-source machine learning infrastructure platform for spike-based intelligence.
Sci Adv. 2023 Oct 6;9(40):eadi1480. doi: 10.1126/sciadv.adi1480.
4. Direct training high-performance spiking neural networks for object recognition and detection.
Front Neurosci. 2023 Aug 8;17:1229951. doi: 10.3389/fnins.2023.1229951. eCollection 2023.
5. A framework for the general design and computation of hybrid neural networks.
Nat Commun. 2022 Jun 14;13(1):3427. doi: 10.1038/s41467-022-30964-7.
6. Deep Learning in Visual Tracking: A Review.
IEEE Trans Neural Netw Learn Syst. 2023 Sep;34(9):5497-5516. doi: 10.1109/TNNLS.2021.3136907. Epub 2023 Sep 1.
7. A Fully Spiking Hybrid Neural Network for Energy-Efficient Object Detection.
IEEE Trans Image Process. 2021;30:9014-9029. doi: 10.1109/TIP.2021.3122092. Epub 2021 Nov 2.
8. RGBT Tracking via Multi-Adapter Network with Hierarchical Divergence Loss.
IEEE Trans Image Process. 2021;30:5613-5625. doi: 10.1109/TIP.2021.3087341. Epub 2021 Jun 18.
9. Event-Stream Representation for Human Gaits Identification Using Deep Neural Networks.
IEEE Trans Pattern Anal Mach Intell. 2022 Jul;44(7):3436-3449. doi: 10.1109/TPAMI.2021.3054886. Epub 2022 Jun 3.
10. Event-Based Vision: A Survey.
IEEE Trans Pattern Anal Mach Intell. 2022 Jan;44(1):154-180. doi: 10.1109/TPAMI.2020.3008413. Epub 2021 Dec 7.