
DSiam-CnK: A CBAM- and KCF-Enabled Deep Siamese Region Proposal Network for Human Tracking in Dynamic and Occluded Scenes.

Authors

Liu Xiangpeng, Han Jianjiao, Peng Yulin, Liang Qiao, An Kang, He Fengqin, Cheng Yuhua

Affiliations

College of Information, Mechanical & Electrical Engineering, Shanghai Normal University, 100 Haisi Road, Shanghai 201418, China.

Shanghai Research Institute of Microelectronics, Peking University, Shanghai 201203, China.

Publication

Sensors (Basel). 2024 Dec 21;24(24):8176. doi: 10.3390/s24248176.

DOI: 10.3390/s24248176
PMID: 39771910
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11679421/
Abstract

Despite the accuracy and robustness attained in the field of object tracking, algorithms based on Siamese neural networks often over-rely on information from the initial frame, neglecting necessary updates to the template; furthermore, in prolonged tracking situations, such methodologies encounter challenges in efficiently addressing issues such as complete occlusion or instances where the target exits the frame. To tackle these issues, this study enhances the SiamRPN algorithm by integrating the convolutional block attention module (CBAM), which enhances spatial channel attention. Additionally, it integrates the kernelized correlation filters (KCFs) for enhanced feature template representation. Building on this, we present DSiam-CnK, a Siamese neural network with dynamic template updating capabilities, facilitating adaptive adjustments in tracking strategy. The proposed algorithm is tailored to elevate the Siamese neural network's accuracy and robustness for prolonged tracking, all the while preserving its tracking velocity. In our research, we assessed the performance on the OTB2015, VOT2018, and LaSOT datasets. Our method, when benchmarked against established trackers, including SiamRPN on OTB2015, achieved a success rate of 92.1% and a precision rate of 90.9%. On the VOT2018 dataset, it excelled, with a VOT-A (accuracy) of 46.7%, a VOT-R (robustness) of 135.3%, and a VOT-EAO (expected average overlap) of 26.4%, leading in all categories. On the LaSOT dataset, it achieved a precision of 35.3%, a normalized precision of 34.4%, and a success rate of 39%. The findings demonstrate enhanced precision in tracking performance and a notable increase in robustness with our method.
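The abstract credits CBAM with strengthening spatial and channel attention inside SiamRPN. As a rough illustration of what that module computes, here is a minimal NumPy sketch of CBAM's two stages: a channel gate built from pooled descriptors passed through a shared MLP, followed by a spatial gate built from channel-pooled maps passed through a small convolution. The shapes, reduction ratio, and 3x3 kernel are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(x, w1, w2):
    """Channel gate: pooled descriptors -> shared MLP -> sigmoid.
    x: (C, H, W); w1: (C//r, C) and w2: (C, C//r) are the shared MLP weights."""
    avg = x.mean(axis=(1, 2))                     # (C,) average-pooled descriptor
    mx = x.max(axis=(1, 2))                       # (C,) max-pooled descriptor
    mlp = lambda v: w2 @ np.maximum(w1 @ v, 0.0)  # ReLU hidden layer
    return sigmoid(mlp(avg) + mlp(mx))            # (C,) per-channel weights

def spatial_attention(x, kernel):
    """Spatial gate: channel-pooled maps -> small conv -> sigmoid.
    x: (C, H, W); kernel: (2, kh, kw) convolves the stacked [avg; max] maps."""
    f = np.stack([x.mean(axis=0), x.max(axis=0)])  # (2, H, W)
    kh, kw = kernel.shape[1:]
    fp = np.pad(f, ((0, 0), (kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.empty(f.shape[1:])
    for i in range(out.shape[0]):                  # same-padding 2D convolution
        for j in range(out.shape[1]):
            out[i, j] = np.sum(fp[:, i:i + kh, j:j + kw] * kernel)
    return sigmoid(out)                            # (H, W) per-location weights

def cbam(x, w1, w2, kernel):
    """Apply the channel gate, then the spatial gate, as in CBAM."""
    x = x * channel_attention(x, w1, w2)[:, None, None]
    return x * spatial_attention(x, kernel)[None]
```

In the paper's tracker these gates are learned end to end inside the network; the explicit loops here only make the computation visible.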

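The abstract's central idea, updating the Siamese template only while the correlation-filter response indicates confident tracking, can be sketched as follows. The APCE confidence score, the threshold `tau`, and the linear interpolation rate `lr` are illustrative assumptions; the paper's actual update rule may differ:

```python
import numpy as np

def apce(response):
    """Average Peak-to-Correlation Energy: large for a single sharp peak,
    small when the response is flat or multi-modal (occlusion, drift)."""
    peak, low = response.max(), response.min()
    return (peak - low) ** 2 / (np.mean((response - low) ** 2) + 1e-12)

def update_template(template, new_feat, response, tau=10.0, lr=0.1):
    """Blend the new feature into the template only under confident tracking;
    freeze the template otherwise (e.g., full occlusion, target out of frame)."""
    if apce(response) >= tau:
        return (1.0 - lr) * template + lr * new_feat
    return template
```

A gate of this kind keeps the last reliable template through full occlusions, which is exactly the failure mode of fixed-template Siamese trackers that the abstract describes.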

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2272/11679421/632376bf633b/sensors-24-08176-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2272/11679421/98743bc811b5/sensors-24-08176-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2272/11679421/4dc0e6366931/sensors-24-08176-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2272/11679421/575e87c11be7/sensors-24-08176-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2272/11679421/857f60ad68f9/sensors-24-08176-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2272/11679421/0108d15547c4/sensors-24-08176-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2272/11679421/cde4f50ad242/sensors-24-08176-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2272/11679421/9752e1c332b7/sensors-24-08176-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2272/11679421/d9e42a265482/sensors-24-08176-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2272/11679421/70f162537059/sensors-24-08176-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2272/11679421/a9e277c3ae69/sensors-24-08176-g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2272/11679421/1e6d6dc57cdc/sensors-24-08176-g012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2272/11679421/e41b5905f3b3/sensors-24-08176-g013.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2272/11679421/13249ad3cc18/sensors-24-08176-g014.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2272/11679421/c47c7869cb76/sensors-24-08176-g015.jpg

Similar Articles

1. DSiam-CnK: A CBAM- and KCF-Enabled Deep Siamese Region Proposal Network for Human Tracking in Dynamic and Occluded Scenes.
   Sensors (Basel). 2024 Dec 21;24(24):8176. doi: 10.3390/s24248176.
2. A Siamese tracker with "dynamic-static" dual-template fusion and dynamic template adaptive update.
   Front Neurorobot. 2023 Jan 11;16:1094892. doi: 10.3389/fnbot.2022.1094892. eCollection 2022.
3. Siamese network with a depthwise over-parameterized convolutional layer for visual tracking.
   PLoS One. 2022 Aug 31;17(8):e0273690. doi: 10.1371/journal.pone.0273690. eCollection 2022.
4. Object Relocation Visual Tracking Based on Histogram Filter and Siamese Network in Intelligent Transportation.
   Sensors (Basel). 2022 Nov 8;22(22):8591. doi: 10.3390/s22228591.
5. Three-stage cascade architecture-based siamese sliding window network algorithm for object tracking.
   Heliyon. 2025 Jan 6;11(2):e41612. doi: 10.1016/j.heliyon.2024.e41612. eCollection 2025 Jan 30.
6. TGAN: A simple model update strategy for visual tracking via template-guidance attention network.
   Neural Netw. 2021 Dec;144:61-74. doi: 10.1016/j.neunet.2021.08.010. Epub 2021 Aug 16.
7. Siamese Implicit Region Proposal Network With Compound Attention for Visual Tracking.
   IEEE Trans Image Process. 2022;31:1882-1894. doi: 10.1109/TIP.2022.3148876. Epub 2022 Feb 16.
8. Lightweight Siamese Network with Global Correlation for Single-Object Tracking.
   Sensors (Basel). 2024 Dec 21;24(24):8171. doi: 10.3390/s24248171.
9. Siam Deep Feature KCF Method and Experimental Study for Pedestrian Tracking.
   Sensors (Basel). 2023 Jan 2;23(1):482. doi: 10.3390/s23010482.
10. Learning Geometry Information of Target for Visual Object Tracking with Siamese Networks.
    Sensors (Basel). 2021 Nov 23;21(23):7790. doi: 10.3390/s21237790.
