Suppr 超能文献


UAV target tracking method based on global feature interaction and anchor-frame-free perceptual feature modulation.

Author information

Dan Yuanhong, Li Jinyan, Jin Yu, Ji Yong, Wang Zhihao, Cheng Dong

Affiliations

College of Computer Science and Engineering, Chongqing University of Technology, Chongqing, China.

Publication information

PLoS One. 2025 Jan 16;20(1):e0314485. doi: 10.1371/journal.pone.0314485. eCollection 2025.

DOI: 10.1371/journal.pone.0314485
PMID: 39820190
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11737744/
Abstract

Target tracking from the UAV perspective uses onboard cameras to capture video streams and to identify and track specific targets in real time. Deep-learning UAV trackers based on the Siamese family have achieved significant results but still struggle to balance accuracy and speed. In this study, to refine the feature representation and reduce computation so as to improve tracker efficiency, we perform feature fusion in the depth-wise cross-correlation operation and introduce a global attention mechanism that enlarges the model's receptive field and strengthens feature refinement, improving tracking performance on small targets. In addition, we design an anchor-free box-aware feature modulation mechanism that reduces computation and generates high-quality proposals, while optimizing the target-box refinement step to better adapt to target deformation and motion. Comparison experiments against several popular algorithms on UAV tracking datasets such as UAV123@10fps, UAV20L, and DTB70 show that the algorithm balances speed and accuracy. To verify the algorithm's reliability, we built a physical experimental environment on the Jetson Orin Nano platform and achieved real-time processing at 30 frames per second.
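The depth-wise cross-correlation the abstract refers to is the core matching step of Siamese trackers: each channel of the template (exemplar) feature map is slid over the matching channel of the search-region feature map to produce a per-channel response. The following is a minimal pure-Python sketch of that operation with toy tensor sizes; it illustrates only the correlation step, not the paper's feature fusion or attention modules, and all names are illustrative.

```python
def depthwise_xcorr(search, template):
    """Correlate each channel of `search` with the matching channel of
    `template` (valid padding, stride 1), as in Siamese-tracker heads.

    search:   list of C channels, each an H x W nested list
    template: list of C channels, each an h x w nested list (h <= H, w <= W)
    returns:  list of C response maps, each (H-h+1) x (W-w+1)
    """
    responses = []
    for sc, tc in zip(search, template):
        H, W = len(sc), len(sc[0])
        h, w = len(tc), len(tc[0])
        resp = [[0.0] * (W - w + 1) for _ in range(H - h + 1)]
        for i in range(H - h + 1):
            for j in range(W - w + 1):
                # dot product of the template with the window at (i, j)
                s = 0.0
                for di in range(h):
                    for dj in range(w):
                        s += sc[i + di][j + dj] * tc[di][dj]
                resp[i][j] = s
        responses.append(resp)
    return responses

# Toy example: one channel, 4x4 search region, 2x2 template.
search = [[[0, 0, 0, 0],
           [0, 1, 2, 0],
           [0, 3, 4, 0],
           [0, 0, 0, 0]]]
template = [[[1, 2],
             [3, 4]]]
resp = depthwise_xcorr(search, template)  # one 3x3 response map
```

The response peaks where the search window matches the template (here at offset (1, 1)); a real tracker reads the target location off that peak after the classification/regression heads.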

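The paper's anchor-free box-aware modulation is not detailed in the abstract; as a rough illustration of the anchor-free idea it builds on (FCOS-style edge-distance regression, where each feature-map cell predicts its distances to the four box edges instead of offsets to preset anchors), a decoding step might look like the sketch below. The function name, the stride value, and the center-offset convention are assumptions for illustration, not the paper's code.

```python
def decode_anchor_free(cx, cy, ltrb, stride=8):
    """Map a feature-map cell (cx, cy) and its predicted (l, t, r, b)
    edge distances back to an image-space box (x1, y1, x2, y2).

    Each cell corresponds to an image point at the center of its
    stride x stride patch; the box extends l/t/r/b pixels from it.
    """
    px = cx * stride + stride // 2  # image-space x of this cell
    py = cy * stride + stride // 2  # image-space y of this cell
    l, t, r, b = ltrb
    return (px - l, py - t, px + r, py + b)

# Cell (4, 3) on a stride-8 map predicting distances (10, 5, 6, 9):
box = decode_anchor_free(4, 3, (10, 5, 6, 9))
```

Because no anchor shapes are enumerated per location, this formulation avoids the anchor-matching computation the abstract says the design reduces, and it adapts naturally to deforming targets since the four distances are regressed independently.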

Figures (g001–g015):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/089f/11737744/4ac2180dd04e/pone.0314485.g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/089f/11737744/b223bd57d9c6/pone.0314485.g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/089f/11737744/cdeebf720510/pone.0314485.g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/089f/11737744/b53217c25889/pone.0314485.g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/089f/11737744/84fbff37f97c/pone.0314485.g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/089f/11737744/f845b348b37b/pone.0314485.g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/089f/11737744/4b568b91743c/pone.0314485.g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/089f/11737744/86d4bfdf171c/pone.0314485.g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/089f/11737744/3f4c2bbd2463/pone.0314485.g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/089f/11737744/0d55079cba33/pone.0314485.g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/089f/11737744/ed6e1dd5440a/pone.0314485.g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/089f/11737744/a2ea56dee91f/pone.0314485.g012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/089f/11737744/f53f63c26e86/pone.0314485.g013.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/089f/11737744/5609a99a0340/pone.0314485.g014.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/089f/11737744/de7af09a78ad/pone.0314485.g015.jpg

Similar articles

1. UAV target tracking method based on global feature interaction and anchor-frame-free perceptual feature modulation.
PLoS One. 2025 Jan 16;20(1):e0314485. doi: 10.1371/journal.pone.0314485. eCollection 2025.
2. SiamHSFT: A Siamese Network-Based Tracker with Hierarchical Sparse Fusion and Transformer for UAV Tracking.
Sensors (Basel). 2023 Oct 24;23(21):8666. doi: 10.3390/s23218666.
3. Learning Response-Consistent and Background-Suppressed Correlation Filters for Real-Time UAV Tracking.
Sensors (Basel). 2023 Mar 9;23(6):2980. doi: 10.3390/s23062980.
4. A multi-target tracking method for UAV monitoring wildlife in Qinghai.
PLoS One. 2025 Apr 11;20(4):e0317286. doi: 10.1371/journal.pone.0317286. eCollection 2025.
5. LCFF-Net: A lightweight cross-scale feature fusion network for tiny target detection in UAV aerial imagery.
PLoS One. 2024 Dec 19;19(12):e0315267. doi: 10.1371/journal.pone.0315267. eCollection 2024.
6. ASG-YOLOv5: Improved YOLOv5 unmanned aerial vehicle remote sensing aerial images scenario for small object detection based on attention and spatial gating.
PLoS One. 2024 Jun 3;19(6):e0298698. doi: 10.1371/journal.pone.0298698. eCollection 2024.
7. A reliable unmanned aerial vehicle multi-ship tracking method.
PLoS One. 2025 Jan 10;20(1):e0316933. doi: 10.1371/journal.pone.0316933. eCollection 2025.
8. Face mask identification with enhanced cuckoo optimization and deep learning-based faster regional neural network.
Sci Rep. 2024 Nov 29;14(1):29719. doi: 10.1038/s41598-024-78746-z.
9. Siam Deep Feature KCF Method and Experimental Study for Pedestrian Tracking.
Sensors (Basel). 2023 Jan 2;23(1):482. doi: 10.3390/s23010482.
10. Novel Surveillance View: A Novel Benchmark and View-Optimized Framework for Pedestrian Detection from UAV Perspectives.
Sensors (Basel). 2025 Jan 27;25(3):772. doi: 10.3390/s25030772.

Cited by

1. Convolutional transform learning based fusion framework for scale invariant long term target detection and tracking in unmanned aerial vehicles.
Sci Rep. 2025 Aug 2;15(1):28248. doi: 10.1038/s41598-025-09652-1.

References

1. Active Learning for Deep Visual Tracking.
IEEE Trans Neural Netw Learn Syst. 2024 Oct;35(10):13284-13296. doi: 10.1109/TNNLS.2023.3266837. Epub 2024 Oct 7.
2. SiamMask: A Framework for Fast Online Object Tracking and Segmentation.
IEEE Trans Pattern Anal Mach Intell. 2023 Mar;45(3):3072-3089. doi: 10.1109/TPAMI.2022.3172932.
3. Self-Supervised Deep Correlation Tracking.
IEEE Trans Image Process. 2021;30:976-985. doi: 10.1109/TIP.2020.3037518. Epub 2020 Dec 9.
4. Multi-Object Portion Tracking in 4D Fluorescence Microscopy Imagery with Deep Feature Maps.
Proc IEEE Comput Soc Conf Comput Vis Pattern Recognit. 2019 Jun;2019:1087-1096. doi: 10.1109/cvprw.2019.00142. Epub 2020 Apr 9.
5. Discriminative Scale Space Tracking.
IEEE Trans Pattern Anal Mach Intell. 2017 Aug;39(8):1561-1575. doi: 10.1109/TPAMI.2016.2609928. Epub 2016 Sep 15.
6. High-Speed Tracking with Kernelized Correlation Filters.
IEEE Trans Pattern Anal Mach Intell. 2015 Mar;37(3):583-96. doi: 10.1109/TPAMI.2014.2345390.