

STAM-CCF: Suspicious Tracking Across Multiple Camera Based on Correlation Filters.

Authors

Sheu Ruey-Kai, Pardeshi Mayuresh, Chen Lun-Chi, Yuan Shyan-Ming

Affiliations

Department of Computer Science, Tunghai University, Taichung 40704, Taiwan.

Electrical Engineering and Computer Science Department (EECS-IGP), National Chiao Tung University, Hsinchu 30010, Taiwan.

Publication

Sensors (Basel). 2019 Jul 9;19(13):3016. doi: 10.3390/s19133016.

DOI:10.3390/s19133016
PMID:31323987
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC6651151/
Abstract

There is strong demand for real-time suspicious tracking across multiple cameras in intelligent video surveillance of public areas such as universities, airports and factories. Most criminal events show that suspicious behavior is carried out by unknown people who try to hide themselves as much as possible. Previous learning-based studies collected large data sets to train models to detect humans across multiple cameras, but failed to recognize newcomers. Several feature-based studies have also aimed to identify humans in within-camera tracking; for those methods it is very difficult to obtain the necessary feature information in multi-camera scenarios and scenes. The purpose of this study is to design and implement a suspicious tracking mechanism across multiple cameras based on correlation filters, called suspicious tracking across multiple cameras based on correlation filters (STAM-CCF). By leveraging the geographical information of cameras and the YOLO object detection framework, STAM-CCF adjusts human identification and prevents errors caused by information loss due to object occlusion and overlapping in within-camera tracking. STAM-CCF also introduces a camera correlation model and a two-stage gait recognition strategy to deal with the problem of re-identification across multiple cameras. Experimental results show that the proposed method performs well with highly acceptable accuracy. The evidence also shows that the proposed STAM-CCF method can continuously recognize suspicious behavior in within-camera tracking and successfully re-identify it across multiple cameras.
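The within-camera tracking described in the abstract builds on correlation filters. As an illustrative sketch only (not the authors' implementation; every name and parameter below is invented for the example), a minimal single-frame MOSSE-style correlation filter can be trained in the Fourier domain and used to localize a displaced target:

```python
import numpy as np

def train_filter(patch, sigma=2.0, lam=1e-3):
    """Learn a MOSSE-style correlation filter from a single target patch."""
    h, w = patch.shape
    # Desired response: a Gaussian peak centred on the target.
    ys, xs = np.mgrid[0:h, 0:w]
    g = np.exp(-((xs - w // 2) ** 2 + (ys - h // 2) ** 2) / (2 * sigma ** 2))
    G = np.fft.fft2(g)
    F = np.fft.fft2(patch)
    # Closed-form ridge-regression solution in the Fourier domain:
    # H = G * conj(F) / (F * conj(F) + lambda)
    return (G * np.conj(F)) / (F * np.conj(F) + lam)

def locate(H, patch):
    """Correlate the filter with a new patch and return the peak offset."""
    response = np.real(np.fft.ifft2(H * np.fft.fft2(patch)))
    peak = np.unravel_index(np.argmax(response), response.shape)
    h, w = patch.shape
    # The peak's displacement from the patch centre is the target's motion.
    return int(peak[0] - h // 2), int(peak[1] - w // 2)
```

For a textured patch, `locate` recovers the (circular) shift between the training patch and a new frame, which is the per-frame update a correlation-filter tracker performs before handing identities to the cross-camera stage.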


Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/551b/6651151/b4b03357e769/sensors-19-03016-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/551b/6651151/0bea4f1b9fc7/sensors-19-03016-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/551b/6651151/71f5a8a521bb/sensors-19-03016-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/551b/6651151/f09a617faa15/sensors-19-03016-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/551b/6651151/7a5e48cb6cbb/sensors-19-03016-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/551b/6651151/3d8578d5605e/sensors-19-03016-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/551b/6651151/a069e3965ab6/sensors-19-03016-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/551b/6651151/9df997bce727/sensors-19-03016-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/551b/6651151/7f9ad837054b/sensors-19-03016-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/551b/6651151/1cabd776af80/sensors-19-03016-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/551b/6651151/1dcd9cd6e298/sensors-19-03016-g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/551b/6651151/acda772fde57/sensors-19-03016-g012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/551b/6651151/61b4b6b84f22/sensors-19-03016-g013.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/551b/6651151/30cbe6dd888e/sensors-19-03016-g014.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/551b/6651151/e9b7c3ea189b/sensors-19-03016-g015.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/551b/6651151/52768d9197be/sensors-19-03016-g016.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/551b/6651151/93bf33d84b3c/sensors-19-03016-g017.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/551b/6651151/84dc76c13ddf/sensors-19-03016-g018.jpg

Similar articles

1. STAM-CCF: Suspicious Tracking Across Multiple Camera Based on Correlation Filters.
   Sensors (Basel). 2019 Jul 9;19(13):3016. doi: 10.3390/s19133016.
2. NCA-Net for Tracking Multiple Objects across Multiple Cameras.
   Sensors (Basel). 2018 Oct 11;18(10):3400. doi: 10.3390/s18103400.
3. Visual Tracking with Multiview Trajectory Prediction.
   IEEE Trans Image Process. 2020 Aug 13;PP. doi: 10.1109/TIP.2020.3014952.
4. Machine-Learning-Based Real-Time Multi-Camera Vehicle Tracking and Travel-Time Estimation.
   J Imaging. 2022 Apr 6;8(4):101. doi: 10.3390/jimaging8040101.
5. Low-cost intelligent surveillance system based on fast CNN.
   PeerJ Comput Sci. 2021 Feb 25;7:e402. doi: 10.7717/peerj-cs.402. eCollection 2021.
6. Normalized Metadata Generation for Human Retrieval Using Multiple Video Surveillance Cameras.
   Sensors (Basel). 2016 Jun 24;16(7):963. doi: 10.3390/s16070963.
7. Identification and Tracking of Vehicles between Multiple Cameras on Bridges Using a YOLOv4 and OSNet-Based Method.
   Sensors (Basel). 2023 Jun 12;23(12):5510. doi: 10.3390/s23125510.
8. Multi-Camera Multi-Person Tracking and Re-Identification in an Operating Room.
   J Imaging. 2022 Aug 17;8(8):219. doi: 10.3390/jimaging8080219.
9. Multi-Camera Vehicle Tracking Using Edge Computing and Low-Power Communication.
   Sensors (Basel). 2020 Jun 11;20(11):3334. doi: 10.3390/s20113334.
10. Revisiting Person Re-Identification by Camera Selection.
    IEEE Trans Pattern Anal Mach Intell. 2024 May;46(5):2692-2708. doi: 10.1109/TPAMI.2023.3324374. Epub 2024 Apr 3.

Cited by

1. Extrinsic Camera Calibration with Line-Laser Projection.
   Sensors (Basel). 2021 Feb 5;21(4):1091. doi: 10.3390/s21041091.

References

1. OpenPose: Realtime Multi-Person 2D Pose Estimation Using Part Affinity Fields.
   IEEE Trans Pattern Anal Mach Intell. 2021 Jan;43(1):172-186. doi: 10.1109/TPAMI.2019.2929257. Epub 2020 Dec 4.
2. Training-Based Methods for Comparison of Object Detection Methods for Visual Object Tracking.
   Sensors (Basel). 2018 Nov 16;18(11):3994. doi: 10.3390/s18113994.
3. NCA-Net for Tracking Multiple Objects across Multiple Cameras.
   Sensors (Basel). 2018 Oct 11;18(10):3400. doi: 10.3390/s18103400.