
Weapon detection with FMR-CNN and YOLOv8 for enhanced crime prevention and security.

Author Information

P Shanthi, V Manjula

Affiliation

School of Computer Science and Engineering, Vellore Institute of Technology, Chennai, 600 127, India.

Publication Information

Sci Rep. 2025 Jul 23;15(1):26766. doi: 10.1038/s41598-025-07782-0.

DOI: 10.1038/s41598-025-07782-0
PMID: 40701971
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC12287308/
Abstract

Increasing weapon-related threats in public places have created an urgent need for intelligent surveillance systems that detect crime in real time. Traditional surveillance systems struggle with small-object recognition, occlusion, and response time, making them ineffective in crowded, fast-changing scenes. To overcome these challenges, the proposed system combines closed-circuit television (CCTV) surveillance cameras with advanced deep learning methods, image processing, and computer vision techniques for real-time crime prediction and prevention. This study proposes a hybrid deep learning framework, named FMR-CNN, that merges a Faster Region-based Convolutional Neural Network with a Mask Region-based Convolutional Neural Network. FMR-CNN represents a significant advance in object recognition and segmentation for images and videos. It is combined with YOLOv8 to significantly increase real-time detection speed and localization accuracy. This combination exploits high-resolution spatial context and rapid frame-wise predictions concurrently, making it well suited to continuous video surveillance tasks. The model was trained and tested on an annotated dataset with five labeled classes, using MobileNetV3-extracted features to simulate real-world surveillance conditions. Experimental results show the hybrid model attains a detection accuracy of 98.7%, an average precision (AP) of 90.1, and a speed of 9.2 frames per second (FPS), and generalizes across varied lighting, occlusion, and object scales with reduced computational complexity, making it highly effective for crime prevention. These models benefit police departments and law enforcement agencies by allowing criminal offenses to be detected earlier and untoward situations to be avoided.
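The abstract describes fusing a fast frame-wise detector (YOLOv8) with a slower, higher-quality region-based model (FMR-CNN) for the same frames. A minimal sketch of one common way such outputs can be combined, IoU matching with a confidence boost where the two models agree, is shown below. This is not the authors' code; the function names, the boost formula, and the 0.5 IoU threshold are illustrative assumptions.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def fuse_detections(fast_dets, slow_dets, iou_thr=0.5):
    """Keep the fast detector's boxes for low latency; raise a box's
    confidence when the slower region-based model produced an
    overlapping box with the same label (IoU above threshold).
    Each detection is a (box, score, label) tuple."""
    fused = []
    for box, score, label in fast_dets:
        best = max((iou(box, b) for b, _, l in slow_dets if l == label),
                   default=0.0)
        if best >= iou_thr:
            # Agreement boost: move the score partway toward 1.0,
            # scaled by how well the two boxes overlap.
            score = min(1.0, score + 0.5 * best * (1.0 - score))
        fused.append((box, score, label))
    return fused

# Hypothetical example: both models see a weapon in roughly the same place.
fast = [((0, 0, 10, 10), 0.6, "weapon")]
slow = [((1, 1, 10, 10), 0.9, "weapon")]
print(fuse_detections(fast, slow))
```

Real pipelines would also handle boxes only the slow model found, and run the slow model on a subsampled stream to preserve frame rate; this sketch shows only the matching step.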


Figures (PMC):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6f36/12287308/27ce1831b77c/41598_2025_7782_Fig1_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6f36/12287308/8f434092d686/41598_2025_7782_Fig2_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6f36/12287308/b5e88d5d65f1/41598_2025_7782_Fig3_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6f36/12287308/b79552ac4d39/41598_2025_7782_Fig4_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6f36/12287308/b2a89ed38502/41598_2025_7782_Fig5_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6f36/12287308/75b13a53af91/41598_2025_7782_Fig6_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6f36/12287308/7f43c2201042/41598_2025_7782_Fig7_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6f36/12287308/cd9c9ea7d4f0/41598_2025_7782_Fig8_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6f36/12287308/a2d7d625067e/41598_2025_7782_Fig9_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6f36/12287308/8c28edf67fce/41598_2025_7782_Fig10_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6f36/12287308/e60f4ac4b08d/41598_2025_7782_Fig11_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6f36/12287308/83b3d5d6f45e/41598_2025_7782_Fig12_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6f36/12287308/6ac927b9a7a9/41598_2025_7782_Fig13_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6f36/12287308/3de634c553c7/41598_2025_7782_Fig14_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6f36/12287308/f8ad90ef52b6/41598_2025_7782_Figa_HTML.jpg

Similar Articles

1
Weapon detection with FMR-CNN and YOLOv8 for enhanced crime prevention and security.
Sci Rep. 2025 Jul 23;15(1):26766. doi: 10.1038/s41598-025-07782-0.
2
Development and Validation of a Convolutional Neural Network Model to Predict a Pathologic Fracture in the Proximal Femur Using Abdomen and Pelvis CT Images of Patients With Advanced Cancer.
Clin Orthop Relat Res. 2023 Nov 1;481(11):2247-2256. doi: 10.1097/CORR.0000000000002771. Epub 2023 Aug 23.
3
Integrating computer vision algorithms and RFID system for identification and tracking of group-housed animals: an example with pigs.
J Anim Sci. 2024 Jan 3;102. doi: 10.1093/jas/skae174.
4
The Black Book of Psychotropic Dosing and Monitoring.
Psychopharmacol Bull. 2024 Jul 8;54(3):8-59.
5
Short-Term Memory Impairment
6
Integrated deep learning framework for driver distraction detection and real-time road object recognition in advanced driver assistance systems.
Sci Rep. 2025 Jul 11;15(1):25125. doi: 10.1038/s41598-025-08475-4.
7
An improved YOLOv5 method for accurate recognition of grazing sheep activities: active, inactive, ruminating behaviors.
J Anim Sci. 2025 Jan 4;103. doi: 10.1093/jas/skaf084.
8
[Volume and health outcomes: evidence from systematic reviews and from evaluation of Italian hospital data].
Epidemiol Prev. 2013 Mar-Jun;37(2-3 Suppl 2):1-100.
9
Comparison of Two Modern Survival Prediction Tools, SORG-MLA and METSSS, in Patients With Symptomatic Long-bone Metastases Who Underwent Local Treatment With Surgery Followed by Radiotherapy and With Radiotherapy Alone.
Clin Orthop Relat Res. 2024 Dec 1;482(12):2193-2208. doi: 10.1097/CORR.0000000000003185. Epub 2024 Jul 23.
10
Deep Learning Models for Detection and Severity Assessment of Cercospora Leaf Spot () in Chili Peppers Under Natural Conditions.
Plants (Basel). 2025 Jul 1;14(13):2011. doi: 10.3390/plants14132011.

References Cited in This Article

1
Deep BiLSTM Attention Model for Spatial and Temporal Anomaly Detection in Video Surveillance.
Sensors (Basel). 2025 Jan 4;25(1):251. doi: 10.3390/s25010251.
2
Face mask detection using YOLOv3 and faster R-CNN models: COVID-19 environment.
Multimed Tools Appl. 2021;80(13):19753-19768. doi: 10.1007/s11042-021-10711-8. Epub 2021 Mar 1.