
Key frame extraction algorithm for surveillance videos using an evolutionary approach.

Authors

Rajan Manjusha, Parameswaran Latha

Affiliations

Department of Computer Science and Engineering, Amrita School of Computing, Coimbatore, Amrita Vishwa Vidyapeetham, India, 641112.

Publication

Sci Rep. 2025 Jan 2;15(1):536. doi: 10.1038/s41598-024-84324-0.

DOI:10.1038/s41598-024-84324-0
PMID:39748027
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11696284/
Abstract

With rapid technological advancements, videos are captured, stored, and shared in multiple formats, increasing the need for summarization techniques that shorten viewing time. Key Frame Extraction (KFE) algorithms are crucial in video summarization, compression, and offline analysis. This study aims to develop an efficient KFE approach for generic videos. Existing methods include the Adaptive Key Frame Extraction Algorithm, which reduces redundancy while ensuring maximum content coverage; the Optimal Key Frame Extraction Algorithm, which uses a Genetic Algorithm (GA) to select key frames optimally; and the Rapid Key Frame Extraction Algorithm, which employs clustering to identify representative key frames. However, a clear need remains for a more versatile KFE technique that addresses generic applications rather than specific use cases. Evolutionary algorithms offer a powerful route to optimal KFE. The proposed method leverages an interactive GA with a well-designed fitness function and elitism-based survivor selection to enhance performance. The algorithm has been tested on diverse datasets, including VSUMM, SumMe, Mall, user-generated videos, surveillance footage from Amrita Vishwa Vidyapeetham University (Coimbatore, India), and web-sourced videos. The results show that the proposed KFE approach agrees with benchmark data and captures additional significant frames. Compared to Differential Evolution (DE) techniques and Deep Learning (DL) models from the literature, the algorithm demonstrates superior efficiency, as verified through quantitative and qualitative evaluation metrics. Furthermore, the computational complexity of the GA is compared in detail to that of DE- and DL-based approaches, highlighting their distinct efficiency and performance characteristics.
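The GA pipeline the abstract describes — a population of candidate frame subsets, a fitness function, and elitism-based survivor selection — can be sketched as follows. The abstract does not give the paper's actual fitness function or genetic operators, so the diversity-based fitness, the union crossover, and the single-index mutation below are illustrative assumptions, not the authors' design; frames are reduced to one scalar feature per frame purely for brevity.

```python
import random

def fitness(indices, frames):
    """Hypothetical fitness: sum of pairwise feature distances among the
    selected frames, so the GA favours a diverse (high-coverage) summary.
    The paper's actual fitness function is not given in the abstract."""
    sel = [frames[i] for i in indices]
    return sum(abs(a - b) for i, a in enumerate(sel) for b in sel[i + 1:])

def ga_key_frames(frames, k=3, pop_size=20, generations=40, n_elite=2, seed=1):
    """Select k key-frame indices from per-frame feature values via a GA."""
    rng = random.Random(seed)
    n = len(frames)
    # Initial population: random subsets of k distinct frame indices.
    pop = [rng.sample(range(n), k) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: fitness(ind, frames), reverse=True)
        # Elitism-based survivor selection: the best individuals carry over.
        nxt = [list(ind) for ind in pop[:n_elite]]
        while len(nxt) < pop_size:
            # Parents drawn from the fitter half of the population.
            p1, p2 = rng.sample(pop[: pop_size // 2], 2)
            # Crossover: k distinct indices from the parents' union.
            child = list(set(p1) | set(p2))
            rng.shuffle(child)
            child = child[:k]
            # Mutation: occasionally swap one selected frame for a new one.
            if rng.random() < 0.3:
                j = rng.randrange(n)
                if j not in child:
                    child[rng.randrange(k)] = j
            nxt.append(child)
        pop = nxt
    best = max(pop, key=lambda ind: fitness(ind, frames))
    return sorted(best)
```

Under this toy fitness, frames from well-separated clusters of feature values tend to be picked over near-duplicates, which mirrors the redundancy-reduction goal the abstract attributes to KFE.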


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/77ce/11696284/b286e075705e/41598_2024_84324_Fig1_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/77ce/11696284/08ab71a8c46c/41598_2024_84324_Fig2_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/77ce/11696284/85ec6851bbca/41598_2024_84324_Fig3_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/77ce/11696284/9385f1d559f8/41598_2024_84324_Fig4_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/77ce/11696284/ae9c602fba06/41598_2024_84324_Fig5_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/77ce/11696284/709db480bf82/41598_2024_84324_Fig6_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/77ce/11696284/5d756258cc44/41598_2024_84324_Figa_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/77ce/11696284/52d9be5783c0/41598_2024_84324_Fig7_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/77ce/11696284/3ae68a9c2f81/41598_2024_84324_Fig8_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/77ce/11696284/960f18b8ce2c/41598_2024_84324_Figb_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/77ce/11696284/01e5c7253782/41598_2024_84324_Fig9_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/77ce/11696284/a32c751c09f7/41598_2024_84324_Figc_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/77ce/11696284/116510238aef/41598_2024_84324_Figd_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/77ce/11696284/e966b6dbdf20/41598_2024_84324_Fig10_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/77ce/11696284/d8e2cd4f1ef9/41598_2024_84324_Fig11_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/77ce/11696284/1f8dae38a175/41598_2024_84324_Fig12_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/77ce/11696284/e5a65a1568ec/41598_2024_84324_Fig13_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/77ce/11696284/6c1631bd4b0f/41598_2024_84324_Fig14_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/77ce/11696284/0bab47322d02/41598_2024_84324_Fig15_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/77ce/11696284/af6851e33234/41598_2024_84324_Fig16_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/77ce/11696284/7a7411aa5506/41598_2024_84324_Fig17_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/77ce/11696284/4debb935b82a/41598_2024_84324_Fig18_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/77ce/11696284/6fbb64d7a86c/41598_2024_84324_Fig19_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/77ce/11696284/51ca4b263a2e/41598_2024_84324_Fig20_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/77ce/11696284/ef4d42b288da/41598_2024_84324_Fig21_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/77ce/11696284/be245e3068d7/41598_2024_84324_Fig22_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/77ce/11696284/7528ffe6ea10/41598_2024_84324_Fig23_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/77ce/11696284/c47114352d58/41598_2024_84324_Fig24_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/77ce/11696284/3422a4a36118/41598_2024_84324_Fig25_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/77ce/11696284/d1b5d4012c9e/41598_2024_84324_Fig26_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/77ce/11696284/486ae0ecc872/41598_2024_84324_Fig27_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/77ce/11696284/7e50539bc9f5/41598_2024_84324_Fig28_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/77ce/11696284/09a9f0983001/41598_2024_84324_Fig29_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/77ce/11696284/8dfacaffc5a0/41598_2024_84324_Fig30_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/77ce/11696284/c21cf0a2d2a2/41598_2024_84324_Fig31_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/77ce/11696284/cd604337510e/41598_2024_84324_Fig32_HTML.jpg

Similar Articles

1
Key frame extraction algorithm for surveillance videos using an evolutionary approach.
Sci Rep. 2025 Jan 2;15(1):536. doi: 10.1038/s41598-024-84324-0.
2
Feature fusion and clustering for key frame extraction.
Math Biosci Eng. 2021 Oct 27;18(6):9294-9311. doi: 10.3934/mbe.2021457.
3
An effective Key Frame Extraction technique based on Feature Fusion and Fuzzy-C means clustering with Artificial Hummingbird.
Sci Rep. 2024 Nov 4;14(1):26651. doi: 10.1038/s41598-024-75923-y.
4
RPCA-KFE: Key Frame Extraction for Video Using Robust Principal Component Analysis.
IEEE Trans Image Process. 2015 Nov;24(11):3742-53. doi: 10.1109/TIP.2015.2445572. Epub 2015 Jun 15.
5
Key Frame Extraction in the Summary Space.
IEEE Trans Cybern. 2018 Jun;48(6):1923-1934. doi: 10.1109/TCYB.2017.2718579. Epub 2017 Jul 4.
6
News Video Summarization Combining SURF and Color Histogram Features.
Entropy (Basel). 2021 Jul 30;23(8):982. doi: 10.3390/e23080982.
7
Domain independent redundancy elimination based on flow vectors for static video summarization.
Heliyon. 2019 Nov 1;5(10):e02699. doi: 10.1016/j.heliyon.2019.e02699. eCollection 2019 Oct.
8
Self-Supervised Learning to Detect Key Frames in Videos.
Sensors (Basel). 2020 Dec 4;20(23):6941. doi: 10.3390/s20236941.
9
Video Summarization Based on Mutual Information and Entropy Sliding Window Method.
Entropy (Basel). 2020 Nov 12;22(11):1285. doi: 10.3390/e22111285.
10
Scalable gastroscopic video summarization via similar-inhibition dictionary selection.
Artif Intell Med. 2016 Jan;66:1-13. doi: 10.1016/j.artmed.2015.08.006. Epub 2015 Aug 18.
