An effective Key Frame Extraction technique based on Feature Fusion and Fuzzy-C means clustering with Artificial Hummingbird.

Authors

Kaur Sumandeep, Kaur Lakhwinder, Lal Madan

Affiliation

Department of Computer Science and Engineering, Punjabi University, Patiala, 147001, India.

Publication

Sci Rep. 2024 Nov 4;14(1):26651. doi: 10.1038/s41598-024-75923-y.

DOI: 10.1038/s41598-024-75923-y
PMID: 39496675
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11535061/
Abstract

Key frame extraction is central to video summarization and content-based video analysis, where it addresses the problem of data redundancy in a video. It enables quick navigation and efficient video organization in many applications, and visually impaired users can benefit from key frame extraction for rapid object recognition and tracking. Most existing techniques consider only a single visual feature rather than multiple features or the full pictorial information of the video. This study proposes a key frame extraction method that (i) first removes insignificant frames by pre-processing; (ii) second, extracts and aggregates four visual and structural feature differences between consecutive frames to identify informative frames; (iii) third, clusters the resulting frames with a hybrid FCM-AHA method that combines Fuzzy C-means (FCM) with the Artificial Hummingbird optimization Algorithm (AHA) to circumvent FCM's local-minima trapping problem; and finally, within each cluster, selects as key frames the two frames with the greatest Euclidean distance from all other frames in that cluster, thereby removing redundant frames. Experimental results on the Open Video and YouTube datasets show that the proposed method outperforms state-of-the-art methods in both subjective qualitative analysis and objective quantitative evaluation (e.g., Precision, Recall, and F-score). Results are also reported on a real video to demonstrate its applicability in real-life settings.
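The clustering and selection steps in the abstract can be sketched in miniature. The snippet below is a minimal illustration only, not the authors' implementation: it substitutes plain Fuzzy C-means for the paper's hybrid FCM-AHA optimizer and uses random vectors in place of real frame features; all function names and parameters are assumptions introduced for this sketch.

```python
# Illustrative sketch only: the paper couples FCM with the Artificial
# Hummingbird Algorithm (AHA) to escape local minima; this toy version runs
# plain FCM on synthetic "frame feature" vectors, then keeps the two frames
# per cluster farthest (by summed Euclidean distance) from the other members,
# mirroring the final key-frame selection step described in the abstract.
import numpy as np

def fcm(X, c, m=2.0, iters=100, seed=0):
    """Plain Fuzzy C-means: returns the membership matrix U and cluster centers."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)            # rows sum to 1 (fuzzy memberships)
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None] - centers[None], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))            # standard FCM membership update
        U = inv / inv.sum(axis=1, keepdims=True)
    return U, centers

def key_frames_per_cluster(X, labels, k=2):
    """From each cluster, keep the k frames farthest from all other members."""
    keys = []
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        # summed Euclidean distance from each frame to every other cluster member
        D = np.linalg.norm(X[idx, None] - X[idx][None], axis=2).sum(axis=1)
        keys.extend(idx[np.argsort(-D)[:k]].tolist())
    return sorted(keys)

# Two well-separated blobs stand in for informative-frame feature vectors.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (20, 4)), rng.normal(3, 0.3, (20, 4))])
U, centers = fcm(X, c=2)
labels = U.argmax(axis=1)
keys = key_frames_per_cluster(X, labels)
print(f"{len(keys)} key frames selected from {len(np.unique(labels))} clusters")
```

In the paper, AHA would optimize the cluster centers so FCM does not get stuck in a poor local minimum; the plain FCM above is only a stand-in to make the overall pipeline shape concrete.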


Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4652/11535061/d44d653fc173/41598_2024_75923_Fig1_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4652/11535061/6f02cb960f34/41598_2024_75923_Fig2_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4652/11535061/a5b7e31fd8f7/41598_2024_75923_Fig3_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4652/11535061/9a3cdc145131/41598_2024_75923_Fig4_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4652/11535061/ef79e401a0a0/41598_2024_75923_Fig5_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4652/11535061/dcd1b7d53f2e/41598_2024_75923_Figa_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4652/11535061/cca7f940f055/41598_2024_75923_Fig6_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4652/11535061/587392e72d93/41598_2024_75923_Fig7_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4652/11535061/f5adc2bc9e08/41598_2024_75923_Fig8_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4652/11535061/2c093fe05373/41598_2024_75923_Fig9_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4652/11535061/2078fc928df3/41598_2024_75923_Fig10_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4652/11535061/224dd5f42647/41598_2024_75923_Fig11_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4652/11535061/8b5a0d6078c7/41598_2024_75923_Fig12_HTML.jpg

Similar Articles

1. An effective Key Frame Extraction technique based on Feature Fusion and Fuzzy-C means clustering with Artificial Hummingbird. Sci Rep. 2024 Nov 4;14(1):26651. doi: 10.1038/s41598-024-75923-y.
2. Feature fusion and clustering for key frame extraction. Math Biosci Eng. 2021 Oct 27;18(6):9294-9311. doi: 10.3934/mbe.2021457.
3. Video Summarization Based on Mutual Information and Entropy Sliding Window Method. Entropy (Basel). 2020 Nov 12;22(11):1285. doi: 10.3390/e22111285.
4. News Video Summarization Combining SURF and Color Histogram Features. Entropy (Basel). 2021 Jul 30;23(8):982. doi: 10.3390/e23080982.
5. Intelligent Sports Video Classification Based on Deep Neural Network (DNN) Algorithm and Transfer Learning. Comput Intell Neurosci. 2021 Nov 24;2021:1825273. doi: 10.1155/2021/1825273. eCollection 2021.
6. Heterogeneity image patch index and its application to consumer video summarization. IEEE Trans Image Process. 2014 Jun;23(6):2704-18. doi: 10.1109/TIP.2014.2320814.
7. RPCA-KFE: Key Frame Extraction for Video Using Robust Principal Component Analysis. IEEE Trans Image Process. 2015 Nov;24(11):3742-53. doi: 10.1109/TIP.2015.2445572. Epub 2015 Jun 15.
8. Key Frame Extraction in the Summary Space. IEEE Trans Cybern. 2018 Jun;48(6):1923-1934. doi: 10.1109/TCYB.2017.2718579. Epub 2017 Jul 4.
9. Domain independent redundancy elimination based on flow vectors for static video summarization. Heliyon. 2019 Nov 1;5(10):e02699. doi: 10.1016/j.heliyon.2019.e02699. eCollection 2019 Oct.
10. Visual Feature Learning on Video Object and Human Action Detection: A Systematic Review. Micromachines (Basel). 2021 Dec 31;13(1):72. doi: 10.3390/mi13010072.

References Cited in This Article

1. Video summarization using deep learning techniques: a detailed analysis and investigation. Artif Intell Rev. 2023 Mar 15:1-39. doi: 10.1007/s10462-023-10444-0.
2. News Video Summarization Combining SURF and Color Histogram Features. Entropy (Basel). 2021 Jul 30;23(8):982. doi: 10.3390/e23080982.
3. Deep Attentive Video Summarization With Distribution Consistency Learning. IEEE Trans Neural Netw Learn Syst. 2021 Apr;32(4):1765-1775. doi: 10.1109/TNNLS.2020.2991083. Epub 2021 Apr 2.
4. A survey on deep learning in medical image analysis. Med Image Anal. 2017 Dec;42:60-88. doi: 10.1016/j.media.2017.07.005. Epub 2017 Jul 26.