Video summarization for event-centric videos.

Authors

Li Qingwen, Chen Jianni, Xie Qiqin, Han Xiao

Affiliations

Shanghai University of Finance and Economics, 777 Guoding Rd, Shanghai, 200433, China.

Shanghai University, 99 Shangda Rd, Shanghai, 200444, China.

Publication

Neural Netw. 2023 Apr;161:359-370. doi: 10.1016/j.neunet.2023.01.047. Epub 2023 Feb 3.

DOI: 10.1016/j.neunet.2023.01.047
PMID: 36780859
Abstract

Video summarization has long been used to ease video browsing and plays a more crucial role with the explosion of online videos. In the context of event-centric videos, we aim to extract the corresponding clips of more important events in the video. To tackle the dilemma between the detection precision and the clip completeness faced by previous methods, we present an efficient Boundary-Aware framework for Summary clip Extraction (BASE) to extract summary clips with more precise boundaries while maintaining their completeness. Specifically, we propose a new distance-based importance signal to reflect the progress information in each video. The signal can not only help us to detect boundaries with higher precision, but also make it possible to preserve the clip completeness. For the feature presentation part, we also explore new information types to facilitate video summarization. Our approach outperforms current state-of-the-art video summarization models in terms of more precise clip boundaries and more complete summary clips. Note that we even yield comparable results to manual annotations.
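The abstract does not spell out the form of the distance-based importance signal; one plausible reading (purely an assumption here, not the paper's actual definition) is a per-frame signal that ramps up with progress through each event clip, so that sharp resets in the signal mark clip boundaries:

```python
import numpy as np

def progress_signal(n_frames, boundaries):
    """Hypothetical distance-based progress signal (illustration only).

    boundaries: sorted frame indices where a new event clip starts.
    Within each clip the signal ramps linearly from 0 to 1, so a
    sudden drop back to 0 marks a clip boundary.
    """
    edges = [0] + list(boundaries) + [n_frames]
    signal = np.zeros(n_frames)
    for start, end in zip(edges[:-1], edges[1:]):
        length = end - start
        signal[start:end] = np.arange(length) / max(length - 1, 1)
    return signal
```

A signal of this shape would let a detector localize boundaries (the resets) while still scoring every frame inside a clip, which is consistent with the abstract's claim of balancing boundary precision against clip completeness.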


Similar Articles

1. Video summarization for event-centric videos.
   Neural Netw. 2023 Apr;161:359-370. doi: 10.1016/j.neunet.2023.01.047. Epub 2023 Feb 3.
2. Diversity-Aware Multi-Video Summarization.
   IEEE Trans Image Process. 2017 Oct;26(10):4712-4724. doi: 10.1109/TIP.2017.2708902. Epub 2017 May 26.
3. Hysteroscopy video summarization and browsing by estimating the physician's attention on video segments.
   Med Image Anal. 2012 Jan;16(1):160-76. doi: 10.1016/j.media.2011.06.008. Epub 2011 Aug 24.
4. Interp-SUM: Unsupervised Video Summarization with Piecewise Linear Interpolation.
   Sensors (Basel). 2021 Jul 2;21(13):4562. doi: 10.3390/s21134562.
5. Video Joint Modelling Based on Hierarchical Transformer for Co-Summarization.
   IEEE Trans Pattern Anal Mach Intell. 2023 Mar;45(3):3904-3917. doi: 10.1109/TPAMI.2022.3186506. Epub 2023 Feb 3.
6. Unsupervised Video Summarization Based on Deep Reinforcement Learning with Interpolation.
   Sensors (Basel). 2023 Mar 23;23(7):3384. doi: 10.3390/s23073384.
7. In Defense of Clip-Based Video Relation Detection.
   IEEE Trans Image Process. 2024;33:2759-2769. doi: 10.1109/TIP.2024.3379935. Epub 2024 Apr 9.
8. Heterogeneity image patch index and its application to consumer video summarization.
   IEEE Trans Image Process. 2014 Jun;23(6):2704-18. doi: 10.1109/TIP.2014.2320814.
9. From video summarization to real time video summarization in smart cities and beyond: A survey.
   Front Big Data. 2023 Jan 9;5:1106776. doi: 10.3389/fdata.2022.1106776. eCollection 2022.
10. Surgical gesture classification from video and kinematic data.
    Med Image Anal. 2013 Oct;17(7):732-45. doi: 10.1016/j.media.2013.04.007. Epub 2013 Apr 28.