


Multi-Stage Network for Event-Based Video Deblurring with Residual Hint Attention.

Affiliation

School of Computing, Gachon University, Seongnam 13120, Republic of Korea.

Publication

Sensors (Basel). 2023 Mar 7;23(6):2880. doi: 10.3390/s23062880.

DOI: 10.3390/s23062880
PMID: 36991602
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10056412/
Abstract

Video deblurring aims at removing the motion blur caused by the movement of objects or camera shake. Traditional video deblurring methods have mainly focused on frame-based deblurring, which takes only blurry frames as the input to produce sharp frames. However, frame-based deblurring has shown poor picture quality in challenging cases of video restoration where severely blurred frames are provided as the input. To overcome this issue, recent studies have begun to explore the event-based approach, which uses the event sequence captured by an event camera for motion deblurring. Event cameras have several advantages compared to conventional frame cameras. Among these advantages, event cameras have a low latency in imaging data acquisition (0.001 ms for event cameras vs. 10 ms for frame cameras). Hence, event data can be acquired at a high acquisition rate (up to one microsecond). This means that the event sequence contains more accurate motion information than video frames. Additionally, event data can be acquired with less motion blur. Due to these advantages, the use of event data is highly beneficial for achieving improvements in the quality of deblurred frames. Accordingly, the results of event-based video deblurring are superior to those of frame-based deblurring methods, even for severely blurred video frames. However, the direct use of event data can often generate visual artifacts in the final output frame (e.g., image noise and incorrect textures), because event data intrinsically contain insufficient textures and event noise. To tackle this issue in event-based deblurring, we propose a two-stage coarse-refinement network by adding a frame-based refinement stage that utilizes all the available frames with more abundant textures to further improve the picture quality of the first-stage coarse output. Specifically, a coarse intermediate frame is estimated by performing event-based video deblurring in the first-stage network. 
A residual hint attention (RHA) module is also proposed to extract useful attention information from the coarse output and all the available frames. This module connects the first and second stages and effectively guides the frame-based refinement of the coarse output. The final deblurred frame is then obtained by refining the coarse output using the residual hint attention and all the available frame information in the second-stage network. We validated the deblurring performance of the proposed network on the GoPro synthetic dataset (33 videos and 4702 frames) and the HQF real dataset (11 videos and 2212 frames). Compared to the state-of-the-art method (D2Net), we achieved a performance improvement of 1 dB in PSNR and 0.05 in SSIM on the GoPro dataset, and an improvement of 1.7 dB in PSNR and 0.03 in SSIM on the HQF dataset.
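As a rough illustration of the two-stage idea described above (not the authors' actual network, which uses learned convolutional features and event inputs), the following toy numpy sketch derives a spatial attention map from the residual between the coarse first-stage output and the available frames, then uses it to gate a second-stage blend. All function names, the mean-frame texture stand-in, and the sigmoid gating are illustrative assumptions.

```python
import numpy as np

def residual_hint_attention(coarse, frames):
    """Toy RHA-style module: derive a spatial attention map from the
    coarse deblurred frame and the stack of available frames.
    coarse has shape (H, W); frames has shape (N, H, W)."""
    # The residual between the coarse output and the mean of the input
    # frames hints at regions that still need correction.
    residual = coarse - frames.mean(axis=0)
    # A sigmoid squashes the residual magnitude into a [0.5, 1) map.
    attention = 1.0 / (1.0 + np.exp(-np.abs(residual)))
    return attention

def refine(coarse, frames):
    """Second-stage refinement: blend the coarse frame with frame
    textures, weighted by the residual hint attention."""
    att = residual_hint_attention(coarse, frames)
    texture = frames.mean(axis=0)   # stand-in for learned frame features
    return att * coarse + (1.0 - att) * texture

rng = np.random.default_rng(0)
frames = rng.random((5, 8, 8))     # 5 available frames, 8x8 pixels
coarse = frames.mean(axis=0) + 0.1 * rng.standard_normal((8, 8))
refined = refine(coarse, frames)
print(refined.shape)
```

The point of the sketch is only the data flow: the attention bridge consumes both the coarse output and all available frames, so the refinement stage can draw texture from the frames exactly where the first stage left residual error.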

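The reported gains are measured in PSNR and SSIM. PSNR is defined from the mean squared error against the ground-truth frame, so the 1 dB improvement over D2Net corresponds to roughly a 21% reduction in MSE (a factor of 10^0.1 ≈ 1.26). A minimal sketch of the metric, assuming pixel values normalized to [0, 1]:

```python
import numpy as np

def psnr(ref, test, peak=1.0):
    """Peak signal-to-noise ratio in dB between a reference frame and a
    deblurred frame, both arrays with values in [0, peak]."""
    mse = np.mean((ref - test) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.zeros((4, 4))
test = np.full((4, 4), 0.1)        # uniform error of 0.1 -> MSE = 0.01
print(round(psnr(ref, test), 1))   # 20.0
```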

Figures (PMC):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/769c/10056412/f84498beb0ac/sensors-23-02880-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/769c/10056412/caade86413f5/sensors-23-02880-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/769c/10056412/ce7b6cf84aaf/sensors-23-02880-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/769c/10056412/1b6dd2246ecd/sensors-23-02880-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/769c/10056412/31a7a31351d8/sensors-23-02880-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/769c/10056412/0014b11f6df1/sensors-23-02880-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/769c/10056412/8f4da92f7640/sensors-23-02880-g007.jpg

Similar Articles

1. Multi-Stage Network for Event-Based Video Deblurring with Residual Hint Attention.
   Sensors (Basel). 2023 Mar 7;23(6):2880. doi: 10.3390/s23062880.
2. Image Deblurring Using Multi-Stream Bottom-Top-Bottom Attention Network and Global Information-Based Fusion and Reconstruction Network.
   Sensors (Basel). 2020 Jul 3;20(13):3724. doi: 10.3390/s20133724.
3. Video deblurring algorithm using accurate blur kernel estimation and residual deconvolution based on a blurred-unblurred frame pair.
   IEEE Trans Image Process. 2013 Mar;22(3):926-40. doi: 10.1109/TIP.2012.2222898. Epub 2012 Oct 5.
4. Combining Motion Compensation with Spatiotemporal Constraint for Video Deblurring.
   Sensors (Basel). 2018 Jun 1;18(6):1774. doi: 10.3390/s18061774.
5. Stereoscopic video deblurring transformer.
   Sci Rep. 2024 Jun 21;14(1):14342. doi: 10.1038/s41598-024-63860-9.
6. Event-Assisted Blurriness Representation Learning for Blurry Image Unfolding.
   IEEE Trans Image Process. 2024;33:5824-5836. doi: 10.1109/TIP.2024.3468023. Epub 2024 Oct 15.
7. High Frame Rate Video Reconstruction Based on an Event Camera.
   IEEE Trans Pattern Anal Mach Intell. 2022 May;44(5):2519-2533. doi: 10.1109/TPAMI.2020.3036667. Epub 2022 Apr 1.
8. Removing motion blur with space-time processing.
   IEEE Trans Image Process. 2011 Oct;20(10):2990-3000. doi: 10.1109/TIP.2011.2131666. Epub 2011 Mar 24.
9. SuperFast: 200× Video Frame Interpolation via Event Camera.
   IEEE Trans Pattern Anal Mach Intell. 2023 Jun;45(6):7764-7780. doi: 10.1109/TPAMI.2022.3224051. Epub 2023 May 5.
10. Event-Enhanced Snapshot Mosaic Hyperspectral Frame Deblurring.
    IEEE Trans Pattern Anal Mach Intell. 2025 Jan;47(1):206-223. doi: 10.1109/TPAMI.2024.3465455. Epub 2024 Dec 4.
