

MIMO-Uformer: A Transformer-Based Image Deblurring Network for Vehicle Surveillance Scenarios.

Authors

Zhang Jian, Cheng Baoping, Zhang Tengying, Zhao Yongsheng, Fu Tao, Wu Zijian, Tao Xiaoming

Affiliations

China Mobile (Hangzhou) Information Technology Co., Ltd., Hangzhou 311100, China.

Department of Electronic Engineering, Tsinghua University, Beijing 100084, China.

Publication

J Imaging. 2024 Oct 31;10(11):274. doi: 10.3390/jimaging10110274.

Abstract

Motion blur is a common problem in surveillance scenarios, and it obstructs the acquisition of valuable information. Thanks to the success of deep learning, a series of CNN-based architectures have been designed for image deblurring and have made great progress. As another type of neural network, transformers have exhibited powerful deep representation learning and impressive performance on high-level vision tasks. Transformer-based networks leverage self-attention to capture long-range dependencies in the data, yet their computational complexity is quadratic in the spatial resolution, which makes transformers infeasible for restoring high-resolution images. In this article, we propose an efficient transformer-based deblurring network, named MIMO-Uformer, for vehicle-surveillance scenarios. The distinctive feature of MIMO-Uformer is that the window-based multi-head self-attention (W-MSA) of the Swin transformer is employed to reduce the computational complexity and is then incorporated into a multi-input, multi-output U-shaped network (MIMO-UNet), so that performance benefits from MIMO-UNet's processing of multi-scale images. However, most deblurring networks are designed for global blur, whereas local blur is more common in vehicle-surveillance scenarios, since the motion blur there is primarily caused by locally moving vehicles. Based on this observation, we further propose an Intersection over Patch (IoP) factor and a supervised morphological loss to improve performance on local blur. Extensive experiments on a public dataset and a self-established dataset are carried out to verify the effectiveness of the approach. As a result, the deblurring performance in terms of PSNR is improved by at least 0.21 dB on GOPRO and 0.74 dB on the self-established dataset compared to existing benchmarks.
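
To make the complexity argument concrete: the abstract does not give the cost expressions, but the W-MSA it adopts comes from the Swin transformer, whose standard formulation (an assumption that MIMO-Uformer inherits it unchanged) compares, for an H × W feature map with C channels and a fixed window size M,

$$\Omega(\mathrm{MSA}) = 4HWC^{2} + 2(HW)^{2}C$$
$$\Omega(\mathrm{W\text{-}MSA}) = 4HWC^{2} + 2M^{2}HWC$$

Since M is a small constant (7 in the original Swin transformer), the attention term grows linearly rather than quadratically in the number of tokens HW, which is what makes window-based attention tractable for high-resolution image restoration.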

Figure: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e0d2/11596006/9bc436c687f9/jimaging-10-00274-g001.jpg
