
Deep Neural Networks for No-Reference and Full-Reference Image Quality Assessment.

Publication

IEEE Trans Image Process. 2018 Jan;27(1):206-219. doi: 10.1109/TIP.2017.2760518. Epub 2017 Oct 10.

Abstract

We present a deep neural network-based approach to image quality assessment (IQA). The network is trained end-to-end and comprises ten convolutional layers and five pooling layers for feature extraction, and two fully connected layers for regression, which makes it significantly deeper than related IQA models. Unique features of the proposed architecture are that: 1) with slight adaptations it can be used in a no-reference (NR) as well as in a full-reference (FR) IQA setting and 2) it allows for joint learning of local quality and local weights, i.e., relative importance of local quality to the global quality estimate, in a unified framework. Our approach is purely data-driven and does not rely on hand-crafted features or other types of prior domain knowledge about the human visual system or image statistics. We evaluate the proposed approach on the LIVE, CSIQ, and TID2013 databases as well as the LIVE In the Wild Image Quality Challenge database and show superior performance to state-of-the-art NR and FR IQA methods. Finally, cross-database evaluation shows a high ability to generalize between different databases, indicating a high robustness of the learned features.
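The joint learning of local quality and local weights mentioned in the abstract amounts to aggregating per-patch predictions into one global score via a weight-normalized average. A minimal sketch of that aggregation step (the values and variable names below are illustrative, not taken from the paper):

```python
# Hypothetical per-patch outputs of the two regression branches described in
# the abstract: a local quality score q_i and a learned local weight w_i
# (the weight encodes how much that patch should influence the global score).
local_quality = [0.8, 0.5, 0.9, 0.3]
local_weight = [2.0, 1.0, 3.0, 0.5]

# Global quality estimate: weighted average, q = sum_i(w_i * q_i) / sum_i(w_i)
global_quality = sum(w * q for w, q in zip(local_weight, local_quality)) / sum(local_weight)
print(global_quality)  # pulled toward the qualities of high-weight patches
```

Because the weights are learned jointly with the quality scores, patches that are uninformative about perceived quality (e.g., flat sky regions) can be down-weighted rather than diluting the global estimate.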

