
End-to-End Blind Image Quality Assessment Using Deep Neural Networks.

Publication Information

IEEE Trans Image Process. 2018 Mar;27(3):1202-1213. doi: 10.1109/TIP.2017.2774045.

Abstract

We propose a multi-task end-to-end optimized deep neural network (MEON) for blind image quality assessment (BIQA). MEON consists of two sub-networks (a distortion identification network and a quality prediction network) that share their early layers. Unlike traditional methods used for training multi-task networks, our training process is performed in two steps. In the first step, we train a distortion type identification sub-network, for which large-scale training samples are readily available. In the second step, starting from the pre-trained early layers and the outputs of the first sub-network, we train a quality prediction sub-network using a variant of the stochastic gradient descent method. Different from most deep neural networks, we choose biologically inspired generalized divisive normalization (GDN) instead of rectified linear unit as the activation function. We empirically demonstrate that GDN is effective at reducing model parameters/layers while achieving similar quality prediction performance. With modest model complexity, the proposed MEON index achieves state-of-the-art performance on four publicly available benchmarks. Moreover, we demonstrate the strong competitiveness of MEON against state-of-the-art BIQA models using the group maximum differentiation competition methodology.
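To make the architecture described above more concrete, the following is a minimal PyTorch sketch of a MEON-style network: shared early convolutional layers with GDN activations feeding a distortion-identification head and a quality-prediction head, with the quality score fused as a probability-weighted sum. This is not the authors' implementation; the channel counts, kernel sizes, number of distortion types, and GDN initialization are illustrative assumptions.

```python
# Sketch of a MEON-style two-branch network (not the authors' released code).
# Layer sizes and the number of distortion types are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GDN(nn.Module):
    """Generalized divisive normalization:
    y_i = x_i / sqrt(beta_i + sum_j gamma_ij * x_j^2).
    A simplified sketch; the original formulation constrains beta and gamma
    to stay positive, which is omitted here for brevity."""

    def __init__(self, channels, eps=1e-6):
        super().__init__()
        self.gamma = nn.Parameter(torch.eye(channels) * 0.1)  # (C, C) coupling weights
        self.beta = nn.Parameter(torch.ones(channels))        # (C,) per-channel offsets
        self.eps = eps

    def forward(self, x):
        # Apply the learned coupling to the squared responses via a 1x1 convolution.
        weight = self.gamma.view(*self.gamma.shape, 1, 1)
        norm = F.conv2d(x * x, weight, self.beta)
        return x / torch.sqrt(torch.clamp(norm, min=self.eps))


class MEONSketch(nn.Module):
    def __init__(self, num_distortions=5):
        super().__init__()
        # Shared early layers (feature extraction), GDN in place of ReLU.
        self.shared = nn.Sequential(
            nn.Conv2d(3, 8, 5, stride=2, padding=2), GDN(8),
            nn.Conv2d(8, 16, 5, stride=2, padding=2), GDN(16),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), GDN(32),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Sub-network 1: distortion type identification (pre-trained in step one).
        self.distortion_head = nn.Linear(32, num_distortions)
        # Sub-network 2: per-distortion quality scores (trained in step two).
        self.quality_head = nn.Linear(32, num_distortions)

    def forward(self, x):
        feat = self.shared(x)
        p = F.softmax(self.distortion_head(feat), dim=1)  # distortion probabilities
        s = self.quality_head(feat)                       # score per distortion type
        q = (p * s).sum(dim=1)                            # probability-weighted quality
        return p, q


# Usage example with a random batch of RGB patches:
# model = MEONSketch()
# probs, quality = model(torch.randn(4, 3, 256, 256))
```

Feeding the first sub-network's probabilities into the final score mirrors the abstract's description of the second training step starting from the outputs of the first sub-network, though the exact fusion used in the paper may differ from this sketch.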

