Blind Deep S3D Image Quality Evaluation via Local to Global Feature Aggregation.

Publication Info

IEEE Trans Image Process. 2017 Oct;26(10):4923-4936. doi: 10.1109/TIP.2017.2725584. Epub 2017 Jul 11.

Abstract

Previously, no-reference (NR) stereoscopic 3D (S3D) image quality assessment (IQA) algorithms have been limited to the extraction of reliable hand-crafted features based on an understanding of the insufficiently revealed human visual system or natural scene statistics. Furthermore, compared with full-reference (FR) S3D IQA metrics, it is difficult to achieve competitive quality score predictions using the extracted features, which are not optimized with respect to human opinion. To cope with this limitation of the conventional approach, we introduce a novel deep learning scheme for NR S3D IQA in terms of local to global feature aggregation. A deep convolutional neural network (CNN) model is trained in a supervised manner through two-step regression. First, to overcome the lack of training data, local patch-based CNNs are modeled, and the FR S3D IQA metric is used to approximate a reference ground-truth for training the CNNs. The automatically extracted local abstractions are aggregated into global features by inserting an aggregation layer in the deep structure. The locally trained model parameters are then updated iteratively using supervised global labeling, i.e., subjective mean opinion score (MOS). In particular, the proposed deep NR S3D image quality evaluator does not estimate the depth from a pair of S3D images. The S3D image quality scores predicted by the proposed method represent a significant improvement over those of previous NR S3D IQA algorithms. Indeed, the accuracy of the proposed method is competitive with FR S3D IQA metrics, having ~ 91% correlation in terms of MOS.
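The two-stage pipeline the abstract describes (local patch-based features, an aggregation layer that pools them into a global feature, then a regression onto a MOS-scale score) can be sketched roughly as follows. This is a minimal illustration, not the paper's method: the patch-based CNN is replaced by simple hand-written patch statistics, the aggregation layer by average pooling, and the regression weights `w`, `b` are random placeholders rather than parameters learned from FR-metric proxy labels and MOS fine-tuning.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_patches(img, patch=32, stride=32):
    """Split an image into non-overlapping patches (local stage)."""
    h, w = img.shape[:2]
    return [img[r:r + patch, c:c + patch]
            for r in range(0, h - patch + 1, stride)
            for c in range(0, w - patch + 1, stride)]

def local_feature(patch_l, patch_r):
    # Stand-in for the patch-based CNN: simple statistics of the
    # left/right stereo patches and their luminance difference.
    return np.array([patch_l.mean(), patch_l.std(),
                     patch_r.mean(), patch_r.std(),
                     abs(patch_l.mean() - patch_r.mean())])

def aggregate(features):
    # Stand-in for the aggregation layer: average-pool the local
    # abstractions into one global feature vector.
    return np.mean(features, axis=0)

def predict_quality(left, right, w, b):
    # Global stage: linear regression from the aggregated feature
    # to a scalar quality score (MOS scale).
    feats = [local_feature(pl, pr)
             for pl, pr in zip(extract_patches(left),
                               extract_patches(right))]
    return float(aggregate(feats) @ w + b)

left = rng.random((128, 128))   # synthetic left view
right = rng.random((128, 128))  # synthetic right view
w = rng.random(5)               # hypothetical regression weights
b = 0.5                         # hypothetical regression bias
score = predict_quality(left, right, w, b)
```

In the paper, the first training step fits the local stage against quality labels approximated by an FR S3D IQA metric (solving the training-data shortage), and the second step updates the whole network end to end against subjective MOS; the sketch above only shows the inference-time data flow.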
