
NIQSV+: A No-Reference Synthesized View Quality Assessment Metric.

Publication Information

IEEE Trans Image Process. 2018 Apr;27(4):1652-1664. doi: 10.1109/TIP.2017.2781420.

Abstract

Benefiting from multi-view video plus depth and depth-image-based rendering (DIBR) technologies, only limited views of a real 3-D scene need to be captured, compressed, and transmitted. However, the quality assessment of synthesized views is very challenging, since some new types of distortions, which are inherently different from texture coding errors, are inevitably produced by view synthesis and depth map compression, and the corresponding original views (reference views) are usually not available. Thus, full-reference quality metrics cannot be used for synthesized views. In this paper, we propose a novel no-reference image quality assessment method for 3-D synthesized views (called NIQSV+). This blind metric can evaluate the quality of synthesized views by measuring the typical synthesis distortions: blurry regions, black holes, and stretching, with access to neither the reference image nor the depth map. To evaluate the performance of the proposed method, we compare it with four full-reference 3-D (synthesized view dedicated) metrics, five full-reference 2-D metrics, and three no-reference 2-D metrics. In terms of their correlations with subjective scores, our experimental results show that the proposed no-reference metric approaches the best of the state-of-the-art full-reference and no-reference 3-D metrics, and significantly outperforms the widely used no-reference and full-reference 2-D metrics. In terms of its approximation of human ranking, the proposed metric achieves the best performance in the experimental test.
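The abstract names black holes (unfilled disocclusion regions) as one of the typical DIBR synthesis distortions that a blind metric must measure. The sketch below is not the NIQSV+ algorithm itself; it is a minimal illustration, under the assumption that black holes appear as near-zero-intensity pixel regions in a grayscale synthesized view, of how such a distortion can be quantified without any reference image. The function name and threshold are hypothetical choices for illustration.

```python
import numpy as np

def black_hole_ratio(img, threshold=8):
    """Return the fraction of pixels darker than `threshold` (0-255 grayscale).

    Assumption (not from the paper): black holes in a DIBR-synthesized
    view show up as unfilled disoccluded regions with near-zero intensity,
    so their area fraction serves as a simple no-reference distortion cue.
    """
    img = np.asarray(img, dtype=np.uint8)
    return float(np.mean(img < threshold))

# Synthetic example: a 100x100 view with a 10x20 unfilled hole.
view = np.full((100, 100), 128, dtype=np.uint8)
view[40:50, 30:50] = 0
print(black_hole_ratio(view))  # 200 hole pixels / 10000 total = 0.02
```

A real metric would combine such a cue with measures of blur and stretching, as the paper does, but this shows why no reference view is needed: the distortion signature is detectable from the synthesized image alone.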

