
FOVQA: Blind Foveated Video Quality Assessment.

Authors

Jin Yize, Patney Anjul, Webb Richard, Bovik Alan C

Publication

IEEE Trans Image Process. 2022;31:4571-4584. doi: 10.1109/TIP.2022.3185738. Epub 2022 Jul 12.

Abstract

Previous blind or No Reference (NR) image/video quality assessment (IQA/VQA) models largely rely on features drawn from natural scene statistics (NSS), but under the assumption that the image statistics are stationary in the spatial domain. Several of these models are quite successful on standard pictures. However, in Virtual Reality (VR) applications, foveated video compression is regaining attention, and the concept of space-variant quality assessment is of interest, given the availability of increasingly high spatial and temporal resolution contents and practical ways of measuring gaze direction. Distortions from foveated video compression increase with eccentricity, implying that the natural scene statistics are space-variant. Towards advancing the development of foveated compression/streaming algorithms, we have devised a no-reference (NR) foveated video quality assessment model, called FOVQA, which is based on new models of space-variant natural scene statistics (NSS) and natural video statistics (NVS). Specifically, we deploy a space-variant generalized Gaussian distribution (SV-GGD) model and a space-variant asynchronous generalized Gaussian distribution (SV-AGGD) model of mean subtracted contrast normalized (MSCN) coefficients and products of neighboring MSCN coefficients, respectively. We devise a foveated video quality predictor that extracts radial basis features, and other features that capture perceptually annoying rapid quality fall-offs. We find that FOVQA achieves state-of-the-art (SOTA) performance on the new 2D LIVE-FBT-FCVR database, as compared with other leading foveated IQA/VQA models. We have made our implementation of FOVQA available at: https://live.ece.utexas.edu/research/Quality/FOVQA.zip.
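The MSCN coefficients and GGD fitting that the abstract mentions follow a standard NSS recipe: divisively normalize each pixel by Gaussian-weighted local statistics, then match the moments of the resulting coefficients to a generalized Gaussian distribution. The sketch below illustrates only the conventional space-invariant version of this pipeline (FOVQA's contribution is making these statistics functions of eccentricity); the function names, the window width `sigma`, and the stabilizing constant `c` are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.special import gamma

def mscn_coefficients(image, sigma=7/6, c=1.0):
    """Mean-subtracted contrast-normalized (MSCN) coefficients.

    Each pixel is centered by a Gaussian-weighted local mean and divided
    by the local standard deviation (plus a small constant c for stability).
    sigma and c here are conventional choices, not FOVQA's exact values.
    """
    image = image.astype(np.float64)
    mu = gaussian_filter(image, sigma)                    # local mean
    var = gaussian_filter(image * image, sigma) - mu * mu # local variance
    std = np.sqrt(np.abs(var))                            # local deviation
    return (image - mu) / (std + c)

def fit_ggd_shape(x, grid=np.arange(0.2, 5.0, 0.001)):
    """Moment-matching estimate of the GGD shape parameter beta.

    Matches the sample ratio r = E[|x|]^2 / E[x^2] against
    rho(beta) = Gamma(2/beta)^2 / (Gamma(1/beta) * Gamma(3/beta))
    over a grid of candidate shapes; beta = 2 recovers the Gaussian case.
    """
    x = np.ravel(x)
    r = np.mean(np.abs(x)) ** 2 / np.mean(x * x)
    rho = gamma(2 / grid) ** 2 / (gamma(1 / grid) * gamma(3 / grid))
    return grid[np.argmin(np.abs(rho - r))]
```

In the space-variant models (SV-GGD, SV-AGGD), the fitted shape and scale parameters are no longer global but vary with distance from the gaze point, which is what lets the predictor's radial basis features track eccentricity-dependent compression distortion.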

