IEEE Trans Pattern Anal Mach Intell. 2022 Apr;44(4):2198-2215. doi: 10.1109/TPAMI.2020.3028509. Epub 2022 Mar 4.
For 360° video, existing visual quality assessment (VQA) approaches are designed on either whole frames or cropped patches, ignoring the fact that subjects can only access viewports. When watching 360° video, subjects select viewports through head movement (HM) and then fixate on attractive regions within the viewports through eye movement (EM). Therefore, this paper proposes a two-stage multi-task approach for viewport-based VQA on 360° video. Specifically, we first establish a large-scale VQA dataset of 360° video, called VQA-ODV, which collects subjective quality scores as well as HM and EM data on 600 video sequences. By mining our dataset, we find that the subjective quality of 360° video is related to camera motion, viewport positions, and saliency within viewports. Accordingly, we propose a viewport-based convolutional neural network (V-CNN) approach for VQA on 360° video, with a novel multi-task architecture composed of a viewport proposal network (VP-net) and a viewport quality network (VQ-net). The VP-net handles the auxiliary tasks of camera motion detection and viewport proposal, while the VQ-net accomplishes the auxiliary task of viewport saliency prediction and the main task of VQA. The experiments validate that our V-CNN approach significantly advances state-of-the-art VQA performance on 360° video and is also effective in the three auxiliary tasks.
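To make the two-stage multi-task structure concrete, below is a minimal sketch of how such a V-CNN pipeline could be wired up in PyTorch. This is not the authors' implementation: the backbone layers, the number of viewport proposals, the (longitude, latitude, score) parameterization of proposals, and all tensor sizes are illustrative assumptions. It only shows the division of labor described in the abstract, where a VP-net stage produces camera-motion and viewport-proposal outputs, and a VQ-net stage takes a cropped viewport and produces a saliency map plus a quality score.

```python
# Minimal sketch of a two-stage multi-task V-CNN-style pipeline.
# All module definitions, layer sizes, and shapes are illustrative assumptions,
# not the authors' actual architecture.
import torch
import torch.nn as nn


class VPNet(nn.Module):
    """Stage 1: auxiliary tasks of camera-motion detection and viewport proposal."""

    def __init__(self, num_proposals: int = 8):
        super().__init__()
        self.backbone = nn.Sequential(          # placeholder feature extractor
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.camera_motion = nn.Linear(64, 1)               # camera-motion score per frame
        self.viewports = nn.Linear(64, num_proposals * 3)   # (lon, lat, score) per proposal
        self.num_proposals = num_proposals

    def forward(self, frame: torch.Tensor):
        feat = self.backbone(frame)
        motion = torch.sigmoid(self.camera_motion(feat))
        proposals = self.viewports(feat).view(-1, self.num_proposals, 3)
        return motion, proposals


class VQNet(nn.Module):
    """Stage 2: auxiliary viewport-saliency prediction plus the main VQA score."""

    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.saliency_head = nn.Conv2d(64, 1, 1)            # per-pixel saliency within viewport
        self.quality_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1)
        )

    def forward(self, viewport: torch.Tensor):
        feat = self.backbone(viewport)
        saliency = torch.sigmoid(self.saliency_head(feat))
        quality = self.quality_head(feat)                   # viewport-level quality score
        return saliency, quality


if __name__ == "__main__":
    frame = torch.randn(1, 3, 256, 512)      # toy equirectangular frame
    motion, proposals = VPNet()(frame)
    viewport = torch.randn(1, 3, 224, 224)   # one cropped viewport (toy size)
    saliency, quality = VQNet()(viewport)
    print(motion.shape, proposals.shape, saliency.shape, quality.shape)
```

In practice, per-viewport quality scores would be aggregated (e.g., weighted by proposal scores) into a sequence-level VQA prediction; the aggregation rule here is left out since the abstract does not specify it.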