Wu Kejun, Liu Qiong, Yap Kim-Hui, Yang You
Opt Lett. 2024 Feb 1;49(3):562-565. doi: 10.1364/OL.505496.
Multifocal multiview (MFMV) is an emerging form of high-dimensional optical data that records richer scene information but yields huge data volumes. To unveil its imaging mechanism, we present an angular-focal-spatial representation model, which decomposes high-dimensional MFMV data into angular, spatial, and focal dimensions. To construct a comprehensive MFMV dataset, we leverage representative imaging prototypes, including digital camera imaging, emerging plenoptic refocusing, and synthetic Blender 3D creation. To our knowledge, it is the first MFMV dataset built from multiple acquisition methods. To efficiently compress MFMV data, we propose the first, to our knowledge, MFMV data compression scheme based on the angular-focal-spatial representation. It exploits inter-view, inter-stack, and intra-frame predictions to eliminate data redundancy in the angular, focal, and spatial dimensions, respectively. Experiments demonstrate that the proposed scheme outperforms the standard HEVC and MV-HEVC coding methods, achieving PSNR gains of up to 3.693 dB and bitrate savings of up to 64.22%.
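The three prediction directions can be illustrated with a toy sketch. Here an MFMV volume is modeled as a 4D array indexed by (view, focal level, row, column); the array shape, the simple previous-neighbor predictors, and all variable names are illustrative assumptions, not the paper's actual codec configuration.

```python
import numpy as np

# Hypothetical toy MFMV tensor: (views, focal levels, height, width).
# Random data stands in for captured frames purely for illustration.
rng = np.random.default_rng(0)
mfmv = rng.integers(0, 256, size=(4, 3, 32, 32)).astype(np.int16)

# Inter-view prediction: each view predicted from the previous view
# at the same focal level (targets angular redundancy).
inter_view = mfmv[1:] - mfmv[:-1]

# Inter-stack prediction: each focal slice predicted from the previous
# slice of the same view (targets focal redundancy).
inter_stack = mfmv[:, 1:] - mfmv[:, :-1]

# Intra-frame prediction: a crude horizontal predictor inside a single
# frame (targets spatial redundancy).
frame = mfmv[0, 0]
intra = frame[:, 1:] - frame[:, :-1]

# For real (correlated) content, such residuals carry far less energy
# than the raw samples, which is what an entropy coder exploits.
```

A real codec replaces these one-sample differences with motion/disparity-compensated prediction and transform coding, but the dimension along which each predictor operates is the point of the decomposition.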