Multifocal multiview imaging and data compression based on angular-focal-spatial representation.

Author Information

Wu Kejun, Liu Qiong, Yap Kim-Hui, Yang You

Publication Information

Opt Lett. 2024 Feb 1;49(3):562-565. doi: 10.1364/OL.505496.

DOI: 10.1364/OL.505496
PMID: 38300059
Abstract

Multifocal multiview (MFMV) is an emerging form of high-dimensional optical data that records richer scene information but yields huge volumes of data. To unveil its imaging mechanism, we present an angular-focal-spatial representation model, which decomposes high-dimensional MFMV data into angular, spatial, and focal dimensions. To construct a comprehensive MFMV dataset, we leverage representative imaging prototypes, including digital camera imaging, emerging plenoptic refocusing, and synthesized Blender 3D creation. It is believed to be the first MFMV dataset of its kind acquired in multiple ways. To efficiently compress MFMV data, we propose the first, to our knowledge, MFMV data compression scheme based on angular-focal-spatial representation. It exploits inter-view, inter-stack, and intra-frame predictions to eliminate data redundancy in the angular, focal, and spatial dimensions, respectively. Experiments demonstrate that the proposed scheme outperforms the standard HEVC and MV-HEVC coding methods, with PSNR gains of up to 3.693 dB and bitrate savings of up to 64.22%.
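
The abstract maps the three prediction types onto the three axes of the angular-focal-spatial representation. The following Python snippet is a minimal, hypothetical sketch (not taken from the paper or its code): it assumes MFMV data can be held in a 5-D NumPy array indexed by (view, focal plane, row, column, channel), and the array name mfmv, the toy sizes, and the naive one-step residuals are illustrative assumptions only, meant to show which axis each prediction type exploits.

    # Illustrative layout of MFMV data along angular (view), focal (plane), and
    # spatial (row, col, channel) dimensions. All names and sizes are hypothetical.
    import numpy as np

    V, F, H, W, C = 4, 3, 64, 64, 3   # views, focal planes, height, width, channels (toy sizes)
    rng = np.random.default_rng(0)
    mfmv = rng.integers(0, 256, size=(V, F, H, W, C), dtype=np.uint8)

    def inter_view_residual(data, v, f):
        # Angular redundancy: predict view v from neighbouring view v-1 at the same focal plane.
        return data[v, f].astype(np.int16) - data[v - 1, f].astype(np.int16)

    def inter_stack_residual(data, v, f):
        # Focal redundancy: predict focal plane f from the adjacent plane f-1 within the same view.
        return data[v, f].astype(np.int16) - data[v, f - 1].astype(np.int16)

    def intra_residual(frame):
        # Spatial redundancy: predict each row from the row above it (a crude stand-in
        # for the block-based intra prediction used by HEVC-style codecs).
        pred = np.roll(frame.astype(np.int16), 1, axis=0)
        pred[0] = 0
        return frame.astype(np.int16) - pred

    # On real captures, low residual energy along an axis indicates redundancy that the
    # corresponding prediction mode can remove; the random toy data here only exercises the code.
    print("inter-view  |residual|:", np.abs(inter_view_residual(mfmv, 1, 0)).mean())
    print("inter-stack |residual|:", np.abs(inter_stack_residual(mfmv, 1, 1)).mean())
    print("intra       |residual|:", np.abs(intra_residual(mfmv[1, 1])).mean())

An actual codec would replace these one-step differences with block-based, disparity- or focus-compensated prediction inside an HEVC-style encoder; the sketch only illustrates where the angular, focal, and spatial redundancy lives in the data layout.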

Similar Articles

1. Multifocal multiview imaging and data compression based on angular-focal-spatial representation.
   Opt Lett. 2024 Feb 1;49(3):562-565. doi: 10.1364/OL.505496.
2. High dimensional optical data - varifocal multiview imaging, compression and evaluation.
   Opt Express. 2023 Nov 20;31(24):39483-39499. doi: 10.1364/OE.504717.
3. End-to-end varifocal multiview images coding framework from data acquisition end to vision application end.
   Opt Express. 2023 Mar 27;31(7):11659-11679. doi: 10.1364/OE.482141.
4. Highly Efficient Multiview Depth Coding Based on Histogram Projection and Allowable Depth Distortion.
   IEEE Trans Image Process. 2021;30:402-417. doi: 10.1109/TIP.2020.3036760. Epub 2020 Nov 23.
5. A Flexible Coding Scheme Based on Block Krylov Subspace Approximation for Light Field Displays with Stacked Multiplicative Layers.
   Sensors (Basel). 2021 Jul 4;21(13):4574. doi: 10.3390/s21134574.
6. Shearlet Transform based Light Field Compression Under Low Bitrates.
   IEEE Trans Image Process. 2020 Jan 29. doi: 10.1109/TIP.2020.2969087.
7. Reference View Selection in DIBR-Based Multiview Coding.
   IEEE Trans Image Process. 2016 Apr;25(4):1808-19. doi: 10.1109/TIP.2016.2530303. Epub 2016 Feb 15.
8. Plenoptic Image Coding using Macropixel-based Intra Prediction.
   IEEE Trans Image Process. 2018 May 2. doi: 10.1109/TIP.2018.2832449.
9. Scalable Coding of Plenoptic Images by Using a Sparse Set and Disparities.
   IEEE Trans Image Process. 2016 Jan;25(1):80-91. doi: 10.1109/TIP.2015.2498406. Epub 2015 Nov 5.
10. Encoder-Driven Inpainting Strategy in Multiview Video Compression.
    IEEE Trans Image Process. 2016 Jan;25(1):134-49. doi: 10.1109/TIP.2015.2498400. Epub 2015 Nov 5.