Suppr 超能文献



A Comprehensive Exploration of Fidelity Quantification in Computer-Generated Images

Authors

Duminil Alexandra, Ieng Sio-Song, Gruyer Dominique

Affiliation

Department of Components and Systems (COSYS)/Perceptions, Interactions, Behaviour and Simulations of Road and Street Users Laboratory (PICS-L)/Gustave Eiffel University, F-77454 Marne-la-Vallée, France.

Publication

Sensors (Basel). 2024 Apr 11;24(8):2463. doi: 10.3390/s24082463.

DOI:10.3390/s24082463
PMID:38676079
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11054344/
Abstract

Generating realistic road scenes is crucial for advanced driving systems, particularly for training deep learning methods and validation. Numerous efforts aim to create larger and more realistic synthetic datasets using graphics engines or synthetic-to-real domain adaptation algorithms. In the realm of computer-generated images (CGIs), assessing fidelity is challenging and involves both objective and subjective aspects. Our study adopts a comprehensive conceptual framework to quantify the fidelity of RGB images, unlike existing methods that are predominantly application-specific. This is probably due to the data complexity and huge range of possible situations and conditions encountered. In this paper, a set of distinct metrics assessing the level of fidelity of virtual RGB images is proposed. For quantifying image fidelity, we analyze both local and global perspectives of texture and the high-frequency information in images. Our focus is on the statistical characteristics of realistic and synthetic road datasets, using over 28,000 images from at least 10 datasets. Through a thorough examination, we aim to reveal insights into texture patterns and high-frequency components contributing to the objective perception of data realism in road scenes. This study, exploring image fidelity in both virtual and real conditions, takes the perspective of an embedded camera rather than the human eye. The results of this work, including a pioneering set of objective scores applied to real, virtual, and improved virtual data, offer crucial insights and are an asset for the scientific community in quantifying fidelity levels.
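The abstract describes quantifying fidelity through the high-frequency content of images. As a rough illustration of that idea only — the function `high_frequency_ratio`, the FFT-based metric, and the 0.25 cutoff are illustrative assumptions, not the authors' actual metrics — a minimal sketch might score an image by the fraction of its spectral energy above a radial frequency cutoff:

```python
import numpy as np

def high_frequency_ratio(img: np.ndarray, cutoff: float = 0.25) -> float:
    """Fraction of spectral energy above a radial frequency cutoff.

    A crude proxy for high-frequency image content; the cutoff and
    the metric itself are illustrative choices for this sketch.
    """
    f = np.fft.fftshift(np.fft.fft2(img.astype(float)))
    power = np.abs(f) ** 2
    h, w = img.shape
    yy, xx = np.mgrid[:h, :w]
    # Radial distance from the spectrum centre, in normalized frequency units
    r = np.hypot((yy - h / 2) / h, (xx - w / 2) / w)
    return float(power[r > cutoff].sum() / power.sum())

# A smooth gradient concentrates energy at low frequencies,
# while random noise spreads energy across the whole spectrum.
rng = np.random.default_rng(0)
smooth = np.tile(np.linspace(0, 1, 64), (64, 1))
noisy = rng.random((64, 64))
print(high_frequency_ratio(smooth), high_frequency_ratio(noisy))
```

In this spirit, synthetic images that lack fine texture would tend to score lower than real camera footage of the same scene, though the paper's actual analysis combines several texture and frequency statistics rather than a single ratio.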


Figures
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3ee2/11054344/375de098ca90/sensors-24-02463-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3ee2/11054344/1f6bb83d06c6/sensors-24-02463-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3ee2/11054344/502af2b40f6b/sensors-24-02463-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3ee2/11054344/6ffe6e5160af/sensors-24-02463-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3ee2/11054344/315859e15eb5/sensors-24-02463-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3ee2/11054344/ac6ba4995d4e/sensors-24-02463-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3ee2/11054344/4b3825f789d9/sensors-24-02463-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3ee2/11054344/7bb07f459948/sensors-24-02463-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3ee2/11054344/96605020d2e7/sensors-24-02463-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3ee2/11054344/eacb8f2444b8/sensors-24-02463-g010.jpg

Similar Articles

1
A Comprehensive Exploration of Fidelity Quantification in Computer-Generated Images.
Sensors (Basel). 2024 Apr 11;24(8):2463. doi: 10.3390/s24082463.
2
Translational Metabolomics of Head Injury: Exploring Dysfunctional Cerebral Metabolism with Ex Vivo NMR Spectroscopy-Based Metabolite Quantification.
3
AADS: Augmented autonomous driving simulation using data-driven algorithms.
Sci Robot. 2019 Mar 27;4(28). doi: 10.1126/scirobotics.aaw0863.
4
Realistic endoscopic image generation method using virtual-to-real image-domain translation.
Healthc Technol Lett. 2019 Nov 26;6(6):214-219. doi: 10.1049/htl.2019.0071. eCollection 2019 Dec.
5
Volume rendering of visible human data for an anatomical virtual environment.
Stud Health Technol Inform. 1996;29:352-70.
6
Creating High Fidelity Synthetic Pelvis Radiographs Using Generative Adversarial Networks: Unlocking the Potential of Deep Learning Models Without Patient Privacy Concerns.
J Arthroplasty. 2023 Oct;38(10):2037-2043.e1. doi: 10.1016/j.arth.2022.12.013. Epub 2022 Dec 17.
7
Deep Visible and Thermal Image Fusion for Enhanced Pedestrian Visibility.
Sensors (Basel). 2019 Aug 28;19(17):3727. doi: 10.3390/s19173727.
8
Improving Semantic Segmentation of Urban Scenes for Self-Driving Cars with Synthetic Images.
Sensors (Basel). 2022 Mar 14;22(6):2252. doi: 10.3390/s22062252.
9
The effects of different levels of realism on the training of CNNs with only synthetic images for the semantic segmentation of robotic instruments in a head phantom.
Int J Comput Assist Radiol Surg. 2020 Aug;15(8):1257-1265. doi: 10.1007/s11548-020-02185-0. Epub 2020 May 22.
10
Observer-study-based approaches to quantitatively evaluate the realism of synthetic medical images.
Phys Med Biol. 2023 Mar 21;68(7):074001. doi: 10.1088/1361-6560/acc0ce.

References Cited in This Article

1
Ensemble Transductive Propagation Network for Semi-Supervised Few-Shot Learning.
Entropy (Basel). 2024 Jan 31;26(2):135. doi: 10.3390/e26020135.
2
Underwater image quality assessment method based on color space multi-feature fusion.
Sci Rep. 2023 Oct 6;13(1):16838. doi: 10.1038/s41598-023-44179-3.
3
SWEET: A Realistic Multiwavelength 3D Simulator for Automotive Perceptive Sensors in Foggy Conditions.
J Imaging. 2023 Feb 20;9(2):54. doi: 10.3390/jimaging9020054.
4
Enhancing Photorealism Enhancement.
IEEE Trans Pattern Anal Mach Intell. 2023 Feb;45(2):1700-1715. doi: 10.1109/TPAMI.2022.3166687. Epub 2023 Jan 6.