Synthetic Training Data in AI-Driven Quality Inspection: The Significance of Camera, Lighting, and Noise Parameters.

Authors

Schraml Dominik, Notni Gunther

Affiliations

Group for Quality Assurance and Industrial Image Processing, Ilmenau University of Technology, 98639 Ilmenau, Germany.

Steinbeis Qualitätssicherung und Bildverarbeitung GmbH, 98693 Ilmenau, Germany.

Publication Information

Sensors (Basel). 2024 Jan 19;24(2):649. doi: 10.3390/s24020649.

DOI: 10.3390/s24020649
PMID: 38276341
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10820774/
Abstract

Industrial-quality inspections, particularly those leveraging AI, require significant amounts of training data. In fields like injection molding, producing a multitude of defective parts for such data poses environmental and financial challenges. Synthetic training data emerge as a potential solution to address these concerns. Although the creation of realistic synthetic 2D images from 3D models of injection-molded parts involves numerous rendering parameters, the current literature on the generation and application of synthetic data in industrial-quality inspection scarcely addresses the impact of these parameters on AI efficacy. In this study, we delve into some of these key parameters, such as camera position, lighting, and computational noise, to gauge their effect on AI performance. By utilizing Blender software, we procedurally introduced the "flash" defect on a 3D model sourced from a CAD file of an injection-molded part. Subsequently, with Blender's Cycles rendering engine, we produced datasets for each parameter variation. These datasets were then used to train a pre-trained EfficientNet-V2 for the binary classification of the "flash" defect. Our results indicate that while noise is less critical, using a range of noise levels in training can benefit model adaptability and efficiency. Variability in camera positioning and lighting conditions was found to be more significant, enhancing model performance even when real-world conditions mirror the controlled synthetic environment. These findings suggest that incorporating diverse lighting and camera dynamics is beneficial for AI applications, regardless of the consistency in real-world operational settings.
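The parameter variation described above amounts to a form of domain randomization: each rendered image is produced under an independently sampled camera pose, lighting intensity, and noise level. A minimal sketch of such a sampler is shown below; the parameter names and ranges are illustrative assumptions, not values taken from the paper:

```python
import random

# Illustrative parameter ranges (assumptions, not the study's actual values):
# camera pose in degrees, light power in watts, noise as a sensor sigma.
PARAM_RANGES = {
    "camera_azimuth_deg": (0.0, 360.0),
    "camera_elevation_deg": (20.0, 70.0),
    "light_power_w": (10.0, 100.0),
    "noise_sigma": (0.0, 0.05),
}


def sample_render_params(rng: random.Random) -> dict:
    """Draw one randomized rendering configuration.

    Per-image variation in camera pose and lighting is the kind of
    diversity the study found most beneficial; noise is drawn from a
    range rather than fixed, matching the finding that a spread of
    noise levels aids model adaptability.
    """
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in PARAM_RANGES.items()}


def make_dataset_configs(n_images: int, seed: int = 0) -> list[dict]:
    """Generate n_images randomized render configs for a synthetic dataset."""
    rng = random.Random(seed)  # seeded for reproducible dataset generation
    return [sample_render_params(rng) for _ in range(n_images)]


if __name__ == "__main__":
    for cfg in make_dataset_configs(3):
        print(cfg)
```

In an actual pipeline, each sampled config would be applied to the scene (e.g., via Blender's `bpy` API) before invoking the Cycles renderer; the sampler itself is renderer-agnostic.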


Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7682/10820774/3c69a5d39e7d/sensors-24-00649-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7682/10820774/f32334a0739d/sensors-24-00649-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7682/10820774/ce50ebad0592/sensors-24-00649-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7682/10820774/a89e901f2903/sensors-24-00649-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7682/10820774/5721b8339315/sensors-24-00649-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7682/10820774/9863f0daa8e4/sensors-24-00649-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7682/10820774/06d6967a42d0/sensors-24-00649-g0A1.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7682/10820774/9290a560fed0/sensors-24-00649-g0A2.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7682/10820774/9d994cf98248/sensors-24-00649-g0A3.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7682/10820774/f70a64a7b35e/sensors-24-00649-g0A4.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7682/10820774/30f61e0c14e0/sensors-24-00649-g0A5.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7682/10820774/c091009da739/sensors-24-00649-g0A6.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7682/10820774/912d3f5ae484/sensors-24-00649-g0A7.jpg

Similar Articles

1. Synthetic Training Data in AI-Driven Quality Inspection: The Significance of Camera, Lighting, and Noise Parameters.
Sensors (Basel). 2024 Jan 19;24(2):649. doi: 10.3390/s24020649.

2. Render lighting dataset: A collection of rendered images with varied lighting conditions using blender render engines.
Data Brief. 2024 Mar 16;54:110331. doi: 10.1016/j.dib.2024.110331. eCollection 2024 Jun.

3. PASMVS: A perfectly accurate, synthetic, path-traced dataset featuring specular material properties for multi-view stereopsis training and reconstruction applications.
Data Brief. 2020 Aug 24;32:106219. doi: 10.1016/j.dib.2020.106219. eCollection 2020 Oct.

4. Physics-Based Graphics Models in 3D Synthetic Environments as Autonomous Vision-Based Inspection Testbeds.
Sensors (Basel). 2022 Jan 11;22(2):532. doi: 10.3390/s22020532.

5. Bridging the simulation-to-real gap for AI-based needle and target detection in robot-assisted ultrasound-guided interventions.
Eur Radiol Exp. 2023 Jun 19;7(1):30. doi: 10.1186/s41747-023-00344-x.

6. Generating Images with Physics-Based Rendering for an Industrial Object Detection Task: Realism versus Domain Randomization.
Sensors (Basel). 2021 Nov 26;21(23):7901. doi: 10.3390/s21237901.

7. Industry 4.0 In-Line AI Quality Control of Plastic Injection Molded Parts.
Polymers (Basel). 2022 Aug 29;14(17):3551. doi: 10.3390/polym14173551.

8. Leveraging 3D Echocardiograms to Evaluate AI Model Performance in Predicting Cardiac Function on Out-of-Distribution Data.
Pac Symp Biocomput. 2024;29:39-52.

9. 2D medical image synthesis using transformer-based denoising diffusion probabilistic model.
Phys Med Biol. 2023 May 5;68(10):105004. doi: 10.1088/1361-6560/acca5c.

10. BLAINDER-A Blender AI Add-On for Generation of Semantically Labeled Depth-Sensing Data.
Sensors (Basel). 2021 Mar 18;21(6):2144. doi: 10.3390/s21062144.
