Phan Johan, Sarmad Muhammad, Ruspini Leonardo, Kiss Gabriel, Lindseth Frank
Department of Computer Science, Norwegian University of Science and Technology, Trondheim, Norway.
Petricore Norway, Trondheim, Norway.
Sci Rep. 2024 Mar 18;14(1):6498. doi: 10.1038/s41598-024-56910-9.
Three-dimensional (3D) images provide a comprehensive view of material microstructures, enabling numerical simulations unachievable with two-dimensional (2D) imaging alone. However, obtaining these 3D images can be costly and constrained by resolution limitations. We introduce a novel method capable of generating large-scale 3D images of material microstructures, such as metal or rock, from a single 2D image. Our approach circumvents the need for 3D image data while offering a cost-effective, high-resolution alternative to existing imaging techniques. Our method combines a denoising diffusion probabilistic model with a generative adversarial network framework. To compensate for the lack of 3D training data, we implement chain sampling, a technique that utilizes the 3D intermediate outputs obtained by reversing the diffusion process. During the training phase, these intermediate outputs are guided by a 2D discriminator. This technique enables our method to gradually generate 3D images that accurately capture the geometric properties and statistical characteristics of the original 2D input. This study features a comparative analysis of the 3D images generated by our method and by SliceGAN (the current state-of-the-art method) against actual 3D micro-CT images, spanning a diverse set of rock and metal types. The results show an improvement of up to a factor of three in the Fréchet inception distance score, a standard metric for evaluating the performance of image generative models, and enhanced accuracy in derived properties compared to SliceGAN. The potential of our method to produce high-resolution and statistically representative 3D images paves the way for new applications in material characterization and analysis.
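To make the mechanism described in the abstract concrete, the following is a minimal, illustrative PyTorch sketch of the general idea: intermediate 3D volumes produced while reversing the diffusion process are sliced into 2D images and judged by a 2D discriminator trained against the single 2D input. Every architecture, noise schedule, tensor shape, and loss choice below is an assumption made for illustration, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): a 3D denoiser trained with only a
# 2D discriminator by slicing intermediate volumes from the reverse diffusion chain.
import torch
import torch.nn as nn
import torch.nn.functional as F

T = 100                                   # number of diffusion steps (assumed)
betas = torch.linspace(1e-4, 0.02, T)     # linear noise schedule (assumed)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

class Denoiser3D(nn.Module):
    """Tiny 3D network that predicts the noise added at step t."""
    def __init__(self, ch=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, ch, 3, padding=1), nn.SiLU(),
            nn.Conv3d(ch, ch, 3, padding=1), nn.SiLU(),
            nn.Conv3d(ch, 1, 3, padding=1),
        )
    def forward(self, x, t):
        # Crude timestep conditioning for illustration: add t/T as a global bias.
        return self.net(x + t.view(-1, 1, 1, 1, 1) / T)

class Discriminator2D(nn.Module):
    """2D discriminator that judges individual 32x32 slices of the 3D volume."""
    def __init__(self, ch=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(ch, ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Flatten(), nn.Linear(ch * 8 * 8, 1),
        )
    def forward(self, x):
        return self.net(x)

def reverse_step(model, x, t):
    """One standard DDPM reverse-diffusion step x_t -> x_{t-1}."""
    eps = model(x, torch.full((x.size(0),), t))
    coef = betas[t] / torch.sqrt(1.0 - alpha_bars[t])
    mean = (x - coef * eps) / torch.sqrt(alphas[t])
    noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
    return mean + torch.sqrt(betas[t]) * noise

def random_slices(volume, n=4):
    """Extract random axial 2D slices from a (B, 1, D, H, W) volume."""
    idx = torch.randint(0, volume.size(2), (n,))
    return torch.cat([volume[:, :, i] for i in idx], dim=0)  # (B*n, 1, H, W)

# --- one illustrative adversarial training step using the reverse chain ---
G, D = Denoiser3D(), Discriminator2D()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

real_2d = torch.rand(8, 1, 32, 32)        # placeholder for patches of the single 2D image
x = torch.randn(2, 1, 32, 32, 32)         # start the reverse chain from pure noise
for t in reversed(range(T - 5, T)):       # a short chain of reverse steps
    x = reverse_step(G, x, t)             # x is now an intermediate 3D output
fake_2d = random_slices(x)                # 2D slices of the intermediate volume

# Discriminator update: real 2D patches vs. generated slices.
d_loss = F.softplus(-D(real_2d)).mean() + F.softplus(D(fake_2d.detach())).mean()
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator (denoiser) update: make slices of the 3D chain look like the 2D data.
g_loss = F.softplus(-D(fake_2d)).mean()
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The point this sketch is meant to convey is that gradients from the 2D discriminator flow back through the chain of reverse-diffusion steps into the 3D denoiser, so no 3D ground-truth volumes are required during training.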