College of Electronics and Information Engineering, Sichuan University, Chengdu 610065, China.
Key Laboratory of Wireless Power Transmission of Ministry of Education, Sichuan University, Chengdu 610065, China.
Phys Rev E. 2019 Sep;100(3-1):033308. doi: 10.1103/PhysRevE.100.033308.
Porous media are ubiquitous in both nature and engineering applications, so modeling and understanding them is of vital importance. In contrast to directly acquiring three-dimensional (3D) images of such media, obtaining subregions, such as 2D slices or several small areas, is often feasible. Reconstructing whole images from this limited information is therefore a primary technique in such cases. In practice, however, the given data generally cannot be chosen by the user and may be incomplete or only partially informative, making existing reconstruction methods inaccurate or even ineffective. To overcome this shortcoming, we propose a deep-learning-based framework for reconstructing full images from much smaller subareas. In particular, a conditional generative adversarial network is used to learn the mapping between the input (a partial image) and the output (a full image). To ensure reconstruction accuracy, two simple but effective objective functions are proposed and coupled with two further functions to jointly constrain the training procedure. Because of the inherently ill-posed nature of this problem, Gaussian noise is introduced to produce reconstruction diversity, enabling the network to provide multiple candidate outputs. Our method is extensively tested on a variety of porous materials and validated by both visual inspection and quantitative comparison. It is shown to be accurate, stable, and fast (∼0.08 s to reconstruct a 128×128 image). The proposed approach can be readily extended, for example by incorporating user-defined conditional data and an arbitrary number of objective functions into the reconstruction, and can be coupled with other reconstruction methods.
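The core idea of conditioning the generator on a partial image while injecting Gaussian noise for diversity can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual network: the tiny image sizes, the single random weight matrix `W`, and the function `generate` are all hypothetical stand-ins for a trained conditional GAN generator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (the paper reconstructs 128x128 images; tiny sizes here for clarity).
SUB = 4 * 4      # flattened partial-image (condition) size
FULL = 8 * 8     # flattened full-image size
NOISE = 16       # Gaussian noise dimension, injected for reconstruction diversity

# Hypothetical generator weights; in the actual method these would be learned
# adversarially against a discriminator under the jointly coupled objectives.
W = rng.standard_normal((SUB + NOISE, FULL)) * 0.1

def generate(partial, noise):
    """Map (condition, noise) -> one candidate full image, squashed to [0, 1]."""
    x = np.concatenate([partial.ravel(), noise])
    return (1.0 / (1.0 + np.exp(-x @ W))).reshape(8, 8)

partial = rng.random((4, 4))
# Different noise draws yield different candidate reconstructions for the same
# partial input: the ill-posed problem admits many plausible full images.
candidates = [generate(partial, rng.standard_normal(NOISE)) for _ in range(3)]
```

Sampling the noise vector several times, as in the last line, is what lets a single trained generator return multiple candidate outputs for one partial observation.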