Center for Biomedical-photonics and Molecular Imaging, Advanced Diagnostic-Therapy Technology and Equipment Key Laboratory of Higher Education Institutions in Shaanxi Province, School of Life Science and Technology, Xidian University, Xi'an, Shaanxi 710126, People's Republic of China.
Engineering Research Center of Molecular and Neuro Imaging, Ministry of Education & Xi'an Key Laboratory of Intelligent Sensing and Regulation of trans-Scale Life Information, School of Life Science and Technology, Xidian University, Xi'an, Shaanxi 710126, People's Republic of China.
Phys Med Biol. 2024 Mar 21;69(7). doi: 10.1088/1361-6560/ad2ca3.
Objective. The reconstruction of three-dimensional optical imaging, which quantitatively recovers the target distribution from surface measurements, is a severely ill-posed problem. Traditional regularization-based reconstruction can solve such an ill-posed problem to a certain extent, but its accuracy is highly dependent on a priori information, making the method less stable and adaptable. Data-driven, deep-learning-based reconstruction avoids the errors of light propagation models and the reliance on experience and a priori knowledge by learning the mapping between the surface light distribution and the target directly from a dataset. However, acquiring the training dataset and training the network itself are both time-consuming, and the strong dependence of network performance on the training dataset results in low generalization ability. The objective of this work is to develop a highly robust reconstruction framework that addresses these problems.

Approach. This paper proposes a reconstruction framework based on physical-model-constrained neural networks. In this framework, the neural network generates a target distribution from the surface measurements, while the physical model computes the surface light distribution corresponding to that target distribution. The mean square error between the computed surface light distribution and the surface measurements is then used as the loss function to optimize the neural network. To further reduce the dependence on a priori information, a movable region is randomly selected and then traverses the entire solution interval; the target distribution is reconstructed within this movable region, and the result serves as the basis for its next movement.

Main results. The performance of the proposed framework is evaluated with a series of simulations and experiments, covering accuracy and robustness for different target distributions, noise immunity, depth robustness, and spatial resolution. The results collectively demonstrate that the framework can reconstruct targets with high accuracy, stability, and versatility.

Significance. The proposed framework offers high accuracy and robustness as well as good generalizability. Compared with traditional regularization-based reconstruction methods, it eliminates the need to manually delineate feasible regions and tune regularization parameters. Compared with emerging deep-learning-assisted methods, it requires no training dataset, saving considerable time and resources and overcoming the poor generalization and robustness of deep learning methods. Thus, the framework opens up a new perspective for the reconstruction of three-dimensional optical imaging.
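The self-supervised optimization loop described in the Approach (the network proposes a target distribution, the physical model maps it back to the surface, and the mean square error against the measurements drives the update) can be summarized by the minimal sketch below. It assumes a linearized forward model given as a system matrix A (e.g. from a finite-element discretization of the light propagation model) and a small fully connected decoder; the names, network architecture, sizes, and hyperparameters are illustrative placeholders and are not taken from the paper.

```python
# Minimal sketch of the physics-model-constrained, self-supervised
# reconstruction loop. All names and sizes below are illustrative assumptions.
import torch
import torch.nn as nn

n_surface, n_voxels = 512, 4096              # surface measurement nodes / internal voxels
A = torch.rand(n_surface, n_voxels) * 1e-2   # placeholder forward (physical) model matrix
phi_meas = torch.rand(n_surface)             # placeholder surface measurements

class Decoder(nn.Module):
    """Maps surface measurements to an internal target (source) distribution."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_surface, 1024), nn.ReLU(),
            nn.Linear(1024, n_voxels), nn.ReLU(),  # ReLU keeps the distribution non-negative
        )
    def forward(self, phi):
        return self.net(phi)

model = Decoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(2000):
    x_pred = model(phi_meas)             # candidate target distribution
    phi_pred = A @ x_pred                # physical model: target -> surface light distribution
    loss = loss_fn(phi_pred, phi_meas)   # data-fidelity loss; no training dataset required
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

x_recon = model(phi_meas).detach()       # reconstructed target distribution
```

In the same spirit, the movable-region strategy would amount to restricting x_pred to a mask over the current region and updating that mask from the latest reconstruction before the region moves; that detail is omitted from the sketch for brevity.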