Li Xu, Shen Yihao, Meng Qifu, Xing Mingyi, Zhang Qiushuang, Yang Hualin
College of Electromechanical Engineering, Qingdao University of Science and Technology, Qingdao 266061, China.
Hexagon Manufacturing Intelligence Technology (Qingdao) Co., Qingdao 266101, China.
Sensors (Basel). 2025 Mar 1;25(5):1532. doi: 10.3390/s25051532.
A persistent drawback of fringe projection profilometry (FPP) is that efficient and accurate high-resolution absolute phase recovery from a single measurement remains a challenge. This paper proposes a single-model, self-recovering absolute phase recovery method for fringe projection based on deep learning. The proposed Fringe Prediction Self-Recovering network converts a single camera-acquired fringe image into four single-mode self-recovering fringe images. A self-recovering algorithm then obtains the wrapped phases and fringe orders, realizing high-resolution absolute phase recovery from a single shot. A constructed virtual measurement system enables low-cost, efficient dataset preparation. In experiments across multiple scenarios with different lighting conditions, the fringe prediction network showed good robustness and generalization in both virtual and physical measurement systems. In the real physical measurement system, the mean absolute error (MAE) of the recovered absolute phase was kept within 0.015 rad, and the root-mean-square error (RMSE) of the reconstructed point-cloud fitting was 0.02 mm. Experiments verified that the proposed method achieves efficient and accurate absolute phase recovery under complex ambient lighting conditions. Unlike existing methods, the proposed method processes high-resolution fringe images directly, without the assistance of additional modes. Combining deep learning with the self-recovering algorithm simplifies the otherwise complex process of phase retrieval and phase unwrapping, making the method simpler and more efficient and providing a reference for fast, lightweight, online FPP inspection.
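The abstract summarizes the pipeline without equations, so the following is a minimal NumPy sketch of the conventional steps it refers to: four-step phase-shift wrapped-phase retrieval and fringe-order-based unwrapping, plus the phase-MAE metric reported above. The four input images stand in for the network's four predicted single-mode self-recovering fringes; the paper's self-recovering algorithm for estimating the fringe-order map is not described in the abstract, so the integer map k is taken as a given input here. All function names are illustrative, not from the paper.

```python
import numpy as np

def wrapped_phase(I0, I1, I2, I3):
    """Standard four-step phase shifting, I_n = A + B*cos(phi + n*pi/2).

    With these shifts, I3 - I1 = 2B*sin(phi) and I0 - I2 = 2B*cos(phi),
    so the wrapped phase in (-pi, pi] is recovered by atan2.
    """
    return np.arctan2(I3 - I1, I0 - I2)

def absolute_phase(phi_wrapped, k):
    """Unwrap with a per-pixel integer fringe-order map k: Phi = phi + 2*pi*k.

    In the paper, k comes from the self-recovering algorithm; here it is
    assumed already known.
    """
    return phi_wrapped + 2.0 * np.pi * k

def phase_mae(phi_recovered, phi_reference):
    """Mean absolute error of the recovered absolute phase, in radians
    (the abstract reports 0.015 rad for the physical system)."""
    return np.mean(np.abs(phi_recovered - phi_reference))
```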