Department of Aeronautics and Astronautics, National Cheng Kung University, Tainan 701, Taiwan.
Sensors (Basel). 2021 Apr 7;21(8):2587. doi: 10.3390/s21082587.
Facial recognition has attracted increasing attention with the rapid growth of artificial intelligence (AI) techniques in recent years. However, most related work on facial reconstruction and recognition is based on big-data collection and image deep-learning algorithms. These data-driven AI approaches inevitably increase the CPU computational load and usually rely heavily on GPU capacity. A typical limitation of RGB-based facial recognition is its poor applicability in low-light or dark environments. To address this problem, this paper presents an effective procedure for facial reconstruction and recognition using a depth sensor. For each testing candidate, the depth camera acquires multi-view 3D point clouds. The point cloud sets are stitched into a reconstructed 3D model using the iterative closest point (ICP) algorithm. A segmentation procedure then separates the model into a body part and a head part. From the segmented 3D face point clouds, facial features are extracted for recognition scoring. Given a single shot from the depth sensor, the point cloud data is registered against the stored 3D face models to determine the best-matching candidate. Using the proposed feature-based 3D facial similarity score, which combines normal, curvature, and registration similarities between point clouds, a person can be labeled correctly even in a dark environment. The proposed method is suitable for smart devices such as smartphones and tablets equipped with a compact depth camera. Experiments with real-world data show that the proposed method reconstructs denser models and achieves point-cloud-based 3D face recognition.
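To make the scoring idea concrete, the following Python sketch combines three similarity terms (registration fitness from ICP, normal agreement, and curvature agreement) into a single weighted score. It uses the open-source Open3D library as an assumed tool; the weights, voxel size, ICP distance threshold, and curvature proxy are illustrative assumptions and are not taken from the paper.

```python
# Minimal sketch of a feature-based 3D face similarity score (assumptions:
# Open3D as the point-cloud toolkit; weights and thresholds are illustrative).
import numpy as np
import open3d as o3d


def estimate_curvature(pcd, knn=30):
    """Approximate per-point curvature as PCA surface variation
    (smallest eigenvalue / eigenvalue sum) over knn neighbors."""
    pts = np.asarray(pcd.points)
    tree = o3d.geometry.KDTreeFlann(pcd)
    curv = np.empty(len(pts))
    for i, p in enumerate(pts):
        _, idx, _ = tree.search_knn_vector_3d(p, knn)
        cov = np.cov(pts[np.asarray(idx)].T)
        eig = np.linalg.eigvalsh(cov)                 # ascending order
        curv[i] = eig[0] / max(eig.sum(), 1e-12)      # in [0, 1/3]
    return curv


def similarity_score(probe, gallery, voxel=2.0, icp_dist=5.0,
                     weights=(0.4, 0.3, 0.3)):
    """Score a single-shot probe cloud against a reconstructed gallery face
    by combining registration, normal, and curvature similarities."""
    probe = probe.voxel_down_sample(voxel)
    gallery = gallery.voxel_down_sample(voxel)
    for pc in (probe, gallery):
        pc.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=3 * voxel, max_nn=30))

    # 1) Registration similarity: ICP fitness (inlier ratio) after alignment.
    reg = o3d.pipelines.registration.registration_icp(
        probe, gallery, icp_dist, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPlane())
    probe.transform(reg.transformation)
    s_reg = reg.fitness

    # 2) Normal similarity: mean |cos| angle between corresponding normals.
    tree = o3d.geometry.KDTreeFlann(gallery)
    g_n = np.asarray(gallery.normals)
    p_n = np.asarray(probe.normals)
    p_pts = np.asarray(probe.points)
    nn_idx = [tree.search_knn_vector_3d(p, 1)[1][0] for p in p_pts]
    s_norm = float(np.mean([abs(np.dot(p_n[j], g_n[k]))
                            for j, k in enumerate(nn_idx)]))

    # 3) Curvature similarity: 1 - normalized mean curvature difference.
    c_p = estimate_curvature(probe)
    c_g = estimate_curvature(gallery)
    mean_diff = float(np.mean([abs(c_p[j] - c_g[k])
                               for j, k in enumerate(nn_idx)]))
    s_curv = 1.0 - mean_diff / (1.0 / 3.0)

    w_reg, w_norm, w_curv = weights
    return w_reg * s_reg + w_norm * s_norm + w_curv * s_curv
```

In use, the probe cloud would be scored against every enrolled 3D face model and assigned to the identity with the highest score; since the computation relies only on depth geometry, it is unaffected by scene illumination.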