Haptic Engineering Research Laboratory, Department of Information and Telecommunication Engineering, Incheon National University, Incheon, Korea.
3D Information Processing Laboratory, Korea University, Seoul, Korea.
Skin Res Technol. 2019 Jul;25(4):469-481. doi: 10.1111/srt.12675. Epub 2019 Jan 9.
Haptic skin palpation combined with three-dimensional (3D) skin surface reconstruction from in vivo skin images, which provides both tactile and visual information, has been receiving much attention. However, estimating the depth of the skin surface with a light field camera, which captures multiple images through a micro-lens array, is difficult because the low resolution of the decoded images leads to erroneous disparity matching.
The multiple low-resolution images decoded from a light field camera limit the accuracy of the 3D surface reconstruction needed for haptic palpation. To overcome this, a deep learning method, generative adversarial networks (GAN), was employed to generate super-resolved skin images that preserve surface detail without blurring. Accurate skin depth was then estimated through a sequence of steps: lens distortion correction, sub-pixel-shifted image generation using the phase shift theorem, cost-volume construction, multi-label optimization, and hole filling with refinement. Together, these steps constitute a new approach to 3D skin surface reconstruction.
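As a rough illustration of two of the steps above, the sketch below shows how a sub-pixel image shift can be obtained with the Fourier phase shift theorem and how a simple matching cost volume can be assembled from shifted views. The function names (subpixel_shift, build_cost_volume) and the absolute-difference matching cost are assumptions chosen for illustration, not the authors' actual implementation.

```python
# Minimal sketch, not the paper's code: sub-pixel shifting via the phase shift
# theorem and a simple absolute-difference cost volume over candidate disparities.
import numpy as np

def subpixel_shift(image, dy, dx):
    """Shift a 2D grayscale image by (dy, dx) pixels; fractional shifts allowed.

    A spatial shift corresponds to multiplying the image spectrum by a
    linear phase ramp (phase shift theorem).
    """
    h, w = image.shape
    fy = np.fft.fftfreq(h).reshape(-1, 1)   # vertical spatial frequencies
    fx = np.fft.fftfreq(w).reshape(1, -1)   # horizontal spatial frequencies
    phase_ramp = np.exp(-2j * np.pi * (fy * dy + fx * dx))
    shifted = np.fft.ifft2(np.fft.fft2(image) * phase_ramp)
    return np.real(shifted)

def build_cost_volume(reference, neighbour, disparities):
    """Per-pixel matching cost between a reference view and a neighbouring
    view shifted by each candidate disparity (assumed horizontal here)."""
    costs = []
    for d in disparities:
        warped = subpixel_shift(neighbour, 0.0, d)
        costs.append(np.abs(reference - warped))
    return np.stack(costs, axis=0)           # shape: (num_disparities, H, W)
```

In such a pipeline, the per-pixel minimum over the disparity axis would give only a coarse, noisy depth estimate; the multi-label optimization and hole filling/refinement stages mentioned above are what turn it into a usable depth map.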
Experimental results demonstrated that, unlike other super-resolution methods, the deep-learning-based super-resolution method preserves the textural detail (wrinkles) of the super-resolved skin images well. In addition, the depth maps computed with the proposed algorithm are more accurate and robust than those produced by other state-of-the-art depth map computation methods.
Herein, we propose, for the first time, depth map estimation of skin surfaces using a light field camera and evaluate it on several skin images. The experimental results establish the superiority of the proposed scheme.