University of Kentucky.
The Ohio State University.
IEEE Trans Vis Comput Graph. 2014 Apr;20(4):550-9. doi: 10.1109/TVCG.2014.35.
We present a system that allows the user to virtually try on new clothes. It uses a single commodity depth camera to capture the user in 3D. Both the pose and the shape of the user are estimated with a novel real-time template-based approach that performs tracking and shape adaptation jointly. The result is then used to drive a realistic cloth simulation, in which the synthesized clothes are overlaid on the input image. The main challenge is to handle missing data and pose ambiguities due to the monocular setup, which captures less than 50 percent of the full body. Our solution is to incorporate automatic shape adaptation and novel constraints into pose tracking. The effectiveness of our system is demonstrated with a number of examples.
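The per-frame pipeline the abstract describes can be sketched structurally as follows. This is only an illustrative outline, not the paper's implementation: every function here is a hypothetical stub standing in for the actual components (joint template-based tracking with shape adaptation, the cloth simulator, and the compositing step).

```python
# Structural sketch of the virtual try-on pipeline from the abstract.
# All function bodies are hypothetical stubs; the real system performs
# real-time template-based tracking, shape adaptation, and cloth
# simulation, none of which is implemented here.

def estimate_pose_and_shape(depth_frame, template):
    """Jointly estimate pose and body shape from one depth frame (stub)."""
    pose = {"joint_angles": [0.0] * len(template["joints"])}
    shape = {"scale": 1.0}  # placeholder for adapted body-shape parameters
    return pose, shape

def simulate_cloth(pose, shape, garment):
    """Drive a cloth simulation with the estimated body (stub)."""
    return {"garment": garment, "fitted_scale": shape["scale"]}

def overlay(color_frame, cloth):
    """Composite the synthesized garment over the input image (stub)."""
    return {"image": color_frame, "cloth": cloth}

def virtual_try_on(color_frame, depth_frame, template, garment):
    # One frame of the pipeline: capture -> joint estimation -> sim -> overlay.
    pose, shape = estimate_pose_and_shape(depth_frame, template)
    cloth = simulate_cloth(pose, shape, garment)
    return overlay(color_frame, cloth)

result = virtual_try_on("rgb_frame", "depth_frame",
                        {"joints": ["hip", "knee", "shoulder"]}, "shirt")
```

The sketch only fixes the data flow between stages; the paper's contribution lies inside the first stub, where tracking and shape adaptation are solved jointly to cope with the partial monocular view.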