Casas Llogari, Mitchell Kenny
School of Computing, Edinburgh Napier University, Edinburgh, United Kingdom.
Front Robot AI. 2019 Jul 23;6:60. doi: 10.3389/frobt.2019.00060. eCollection 2019.
We introduce Intermediated Reality (IR), a framework for intermediated communication that enables collaboration through remote possession of entities (e.g., toys) that come to life in mobile Mediated Reality (MR). As part of a two-way conversation, each person communicates through a toy figurine that is remotely located in front of the other participant. Each person's face is tracked through the front camera of their mobile device, and the tracked pose information is transmitted to the remote participant's device along with synchronized captured voice audio, allowing a turn-based interactive avatar chat session, which we have called . By altering the camera video feed with a reconstructed appearance of the object in a deformed pose, we create the illusion of movement in real-world objects to realize collaborative tele-present augmented reality (AR). In this turn-based interaction, each participant first sees their own captured puppetry message locally through their device's front-facing camera. They then receive a view of their counterpart's captured response locally (in AR), with seamless visual deformation of their local 3D toy seen through their device's rear-facing camera. We detail optimization of the animation transmission and of the switching between devices to minimize latency for coherent, smooth chat interaction. An evaluation of rendering performance and system latency is included. As an additional demonstration of our framework, we generate facial animation frames for 3D-printed stop motion in collaborative mixed reality. This reduces printing costs, since the in-between frames of key poses can be generated digitally with shared remote review.
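To make the described data flow concrete, the Python sketch below illustrates one plausible shape for a turn-based puppetry message (tracked head pose and facial blendshape weights captured from the sender's front camera, bundled with the synchronized voice clip) and for digital in-betweening of key poses as in the stop-motion demonstration. The class and function names (PoseKeyframe, PuppetryMessage, inbetween_frames) and the serialization format are assumptions for illustration only, not the authors' implementation.

```python
# Minimal sketch of the turn-based puppetry exchange described in the abstract.
# This is NOT the authors' implementation; the names and wire format here are
# illustrative assumptions about the kind of data involved: per-frame face/head
# tracking from the sender's front camera, bundled with synchronized voice
# audio, plus digital in-betweening of key poses for 3D-printed stop motion.

import json
from dataclasses import dataclass, asdict
from typing import List


@dataclass
class PoseKeyframe:
    t: float                  # seconds since the start of the turn (aligns with audio)
    position: List[float]     # head translation [x, y, z] in the sender's camera space
    rotation: List[float]     # head orientation as a quaternion [x, y, z, w]
    blendshapes: List[float]  # facial expression weights (e.g., jaw open, smile)


@dataclass
class PuppetryMessage:
    sender_id: str
    turn_index: int               # turns strictly alternate between the two peers
    keyframes: List[PoseKeyframe]
    audio_sample_rate: int
    audio_pcm: bytes              # synchronized voice recording for this turn

    def serialize(self) -> bytes:
        """Pack the pose keyframes as length-prefixed JSON, followed by raw audio."""
        header = json.dumps({
            "sender_id": self.sender_id,
            "turn_index": self.turn_index,
            "audio_sample_rate": self.audio_sample_rate,
            "keyframes": [asdict(k) for k in self.keyframes],
        }).encode("utf-8")
        return len(header).to_bytes(4, "big") + header + self.audio_pcm


def inbetween_frames(a: PoseKeyframe, b: PoseKeyframe, count: int) -> List[PoseKeyframe]:
    """Generate digital in-between frames between two key poses, so that for
    3D-printed stop motion only the key poses need to be physically printed."""
    def lerp(u: List[float], v: List[float], s: float) -> List[float]:
        return [(1.0 - s) * x + s * y for x, y in zip(u, v)]

    frames = []
    for i in range(1, count + 1):
        s = i / (count + 1)
        frames.append(PoseKeyframe(
            t=(1.0 - s) * a.t + s * b.t,
            position=lerp(a.position, b.position, s),
            rotation=lerp(a.rotation, b.rotation, s),  # naive; a real system would slerp quaternions
            blendshapes=lerp(a.blendshapes, b.blendshapes, s),
        ))
    return frames
```

Keeping the pose track as a small, length-prefixed header separable from the audio payload is one way a system of this kind could keep per-turn transmission and device-switching latency low, in the spirit of the optimization the abstract mentions; the actual encoding used by the authors may differ.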