Department of Engineering, University of Cambridge, Cambridge, UK.
Dyson School of Design Engineering, Imperial College London, London, UK.
Sci Rep. 2022 Jul 22;12(1):12592. doi: 10.1038/s41598-022-16643-z.
Real-time visual feedback on the consequences of actions is useful for future safety-critical human-robot interaction applications such as remote physical examination of patients. Among the many formats available for presenting visual feedback, the use of the human face as a feedback channel for mediating human-robot interaction in remote examination remains understudied. Here we describe a face-mediated human-robot interaction approach for remote palpation. It builds upon a robodoctor-robopatient platform in which a user palpates the robopatient to remotely control the robodoctor and diagnose a patient. A tactile sensor array mounted on the end effector of the robodoctor measures the haptic response of the patient under diagnosis and transfers it to the robopatient, which renders pain facial expressions in response to the palpation forces. We compare this approach against a direct presentation of tactile sensor data in a visual tactile map. As feedback, the former has the advantage of recruiting advanced human capabilities for decoding expressions on a human face, whereas the latter has the advantage of presenting details such as the intensity and spatial distribution of palpation. In a user study, we compare these two approaches in a teleoperated palpation task in which participants locate a hard nodule embedded in a remote abdominal phantom. We show that the face-mediated human-robot interaction approach leads to statistically significant improvements in localizing the hard nodule without compromising nodule position estimation time. These results highlight the inherent power of facial expressions as communicative signals to enhance the utility and effectiveness of human-robot interaction in remote medical examinations.
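The abstract does not specify how measured palpation forces are translated into rendered pain expressions. The following minimal Python sketch illustrates one plausible scheme under stated assumptions: the tactile array geometry, the force range, and the linear peak-force-to-intensity mapping are all hypothetical choices made for demonstration, not the authors' method.

```python
import numpy as np

# Illustrative sketch only: the grid size, force range, and linear mapping
# below are assumptions for demonstration, not the paper's implementation.
SENSOR_ROWS, SENSOR_COLS = 4, 4    # assumed tactile sensor array geometry
FORCE_MIN, FORCE_MAX = 0.0, 10.0   # assumed palpation force range, in newtons


def pain_expression_intensity(tactile_frame: np.ndarray) -> float:
    """Map one frame of tactile readings (N) to a pain intensity in [0, 1].

    Uses the peak cell force, clipped and linearly normalized. A real system
    might instead use contact area, loading rate, or a learned model to drive
    the robopatient's facial expression.
    """
    peak = float(tactile_frame.max())
    normalized = (peak - FORCE_MIN) / (FORCE_MAX - FORCE_MIN)
    return float(np.clip(normalized, 0.0, 1.0))


if __name__ == "__main__":
    # Simulated frame: a firm press concentrated on one cell of the array.
    frame = np.zeros((SENSOR_ROWS, SENSOR_COLS))
    frame[0, 1] = 7.5
    print(f"pain intensity: {pain_expression_intensity(frame):.2f}")  # ~0.75
```

In such a scheme, the scalar intensity would then select or blend among the robopatient's pain expression poses, while the tactile-map baseline would instead display the raw per-cell forces directly.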