Alabay Husnu Halid, Le Tuan-Anh, Ceylan Hakan
Department of Physiology and Biomedical Engineering, Mayo Clinic College of Medicine and Science, Scottsdale, AZ, United States.
Max Planck Queensland Centre, Queensland University of Technology, Brisbane, QLD, Australia.
Front Robot AI. 2024 Nov 13;11:1495445. doi: 10.3389/frobt.2024.1495445. eCollection 2024.
In developing medical interventions using untethered milli- and microrobots, ensuring safety and effectiveness relies on robust methods for real-time robot detection, tracking, and precise localization within the body. The inherent non-transparency of human tissues significantly challenges these efforts, as traditional imaging modalities such as fluoroscopy often lack crucial anatomical detail, potentially compromising intervention safety and efficacy. To address this technological gap, in this study, we build a virtual reality environment housing an exact digital replica (digital twin) of the operational workspace together with a robot avatar. We synchronize the virtual and real workspaces and continuously stream robot position data, derived from the image stream, into the digital twin with a short average delay of around 20-25 ms. This allows the operator to steer the robot by tracking its avatar within the digital twin at near real-time temporal resolution. We demonstrate the feasibility of this approach with millirobots steered in confined phantoms. This concept demonstration not only paves the way for improved procedural safety by complementing fluoroscopic guidance with virtual reality enhancement, but also provides a platform for incorporating additional real-time derivative data (e.g., instantaneous robot velocity), intraoperative physiological data obtained from the patient (e.g., blood flow rate), and pre-operative physical simulation models (e.g., periodic body motions) to further refine robot control capacity.
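The core loop described in the abstract, streaming image-derived robot positions into a digital-twin avatar while monitoring end-to-end latency, can be sketched as follows. This is a minimal illustration, not the authors' implementation; the class and function names are hypothetical, and the detection/transport delay is simulated with a sleep.

```python
import time
from collections import deque

class DigitalTwinSync:
    """Illustrative sketch: mirror robot positions (detected from an
    imaging feed) onto a virtual-workspace avatar, while tracking the
    end-to-end delay between image capture and avatar update."""

    def __init__(self, window: int = 100):
        self.avatar_position = None            # latest pose mirrored in the twin
        self.delays_ms = deque(maxlen=window)  # rolling window of latency samples

    def push(self, position, captured_at: float) -> None:
        """Update the avatar with a position whose source frame was
        captured at `captured_at` (seconds, time.monotonic clock)."""
        self.avatar_position = position
        self.delays_ms.append((time.monotonic() - captured_at) * 1000.0)

    def mean_delay_ms(self) -> float:
        """Average capture-to-avatar delay over the rolling window."""
        return sum(self.delays_ms) / len(self.delays_ms) if self.delays_ms else 0.0

# Simulated feed: each frame incurs ~20 ms of detection/transport delay,
# in the range the study reports (20-25 ms average).
twin = DigitalTwinSync()
for i in range(10):
    t_capture = time.monotonic()
    time.sleep(0.02)                 # stand-in for detection + network transport
    twin.push((i * 0.1, 0.0, 0.0), t_capture)

print(f"avatar at {twin.avatar_position}, mean delay {twin.mean_delay_ms():.1f} ms")
```

In a real setup the positions would come from a detector running on the fluoroscopy (or camera) stream and the avatar update would be a message to the VR engine; the rolling delay statistic is what lets the operator trust that the avatar reflects the robot's pose at near real-time resolution.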