Dehghani Shervin, Sommersperger Michael, Zhang Peiyao, Martin-Gomez Alejandro, Busam Benjamin, Gehlbach Peter, Navab Nassir, Nasseri M Ali, Iordachita Iulian
Department of Computer Science, Technische Universität München, 85748 München, Germany.
Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA.
IEEE Int Conf Robot Autom. 2023 May-Jun;2023:4724-4731. doi: 10.1109/icra48891.2023.10160372. Epub 2023 Jul 4.
In the last decade, various robotic platforms have been introduced that can support delicate retinal surgeries. Concurrently, to provide semantic understanding of the surgical area, recent advances have enabled microscope-integrated intraoperative Optical Coherence Tomography (iOCT) with high-resolution 3D imaging at near video rate. The combination of robotics and semantic understanding enables task autonomy in robotic retinal surgery, such as for subretinal injection. This procedure requires precise needle insertion for the best treatment outcomes. However, merging robotic systems with iOCT introduces new challenges. These include, but are not limited to, high demands on data processing rates and dynamic registration of these systems during the procedure. In this work, we propose a framework for autonomous robotic navigation for subretinal injection, based on intelligent real-time processing of iOCT volumes. Our method consists of an instrument pose estimation method, an online registration between the robotic and the iOCT system, and trajectory planning tailored for navigation to an injection target. We also introduce a volume slicing approach for rapid instrument pose estimation, enabled by Convolutional Neural Networks (CNNs). Our experiments on ex-vivo porcine eyes demonstrate the precision and repeatability of the method. Finally, we discuss identified challenges in this work and suggest potential solutions to further the development of such systems.
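To make the slicing-plus-CNN idea concrete, the following is a minimal, illustrative sketch only, not the authors' implementation: it extracts 2D slices from an iOCT volume along one lateral axis, scores each slice with a small (untrained, hypothetical) CNN that outputs a tip-likelihood heatmap, and returns the voxel index of the strongest response as a crude tip localization. The network architecture, function names, and axis convention are all assumptions made for this example.

```python
# Illustrative sketch of CNN-based tip localization via volume slicing.
# All names and the network are hypothetical, not the paper's method.
import numpy as np
import torch
import torch.nn as nn


class TipHeatmapNet(nn.Module):
    """Tiny CNN mapping a 2D slice to a tip-likelihood heatmap (assumed model)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 1, 1),
        )

    def forward(self, x):
        return self.net(x)


def slice_volume(volume: np.ndarray, axis: int = 1) -> np.ndarray:
    """Return all 2D slices of a (Z, X, Y) iOCT volume along one lateral axis."""
    return np.moveaxis(volume, axis, 0)


def estimate_tip(volume: np.ndarray, model: nn.Module):
    """Pick the slice with the strongest tip response and return its 3D index."""
    slices = slice_volume(volume)                      # (N, H, W)
    x = torch.from_numpy(slices).float().unsqueeze(1)  # (N, 1, H, W)
    with torch.no_grad():
        heatmaps = model(x).squeeze(1)                 # (N, H, W)
    flat = heatmaps.reshape(heatmaps.shape[0], -1)
    best_slice = int(flat.max(dim=1).values.argmax())  # slice with highest peak
    peak = int(flat[best_slice].argmax())
    row, col = divmod(peak, heatmaps.shape[2])
    return best_slice, row, col                        # slice index + in-plane peak


if __name__ == "__main__":
    vol = np.random.rand(64, 128, 128).astype(np.float32)  # dummy iOCT volume
    print(estimate_tip(vol, TipHeatmapNet()))
```

In a real pipeline, the per-slice tip location would then be mapped into the robot frame through the online robot-to-iOCT registration and fed to the trajectory planner; that step is omitted here.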