Kim Ji Woong, Zhang Peiyao, Gehlbach Peter, Iordachita Iulian, Kobilarov Marin
Department of Mechanical Engineering, Johns Hopkins University.
Wilmer Eye Institute, Johns Hopkins University School of Medicine.
Proc Mach Learn Res. 2021;155:2347-2358.
During retinal microsurgery, precise manipulation of the delicate retinal tissue is required for a positive surgical outcome. However, accurate manipulation and navigation of surgical tools remain difficult due to the constrained workspace and the top-down view during surgery, which limits the surgeon's ability to estimate depth. To alleviate this difficulty, we propose to automate the tool-navigation task by learning to predict a relative goal position on the retinal surface from the current tool-tip position. Given an estimated target on the retina, we generate an optimal trajectory leading to the predicted goal while imposing safety-related physical constraints aimed at minimizing tissue damage. As an extended task, we generate goal predictions to various points across the retina to localize the eye geometry and further generate safe trajectories within the estimated confines. Through experiments in both simulation and with several eye phantoms, we demonstrate that our framework can navigate to various points on the retina within 0.089 mm and 0.118 mm in xy error, which is less than a human surgeon's mean tool-tip tremor of 0.180 mm. All safety constraints were fulfilled, and the algorithm was robust to previously unseen eyes as well as unseen objects in the scene. A live video demonstration is available here: https://youtu.be/n5j5jCCelXk.
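As a rough illustration of the two-stage pipeline the abstract describes, the sketch below pairs a learned goal predictor (relative goal position from the current tool-tip state) with a constrained straight-line trajectory generator. The `GoalPredictor` architecture, the `safe_trajectory` helper, the input shapes, and the step-size and clearance thresholds are all illustrative assumptions, not the authors' implementation or values from the paper.

```python
# Minimal sketch (hypothetical, not the paper's code) of: (1) predicting a relative
# goal position on the retina from an image and the current tool-tip position, and
# (2) generating a trajectory toward that goal under simple safety constraints.
import numpy as np
import torch
import torch.nn as nn


class GoalPredictor(nn.Module):
    """Hypothetical CNN regressing a relative xyz goal offset from a top-down
    microscope image and the current tool-tip position (illustrative only)."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Sequential(
            nn.Linear(32 + 3, 64), nn.ReLU(),
            nn.Linear(64, 3),  # relative goal offset (dx, dy, dz)
        )

    def forward(self, image, tip_pos):
        feat = self.encoder(image)
        return self.head(torch.cat([feat, tip_pos], dim=1))


def safe_trajectory(tip_pos, goal_offset, step_mm=0.05, min_z_mm=0.1):
    """Straight-line waypoints toward the predicted goal with two illustrative
    safety constraints: a bound on per-step displacement and a minimum clearance
    above the retinal surface (here assumed to be the z = 0 plane)."""
    goal = tip_pos + goal_offset
    dist = np.linalg.norm(goal - tip_pos)
    n_steps = max(int(np.ceil(dist / step_mm)), 1)
    waypoints = np.linspace(tip_pos, goal, n_steps + 1)
    waypoints[:, 2] = np.maximum(waypoints[:, 2], min_z_mm)  # stay above the surface
    return waypoints


if __name__ == "__main__":
    model = GoalPredictor().eval()
    image = torch.zeros(1, 3, 128, 128)      # placeholder microscope frame
    tip = torch.tensor([[1.0, -0.5, 2.0]])   # current tool-tip position (mm)
    with torch.no_grad():
        offset = model(image, tip).numpy()[0]
    traj = safe_trajectory(tip.numpy()[0], offset)
    print(traj.shape, traj[0], traj[-1])
```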