Vijayan R C, Han R, Wu P, Sheth N M, Ketcha M D, Vagdargi P, Vogt S, Kleinszig G, Osgood G M, Siewerdsen J H, Uneri A
Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD.
Department of Computer Science, Johns Hopkins University, Baltimore MD.
Proc SPIE Int Soc Opt Eng. 2020 Feb;11315. doi: 10.1117/12.2549713. Epub 2020 Mar 16.
We report the initial development of an image-based solution for robotic assistance of pelvic fracture fixation. The approach uses intraoperative radiographs, preoperative CT, and an end effector of known design to align the robot with target trajectories in CT. The method extends previous work to solve the robot-to-patient registration from a single radiographic view (without C-arm rotation) and addresses the workflow challenges associated with integrating robotic assistance in orthopaedic trauma surgery in a form that could be broadly applicable to isocentric or non-isocentric C-arms.
The proposed method uses 3D-2D known-component registration to localize a robot end effector with respect to the patient by: (1) exploiting the extended size and complex features of pelvic anatomy to register the patient; and (2) capturing multiple end effector poses using precise robotic manipulation. These transformations, along with an offline hand-eye calibration of the end effector, are used to calculate target robot poses that align the end effector with planned trajectories in the patient CT. Geometric accuracy of the registrations was independently evaluated for the patient and the robot in phantom studies.
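The core computation is a composition of rigid transforms: the 3D-2D patient registration, the known-component registration of the end effector, and the robot kinematics (via the offline hand-eye calibration) together map a trajectory planned in the patient CT into the robot base frame. The sketch below is a minimal illustration of that chain under assumed frame names (T_carm_ct, T_carm_ee, T_base_ee, T_ct_traj are hypothetical identifiers, not the authors' implementation), and it folds the hand-eye calibration into the robot-reported end-effector pose for simplicity.

```python
import numpy as np

def invert(T):
    """Invert a 4x4 rigid-body (homogeneous) transform."""
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti

def target_robot_pose(T_carm_ct, T_carm_ee, T_base_ee, T_ct_traj):
    """Map a trajectory planned in CT into the robot base frame.

    Frame convention: T_A_B maps points expressed in frame B into frame A.
      T_carm_ct -- patient registration (CT -> C-arm), from 3D-2D registration
      T_carm_ee -- end-effector registration (end effector -> C-arm),
                   from known-component registration of the end effector
      T_base_ee -- robot-reported end-effector pose (end effector -> robot base),
                   i.e., forward kinematics combined with the offline
                   hand-eye calibration (illustrative assumption)
      T_ct_traj -- planned trajectory pose in the CT frame
    """
    # Link the robot base to the C-arm frame via the end effector, which is
    # observed in both frames at the time of registration.
    T_base_carm = T_base_ee @ invert(T_carm_ee)
    # Chain: trajectory -> CT -> C-arm -> robot base.
    return T_base_carm @ T_carm_ct @ T_ct_traj
```

In practice the abstract describes capturing multiple end effector poses by precise robotic manipulation, so the link between the robot base and the C-arm would be estimated from several such observations rather than the single-pose composition shown here.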
The translational difference between ground truth and the patient registration of a pelvis phantom was 1.3 mm using a single (AP) view, compared to 0.4 mm using dual (AP+Lat) views. Registration of the robot in air (i.e., with no background anatomy) using five unique end effector poses achieved a mean translational difference of ~1.4 mm for K-wire placement in the pelvis, comparable to tracker-based margins of error (commonly ~2 mm).
The proposed approach is feasible based on the accuracy of the patient and robot registrations and is a preliminary step toward an image-guided robotic guidance system that more naturally fits the workflow of fluoroscopically guided orthopaedic trauma surgery. Future work will involve end-to-end development of the proposed guidance system and assessment of the system in delivering K-wires in cadaver studies.