Johns Hopkins University, Baltimore, MD, USA.
Johns Hopkins School of Medicine, Baltimore, MD, USA.
Int J Comput Assist Radiol Surg. 2023 Jun;18(6):1017-1024. doi: 10.1007/s11548-023-02888-0. Epub 2023 Apr 20.
Image-guided navigation and surgical robotics are the next frontiers of minimally invasive surgery. Assuring safety in high-stakes clinical environments is critical for their deployment. 2D/3D registration is an essential, enabling algorithm for most of these systems, as it provides spatial alignment of preoperative data with intraoperative images. While these algorithms have been studied widely, there is a need for verification methods to enable human stakeholders to assess and either approve or reject registration results to ensure safe operation.
To address the verification problem from the perspective of human perception, we develop novel visualization paradigms and use a sampling method based on an approximate posterior distribution to simulate registration offsets. We then conduct a user study with 22 participants to investigate how different visualization paradigms (Neutral, Attention-Guiding, Correspondence-Suggesting) affect human performance in evaluating simulated 2D/3D registration results on 12 pelvic fluoroscopy images.
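The abstract does not detail the sampling procedure; a minimal sketch, assuming a Gaussian approximation of the posterior over the six rigid-pose parameters (a common simplification, not stated by the authors), could look like this. All parameter values below are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed 6-DOF rigid offset: 3 rotations (degrees) and 3 translations (mm).
# A zero mean corresponds to the reference (ground-truth) registration.
mean = np.zeros(6)
# Hypothetical standard deviations; the paper's actual posterior differs.
cov = np.diag([1.0, 1.0, 1.0, 2.0, 2.0, 2.0]) ** 2

def sample_offsets(n: int) -> np.ndarray:
    """Draw n simulated registration offsets (rot_xyz_deg, trans_xyz_mm)."""
    return rng.multivariate_normal(mean, cov, size=n)

offsets = sample_offsets(100)
# A scalar magnitude (here, translation norm) can rank offsets for the study.
magnitudes = np.linalg.norm(offsets[:, 3:], axis=1)
```

Each sampled offset would then be applied to the registration pose before rendering, so that participants rate images with controlled, varying misalignment.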
All three visualization paradigms enable users to differentiate offsets of varying magnitude better than random guessing. The novel paradigms outperform the neutral paradigm when an absolute threshold is used to separate acceptable from unacceptable registrations (highest accuracy: Correspondence-Suggesting (65.1%); highest F1 score: Attention-Guiding (65.7%)), as well as when a paradigm-specific threshold is used for the same discrimination (highest accuracy: Attention-Guiding (70.4%); highest F1 score: Correspondence-Suggesting (65.0%)).
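The accuracy and F1 metrics above come from treating each participant judgment as a binary accept/reject decision against a threshold. A small self-contained helper showing how such metrics are computed (standard definitions, not code from the study):

```python
def accuracy_and_f1(y_true, y_pred):
    """Compute accuracy and F1 for binary accept/reject decisions.

    y_true: whether each registration was truly acceptable.
    y_pred: whether the participant judged it acceptable.
    """
    tp = sum(t and p for t, p in zip(y_true, y_pred))
    tn = sum((not t) and (not p) for t, p in zip(y_true, y_pred))
    fp = sum((not t) and p for t, p in zip(y_true, y_pred))
    fn = sum(t and (not p) for t, p in zip(y_true, y_pred))
    accuracy = (tp + tn) / len(y_true)
    denom = 2 * tp + fp + fn
    f1 = 2 * tp / denom if denom else 0.0
    return accuracy, f1
```

For example, `accuracy_and_f1([True, True, False, False], [True, False, True, False])` yields accuracy 0.5 and F1 0.5, since there is one each of true positive, true negative, false positive, and false negative.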
This study demonstrates that visualization paradigms do affect the human-based assessment of 2D/3D registration errors. However, further exploration is needed to understand this effect better and develop more effective methods to assure accuracy. This research serves as a crucial step toward enhanced surgical autonomy and safety assurance in technology-assisted image-guided surgery.