Johns Hopkins University, Baltimore, MD, USA.
Johns Hopkins Hospital, Baltimore, MD, USA.
Int J Comput Assist Radiol Surg. 2022 May;17(5):921-927. doi: 10.1007/s11548-022-02602-6. Epub 2022 Mar 26.
Mixed reality (MR) for image-guided surgery may enable unobtrusive solutions for precision surgery. To display a preoperative treatment plan at the correct physical position, it must be spatially aligned with the patient intra-operatively. Accurate alignment is safety critical because it guides treatment, yet it cannot always be achieved, for varied reasons. Effective visualization mechanisms that reveal misalignment are therefore crucial to prevent adverse surgical outcomes and ensure safe execution.
We test the effectiveness of three MR visualization paradigms in revealing spatial misalignment: wireframe, silhouette, and heatmap, the last of which encodes residual registration error. We conduct a user study with 12 participants, using an anthropomorphic phantom mimicking total shoulder arthroplasty. Participants wearing a Microsoft HoloLens 2 are presented with 36 randomly ordered spatial (mis)alignments of a virtual glenoid model overlaid on the phantom, each rendered with one of the three methods. In each trial, users choose whether to accept or reject the spatial alignment. Upon completion, participants report the perceived difficulty of using each visualization paradigm.
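The heatmap paradigm colors the virtual model by residual registration error. A minimal sketch of such an encoding, assuming a per-vertex Euclidean residual, a 3 mm saturation threshold, and a blue-to-red color ramp (none of these specifics are stated in the abstract):

```python
import math

def registration_heatmap(model_pts, aligned_pts, max_err_mm=3.0):
    """Color each vertex by its residual registration error.

    A sketch of one possible heatmap encoding, not the paper's exact
    scheme: the 3 mm saturation threshold and the blue-to-red ramp
    are assumptions.
    """
    residuals, colors = [], []
    for p_model, p_aligned in zip(model_pts, aligned_pts):
        # Per-vertex residual: distance between the model vertex and its
        # counterpart after the (possibly inaccurate) spatial alignment.
        r = math.dist(p_model, p_aligned)
        residuals.append(r)
        # Normalize to [0, 1], saturating at max_err_mm, then map to a
        # linear blue (low error) -> red (high error) ramp.
        t = min(r / max_err_mm, 1.0)
        colors.append((t, 0.0, 1.0 - t))
    return residuals, colors
```

A perfectly aligned vertex renders blue, while any residual at or above the threshold renders fully red, making local misalignment directly visible on the model surface.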
Across all visualization paradigms, participants' ability to reliably judge the accuracy of spatial alignment was moderate (58.33%). The three visualization paradigms showed comparable performance; however, the heatmap-based visualization resulted in detectability significantly better than random chance ([Formula: see text]). Despite the heatmap enabling the most accurate decisions according to our measurements, the wireframe was the most liked paradigm (50%), followed by silhouette (41.7%) and heatmap (8.3%).
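The comparison against random chance in a binary accept/reject task can be carried out with an exact one-sided binomial test. A minimal sketch with hypothetical counts (the abstract does not report per-paradigm trial counts; the 144-trial and 84-correct figures below are illustrative assumptions only):

```python
from math import comb

def binom_p_geq(k, n, p=0.5):
    """Exact one-sided binomial tail probability P(X >= k)
    for n independent Bernoulli(p) trials."""
    return sum(comb(n, i) * (p ** i) * ((1 - p) ** (n - i))
               for i in range(k, n + 1))

# Hypothetical illustration (NOT the study's actual data):
# 12 participants x 12 heatmap trials = 144 accept/reject decisions.
# If 84 (58.3%) were correct, the probability of doing at least that
# well by guessing (p = 0.5) is:
p_value = binom_p_geq(84, 144)
```

Under these assumed counts the tail probability falls below the conventional 0.05 threshold, which is the sense in which a paradigm's detectability can be called significantly better than chance.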
Our findings suggest that conventional mixed reality visualization paradigms are not sufficiently effective at enabling users to differentiate between accurate and inaccurate spatial alignment of virtual content with the environment.