LTSI - UMR 1099, Univ. Rennes, Inserm, 35000, Rennes, France.
Orthopedics and Trauma Department, Rennes University Hospital, 35000, Rennes, France.
Int J Comput Assist Radiol Surg. 2023 Sep;18(9):1697-1705. doi: 10.1007/s11548-023-02961-8. Epub 2023 Jun 7.
Simulation-based training allows surgical skills to be learned safely. Most virtual reality-based surgical simulators assess technical skills without considering non-technical skills, such as the use of gaze. In this study, we investigated surgeons' visual behavior during virtual reality-based surgical training in which visual guidance is provided. Our hypothesis was that the distribution of gaze in the environment is correlated with the simulator's technical skills assessment.
We recorded 25 surgical training sessions on an arthroscopic simulator. Trainees were equipped with a head-mounted eye-tracking device. A U-net was trained on two sessions to segment three simulator-specific areas of interest (AoI) and the background, to quantify gaze distribution. We tested whether the percentage of gazes in those areas was correlated with the simulator's scores.
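The segmentation quality reported below is measured with mean Intersection over Union (IoU). As a minimal sketch (not the authors' code), mean IoU over integer-labeled masks can be computed as follows; the class count and mask shapes are illustrative assumptions:

```python
import numpy as np

def mean_iou(pred: np.ndarray, target: np.ndarray, num_classes: int) -> float:
    """Mean Intersection over Union across classes for integer label masks.

    Classes absent from both masks are skipped so they do not distort the mean.
    """
    ious = []
    for c in range(num_classes):
        p = (pred == c)
        t = (target == c)
        union = np.logical_or(p, t).sum()
        if union == 0:
            continue  # class absent from both prediction and ground truth
        ious.append(np.logical_and(p, t).sum() / union)
    return float(np.mean(ious))
```

A perfect segmentation yields a mean IoU of 1.0; the study reports values above 0.94 per area of interest.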
The neural network was able to segment all AoI with a mean Intersection over Union above 94% for each area. The gaze percentage in the AoI differed among trainees. Despite several sources of data loss, we found significant correlations between gaze position and the simulator scores. For instance, trainees obtained better procedural scores when their gaze focused on the virtual assistance (Spearman correlation test, N = 7, r = 0.800, p = 0.031).
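The reported correlation is a Spearman rank test between per-trainee gaze percentages and simulator scores. As an illustrative sketch (the per-trainee values below are hypothetical, not the study's data), Spearman's r without ties is simply the Pearson correlation of the rank-transformed samples:

```python
import numpy as np

def spearman_r(x, y) -> float:
    """Spearman rank correlation: Pearson correlation of rank-transformed data.

    This minimal version does not handle tied values, which is sufficient
    for illustration with distinct measurements.
    """
    def ranks(v):
        order = np.argsort(v)
        r = np.empty(len(v))
        r[order] = np.arange(1, len(v) + 1)  # ranks 1..n
        return r
    rx, ry = ranks(np.asarray(x, float)), ranks(np.asarray(y, float))
    return float(np.corrcoef(rx, ry)[0, 1])

# Hypothetical values for 7 trainees: % of gaze on the virtual assistance,
# and the procedural score returned by the simulator.
gaze_pct = [12.1, 30.5, 18.2, 45.0, 25.3, 38.7, 50.2]
score    = [55.0, 72.0, 60.0, 85.0, 68.0, 90.0, 80.0]
r = spearman_r(gaze_pct, score)
```

In practice, a library routine such as `scipy.stats.spearmanr` would also provide the p-value used for the significance test.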
Our findings suggest that visual behavior should be quantified when assessing surgical expertise in simulation-based training environments, especially when visual guidance is provided. Ultimately, visual behavior could be used to quantitatively assess surgeons' learning curves and expertise while training on VR simulators, in a way that complements existing metrics.