Funke Isabel, Rivoir Dominik, Krell Stefanie, Speidel Stefanie
IEEE Trans Biomed Eng. 2025 Jul;72(7):2105-2119. doi: 10.1109/TBME.2025.3535228.
To enable context-aware computer assistance in the operating room of the future, cognitive systems need to understand automatically which surgical phase is being performed by the medical team. The primary source of information for surgical phase recognition is typically video, which presents two challenges: extracting meaningful features from the video stream and effectively modeling temporal information in the sequence of visual features.
For temporal modeling, attention mechanisms have gained popularity due to their ability to capture long-range dependencies. In this paper, we explore design choices for attention in existing temporal models for surgical phase recognition and propose a novel approach that uses attention more effectively and does not require hand-crafted constraints: TUNeS, an efficient and simple temporal model that incorporates self-attention at the core of a convolutional U-Net structure. In addition, we propose to train the feature extractor, a standard CNN, together with an LSTM on video segments that are as long as possible, i.e., with long temporal context.
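The architecture described above can be sketched as follows. This is a minimal illustrative example, not the authors' implementation: a 1D convolutional U-Net over a sequence of per-frame visual features, with self-attention applied only at the coarsest (bottleneck) resolution, where the sequence is short enough for global attention to be cheap. All layer sizes and names here are assumptions for illustration.

```python
# Hypothetical sketch of a TUNeS-like temporal model (not the paper's code):
# a convolutional U-Net over frame features with self-attention at its core.
import torch
import torch.nn as nn


class TUNeSSketch(nn.Module):
    def __init__(self, feat_dim=64, hidden=32, n_phases=7, n_heads=4):
        super().__init__()
        # Encoder: two strided convolutions each halve the temporal resolution.
        self.enc1 = nn.Conv1d(feat_dim, hidden, kernel_size=3, stride=2, padding=1)
        self.enc2 = nn.Conv1d(hidden, hidden, kernel_size=3, stride=2, padding=1)
        # Self-attention at the bottleneck models long-range dependencies
        # on the 4x-shortened sequence.
        self.attn = nn.MultiheadAttention(hidden, n_heads, batch_first=True)
        # Decoder: transposed convolutions restore full temporal resolution;
        # skip connections reuse the encoder features.
        self.dec2 = nn.ConvTranspose1d(hidden, hidden, kernel_size=4, stride=2, padding=1)
        self.dec1 = nn.ConvTranspose1d(hidden, hidden, kernel_size=4, stride=2, padding=1)
        self.head = nn.Conv1d(hidden, n_phases, kernel_size=1)

    def forward(self, x):            # x: (batch, time, feat_dim)
        x = x.transpose(1, 2)        # -> (batch, feat_dim, time)
        e1 = torch.relu(self.enc1(x))
        e2 = torch.relu(self.enc2(e1))
        a = e2.transpose(1, 2)       # -> (batch, time/4, hidden)
        a, _ = self.attn(a, a, a)    # global self-attention at the bottleneck
        a = a.transpose(1, 2) + e2   # residual connection
        d2 = torch.relu(self.dec2(a)) + e1  # upsample + skip connection
        d1 = torch.relu(self.dec1(d2))
        return self.head(d1).transpose(1, 2)  # per-frame phase logits

model = TUNeSSketch()
feats = torch.randn(1, 64, 64)   # 64 frames of 64-dim visual features
logits = model(feats)
print(logits.shape)              # torch.Size([1, 64, 7])
```

Placing attention only at the bottleneck keeps cost quadratic in the downsampled length, while the convolutional encoder/decoder handles local temporal structure at full resolution.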
In our experiments, almost all temporal models performed better on top of feature extractors that were trained with longer temporal context. On these contextualized features, TUNeS achieves state-of-the-art results on the Cholec80 dataset.
This study offers new insights into how attention mechanisms can be used to build accurate and efficient temporal models for surgical phase recognition.
Implementing automatic surgical phase recognition is essential to automate the analysis and optimization of surgical workflows and to enable context-aware computer assistance during surgery, thus ultimately improving patient care.