Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China.
University of Chinese Academy of Sciences, Beijing, China.
Int J Comput Assist Radiol Surg. 2021 May;16(5):839-848. doi: 10.1007/s11548-021-02382-5. Epub 2021 May 5.
Automatic workflow recognition from surgical videos is fundamental for developing context-aware systems in modern operating rooms. Although many approaches have been proposed for this complex task, challenges remain, such as the fine-grained characteristics of surgical activities and the spatial-temporal discrepancies in surgical videos.
We propose a contrastive learning-based convolutional recurrent network with multi-level prediction to address these problems. Specifically, split-attention blocks are employed to extract spatial features. Through a mapping function in the step-phase branch, the current workflow can be predicted at two mutually boosting levels. Furthermore, a contrastive branch is introduced to learn spatial-temporal features that are invariant to irrelevant changes in the environment.
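Two of the ideas above can be sketched in a few lines. The following is a minimal NumPy illustration, not the authors' implementation: the step-to-phase label mapping is hypothetical (the paper does not list it here), and the contrastive branch is approximated by a standard InfoNCE-style loss that pulls together two views of the same clip while pushing apart views of different clips.

```python
# Minimal sketch of two ideas from the abstract (NumPy only).
# Assumptions: the step/phase names below are hypothetical, and the
# contrastive objective is approximated by a generic InfoNCE loss.
import numpy as np

# Hypothetical mapping from fine-grained steps to coarse phases: one
# step-level prediction head can then supply both workflow levels.
STEP_TO_PHASE = {
    "incision": "preparation",
    "viscoelastic_injection": "preparation",
    "capsulorhexis": "lens_removal",
    "phacoemulsification": "lens_removal",
    "lens_implantation": "implantation",
}
STEPS = list(STEP_TO_PHASE)

def phase_from_step_logits(step_logits):
    """Map a step-level prediction to its phase via the mapping function."""
    return STEP_TO_PHASE[STEPS[int(np.argmax(step_logits))]]

def info_nce(z1, z2, tau=0.1):
    """InfoNCE loss for paired views: z1[i] and z2[i], each of shape (N, D),
    are embeddings of two augmentations of the same clip (positives); all
    other pairs in the batch serve as negatives."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau                       # (N, N) cosine similarities
    sim = sim - sim.max(axis=1, keepdims=True)  # numerical stability
    log_p = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_p))             # positives on the diagonal
```

Correctly aligned view pairs should yield a much lower loss than mismatched pairs, which is what drives the branch to ignore nuisance changes in the environment.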
We evaluate our method on the Cataract-101 dataset. The results show that our method achieves an accuracy of 96.37% using only surgical step labels, outperforming other state-of-the-art approaches.
The proposed convolutional recurrent network based on step-phase prediction and contrastive learning can leverage fine-grained characteristics and alleviate spatial-temporal discrepancies to improve the performance of surgical workflow recognition.