Self-knowledge distillation for surgical phase recognition.

Affiliations

Medtronic Digital Surgery, 230 City Road, London, UK.

Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK.

Publication information

Int J Comput Assist Radiol Surg. 2024 Jan;19(1):61-68. doi: 10.1007/s11548-023-02970-7. Epub 2023 Jun 20.

Abstract

PURPOSE

Advances in surgical phase recognition are generally driven by training ever deeper networks. Rather than pursuing a more complex solution, we believe that current models can be exploited better. We propose a self-knowledge distillation framework that can be integrated into current state-of-the-art (SOTA) models without adding any complexity to the models or requiring extra annotations.

METHODS

Knowledge distillation is a framework for network regularization in which knowledge is distilled from a teacher network to a student network. In self-knowledge distillation, the student model becomes its own teacher, so the network learns from itself. Most phase recognition models follow an encoder-decoder framework, and ours applies self-knowledge distillation in both stages: the teacher model guides the training of the student model to extract enhanced feature representations from the encoder and to build a more robust temporal decoder that tackles the over-segmentation problem. A minimal sketch of such an objective is given below.
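
To make the idea concrete, here is a minimal PyTorch-style sketch of a self-knowledge distillation objective of the kind the abstract describes. The function names, loss weights (alpha, beta), temperature tau, and the exponential-moving-average teacher update are illustrative assumptions, not the paper's exact formulation.

    import torch
    import torch.nn.functional as F

    def self_distillation_loss(student_logits, teacher_logits,
                               student_feats, teacher_feats,
                               labels, tau=2.0, alpha=0.5, beta=0.1):
        # Supervised loss on the ground-truth phase labels.
        ce = F.cross_entropy(student_logits, labels)
        # Soft-label distillation: match the teacher's softened
        # phase distribution at temperature tau.
        kd = F.kl_div(
            F.log_softmax(student_logits / tau, dim=-1),
            F.softmax(teacher_logits.detach() / tau, dim=-1),
            reduction="batchmean",
        ) * tau ** 2
        # Feature-level distillation at the encoder stage: pull the
        # student's representations toward the teacher's.
        feat = F.mse_loss(student_feats, teacher_feats.detach())
        # Illustrative weighting of the three terms, not the paper's.
        return ce + alpha * kd + beta * feat

    @torch.no_grad()
    def update_teacher(teacher, student, momentum=0.999):
        # One common way to obtain the "self" teacher: an exponential
        # moving average of the student's own weights (an assumption
        # here; the abstract does not specify the teacher update).
        for t, s in zip(teacher.parameters(), student.parameters()):
            t.mul_(momentum).add_(s, alpha=1.0 - momentum)

Because the teacher is just a delayed copy of the student, this adds no model parameters and needs no extra annotations, matching the constraint stated in the purpose; the same soft-label term can be applied to the temporal decoder's frame-wise logits to smooth predictions and reduce over-segmentation.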

RESULTS

We validate our proposed framework on the public Cholec80 dataset. Embedded on top of four popular SOTA approaches, our framework consistently improves their performance. In particular, our best GRU model improves accuracy by [Formula: see text] and F1-score by [Formula: see text] over the same baseline model.

CONCLUSION

We embed a self-knowledge distillation framework in the surgical phase recognition training pipeline for the first time. Experimental results demonstrate that our simple yet powerful framework can improve the performance of existing phase recognition models. Moreover, our extensive experiments show that, even with 75% of the training set, we still achieve performance on par with the same baseline model trained on the full set.
