AI-Based Video Segmentation: Procedural Steps or Basic Maneuvers?

Affiliations

Stanford School of Medicine, Department of Surgery, Stanford, California.

Publication Information

J Surg Res. 2023 Mar;283:500-506. doi: 10.1016/j.jss.2022.10.069. Epub 2022 Nov 24.

Abstract

INTRODUCTION

Video-based review of surgical procedures has proven useful in training by enabling efficient qualitative assessment of surgical skill and intraoperative decision-making. Current video segmentation protocols focus largely on procedural steps. Although some operations are more complex than others, many of the steps in any given procedure involve an intricate choreography of basic maneuvers such as suturing, knot tying, and cutting. The use of these maneuvers at certain procedural steps can convey information that aids in assessing procedural complexity, surgical preference, and skill. Our study aims to develop and evaluate an algorithm to identify these maneuvers.

METHODS

A standard deep learning architecture was used to differentiate between suture throws, knot ties, and suture cutting on a data set comprising videos from practicing clinicians (N = 52) who participated in a simulated enterotomy repair. The perceived added value over traditional artificial intelligence segmentation was explored by qualitatively examining the utility of identifying maneuvers in a subset of steps of an open colon resection.
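
The abstract mentions only a "standard deep learning architecture" without naming it. As a rough illustration, a three-class maneuver classifier could be sketched as follows, assuming a PyTorch pipeline in which a pretrained 2D CNN backbone embeds each frame of a clip and the frame embeddings are mean-pooled before a linear classification head; the class names, backbone choice, and clip dimensions are illustrative assumptions, not details from the study.

# Minimal, hypothetical sketch of a three-class surgical maneuver classifier.
# Assumes PyTorch/torchvision; backbone, pooling strategy, and input sizes are
# illustrative and not taken from the paper.
import torch
import torch.nn as nn
from torchvision.models import resnet18, ResNet18_Weights

MANEUVERS = ["suture_throw", "knot_tie", "suture_cut"]  # three target classes

class ManeuverClassifier(nn.Module):
    def __init__(self, num_classes: int = len(MANEUVERS)):
        super().__init__()
        backbone = resnet18(weights=ResNet18_Weights.DEFAULT)
        backbone.fc = nn.Identity()      # keep 512-d per-frame embeddings
        self.backbone = backbone
        self.head = nn.Linear(512, num_classes)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (batch, frames, 3, H, W) -> per-frame features -> mean pool
        b, t, c, h, w = clip.shape
        feats = self.backbone(clip.view(b * t, c, h, w)).view(b, t, -1)
        return self.head(feats.mean(dim=1))  # (batch, num_classes) logits

if __name__ == "__main__":
    model = ManeuverClassifier()
    dummy_clip = torch.randn(2, 16, 3, 224, 224)  # 2 clips of 16 frames each
    print(model(dummy_clip).shape)                # torch.Size([2, 3])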

RESULTS

An accuracy of 84% was reached in differentiating maneuvers. The precision in detecting the basic maneuvers was 87.9%, 60%, and 90.9% for suture throws, knot ties, and suture cutting, respectively. The qualitative concept mapping confirmed realistic scenarios that could benefit from basic maneuver identification.
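
For reference, per-class precision as reported above is TP / (TP + FP) computed separately for each maneuver class, while accuracy is the fraction of all clips labeled correctly. A short sketch of how such metrics are typically computed with scikit-learn follows; the label arrays are hypothetical stand-ins, not study data.

# Sketch of accuracy and per-class precision (TP / (TP + FP)) with
# scikit-learn; the labels below are made-up examples, not study data.
from sklearn.metrics import accuracy_score, precision_score

CLASSES = ["suture_throw", "knot_tie", "suture_cut"]

y_true = ["suture_throw", "knot_tie", "suture_cut", "suture_throw", "knot_tie"]
y_pred = ["suture_throw", "suture_throw", "suture_cut", "suture_throw", "knot_tie"]

print("accuracy:", accuracy_score(y_true, y_pred))
# One precision value per class, in the order given by `labels`
print("precision per class:",
      precision_score(y_true, y_pred, labels=CLASSES, average=None))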

CONCLUSIONS

Basic maneuvers can indicate error management activity or safety measures and allow for the assessment of skill. Our deep learning algorithm identified basic maneuvers with reasonable accuracy. Such models can aid in artificial intelligence-assisted video review by providing additional information that can complement traditional video segmentation protocols.

Similar Articles

1. AI-Based Video Segmentation: Procedural Steps or Basic Maneuvers? J Surg Res. 2023 Mar;283:500-506. doi: 10.1016/j.jss.2022.10.069. Epub 2022 Nov 24.
2. Developing artificial intelligence models for medical student suturing and knot-tying video-based assessment and coaching. Surg Endosc. 2023 Jan;37(1):402-411. doi: 10.1007/s00464-022-09509-y. Epub 2022 Aug 18.
3. Evaluation of Deep Learning Models for Identifying Surgical Actions and Measuring Performance. JAMA Netw Open. 2020 Mar 2;3(3):e201664. doi: 10.1001/jamanetworkopen.2020.1664.
4. Video and accelerometer-based motion analysis for automated surgical skills assessment. Int J Comput Assist Radiol Surg. 2018 Mar;13(3):443-455. doi: 10.1007/s11548-018-1704-z. Epub 2018 Jan 29.
5. Sensor-based machine learning for workflow detection and as key to detect expert level in laparoscopic suturing and knot-tying. Surg Endosc. 2019 Nov;33(11):3732-3740. doi: 10.1007/s00464-019-06667-4. Epub 2019 Feb 21.
7. Video self-assessment of basic suturing and knot tying skills by novice trainees. J Surg Educ. 2013 Mar-Apr;70(2):279-83. doi: 10.1016/j.jsurg.2012.10.003.
8. Intracorporal knot tying techniques - which is the right one? J Pediatr Surg. 2017 Apr;52(4):633-638. doi: 10.1016/j.jpedsurg.2016.11.049. Epub 2016 Dec 20.
9. Analysis of the Structure of Surgical Activity for a Suturing and Knot-Tying Task. PLoS One. 2016 Mar 7;11(3):e0149174. doi: 10.1371/journal.pone.0149174. eCollection 2016.
