
An analysis on the effect of body tissues and surgical tools on workflow recognition in first person surgical videos.

Affiliations

Graduate School of Science and Technology, Keio University, Yokohama, 2238522, Japan.

Institute of Systems and Information Engineering, University of Tsukuba, Tsukuba, 3058573, Japan.

Publication information

Int J Comput Assist Radiol Surg. 2024 Nov;19(11):2195-2202. doi: 10.1007/s11548-024-03074-6. Epub 2024 Feb 27.

DOI: 10.1007/s11548-024-03074-6
PMID: 38411780
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11541397/
Abstract

PURPOSE

Analysis of operative fields is expected to aid in estimating procedural workflow and evaluating surgeons' procedural skills by considering the temporal transitions during the progression of the surgery. This study aims to propose an automatic recognition system for the procedural workflow by employing machine learning techniques to identify and distinguish elements in the operative field, including body tissues such as fat, muscle, and dermis, along with surgical tools.

METHODS

We conducted annotations on approximately 908 first-person-view images of breast surgery to facilitate segmentation. The annotated images were used to train a pixel-level classifier based on Mask R-CNN. To assess the impact on procedural workflow recognition, we annotated an additional 43,007 images. The network, structured on the Transformer architecture, was then trained with surgical images incorporating masks for body tissues and surgical tools.
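The abstract does not include code, but the second stage it describes (a Transformer network classifying the workflow step over time) can be sketched minimally. The sketch below is an assumption-laden illustration, not the authors' implementation: the feature dimension, number of workflow steps, and layer counts are invented, and the per-frame feature vectors stand in for whatever image and mask features the paper's network actually consumes.

```python
import torch
import torch.nn as nn

class WorkflowRecognizer(nn.Module):
    """Hypothetical sketch: classify each frame's workflow step from a
    sequence of per-frame feature vectors (e.g. CNN features combined
    with tissue/tool mask statistics) using a Transformer encoder."""

    def __init__(self, feat_dim: int = 256, num_steps: int = 6,
                 layers: int = 2, heads: int = 4):
        super().__init__()
        enc_layer = nn.TransformerEncoderLayer(
            d_model=feat_dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=layers)
        self.head = nn.Linear(feat_dim, num_steps)  # per-frame step logits

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, feat_dim) -> (batch, time, num_steps)
        return self.head(self.encoder(x))

model = WorkflowRecognizer()
logits = model(torch.rand(2, 16, 256))  # 2 clips of 16 frames each
print(tuple(logits.shape))  # (2, 16, 6)
```

Self-attention across the time axis is what lets such a model exploit the temporal transitions of tissue areas that the paper highlights, rather than classifying each frame in isolation.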

RESULTS

The instance segmentation of each body tissue in the segmentation phase provided insights into the trend of area transitions for each tissue. Simultaneously, the spatial features of the surgical tools were effectively captured. In regard to the accuracy of procedural workflow recognition, accounting for body tissues led to an average improvement of 3% over the baseline. Furthermore, the inclusion of surgical tools yielded an additional increase in accuracy by 4% compared to the baseline.
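The "trend of area transitions for each tissue" can be computed directly from per-frame segmentation output. As a minimal sketch (the label encoding and random label maps below are assumptions for illustration), the per-class pixel fraction over time gives exactly such a trend curve:

```python
import numpy as np

# Hypothetical per-frame label maps (T x H x W); assumed encoding:
# 0=background, 1=fat, 2=muscle, 3=dermis, 4=surgical tool.
rng = np.random.default_rng(0)
frames = rng.integers(0, 5, size=(10, 64, 64))

def area_fractions(label_maps: np.ndarray, num_classes: int = 5) -> np.ndarray:
    """Fraction of pixels occupied by each class per frame -> (T, num_classes)."""
    t, h, w = label_maps.shape
    counts = np.stack(
        [(label_maps == c).sum(axis=(1, 2)) for c in range(num_classes)],
        axis=1)
    return counts / (h * w)

trend = area_fractions(frames)
print(trend.shape)                          # (10, 5)
print(bool(np.allclose(trend.sum(axis=1), 1.0)))  # True: fractions sum to 1
```

Plotting each column of `trend` against frame index yields the tissue-area transition curves; feeding such statistics alongside image features is one plausible way mask information contributes the reported accuracy gains.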

CONCLUSION

In this study, we revealed the contribution of the temporal transitions of body tissues and the spatial features of surgical tools to recognizing procedural workflow in first-person-view surgical videos. Body tissues, especially in open surgery, can be a crucial element. This study suggests that further improvements can be achieved by accurately identifying surgical tools specific to each procedural workflow step.


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9f73/11541397/ef9003dbfd19/11548_2024_3074_Fig1_HTML.jpg
Figure 2: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9f73/11541397/f1ec4ac8502b/11548_2024_3074_Fig2_HTML.jpg
Figure 3: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9f73/11541397/3c50136d25a9/11548_2024_3074_Fig3_HTML.jpg
Figure 4: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9f73/11541397/f7288d6a40dd/11548_2024_3074_Fig4_HTML.jpg

Similar articles

1. An analysis on the effect of body tissues and surgical tools on workflow recognition in first person surgical videos.
Int J Comput Assist Radiol Surg. 2024 Nov;19(11):2195-2202. doi: 10.1007/s11548-024-03074-6. Epub 2024 Feb 27.
2. Surgical workflow recognition with temporal convolution and transformer for action segmentation.
Int J Comput Assist Radiol Surg. 2023 Apr;18(4):785-794. doi: 10.1007/s11548-022-02811-z. Epub 2022 Dec 21.
3. Automated laparoscopic colorectal surgery workflow recognition using artificial intelligence: Experimental research.
Int J Surg. 2020 Jul;79:88-94. doi: 10.1016/j.ijsu.2020.05.015. Epub 2020 May 12.
4. Automated operative workflow analysis of endoscopic pituitary surgery using machine learning: development and preclinical evaluation (IDEAL stage 0).
J Neurosurg. 2021 Nov 5;137(1):51-58. doi: 10.3171/2021.6.JNS21923. Print 2022 Jul 1.
5. A microdiscectomy surgical video annotation framework for supervised machine learning applications.
Int J Comput Assist Radiol Surg. 2024 Oct;19(10):1947-1952. doi: 10.1007/s11548-024-03203-1. Epub 2024 Jul 19.
6. Development and Validation of a Model for Laparoscopic Colorectal Surgical Instrument Recognition Using Convolutional Neural Network-Based Instance Segmentation and Videos of Laparoscopic Procedures.
JAMA Netw Open. 2022 Aug 1;5(8):e2226265. doi: 10.1001/jamanetworkopen.2022.26265.
7. LRTD: long-range temporal dependency based active learning for surgical workflow recognition.
Int J Comput Assist Radiol Surg. 2020 Sep;15(9):1573-1584. doi: 10.1007/s11548-020-02198-9. Epub 2020 Jun 25.
8. Cataract-1K Dataset for Deep-Learning-Assisted Analysis of Cataract Surgery Videos.
Sci Data. 2024 Apr 12;11(1):373. doi: 10.1038/s41597-024-03193-4.
9. A methodology for the annotation of surgical videos for supervised machine learning applications.
Int J Comput Assist Radiol Surg. 2023 Sep;18(9):1673-1678. doi: 10.1007/s11548-023-02923-0. Epub 2023 May 28.
10. MIcro-surgical anastomose workflow recognition challenge report.
Comput Methods Programs Biomed. 2021 Nov;212:106452. doi: 10.1016/j.cmpb.2021.106452. Epub 2021 Oct 10.
