
Automated surgical workflow identification by artificial intelligence in laparoscopic hepatectomy: Experimental research.

Author Affiliations

Surgical Device Innovation Office, National Cancer Center Hospital East, 6-5-1, Kashiwanoha, Kashiwa-City, Chiba, 277-8577, Japan; Department of Hepatobiliary and Pancreatic Surgery, National Cancer Center Hospital East, 6-5-1, Kashiwanoha, Kashiwa-City, Chiba, 277-8577, Japan; Course of Advanced Clinical Research of Cancer, Juntendo University Graduate School of Medicine, 2-1-1, Hongo, Bunkyo-Ward, Tokyo, 113-8421, Japan.

Surgical Device Innovation Office, National Cancer Center Hospital East, 6-5-1, Kashiwanoha, Kashiwa-City, Chiba, 277-8577, Japan.

Publication Information

Int J Surg. 2022 Sep;105:106856. doi: 10.1016/j.ijsu.2022.106856. Epub 2022 Aug 27.

Abstract

BACKGROUND

To perform laparoscopic hepatectomy (LH) accurately and without injury, novel intraoperative computer-assisted surgery (CAS) systems for LH are needed. Automated surgical workflow identification is a key component of such CAS systems. This study aimed to develop a deep-learning model for automated surgical step identification in LH.

MATERIALS AND METHODS

We constructed a dataset of 40 pure LH videos; 30 cases were used for training and 10 for testing. Each video was split into static images at 30 frames per second. LH was divided into nine surgical steps (Steps 0-8), and each frame in the training set was annotated as belonging to one of these steps. After extracorporeal actions (Step 0) were excluded from the videos, two deep-learning models for automated surgical step identification, an 8-step model (Model 1) and a 6-step model (Model 2), were developed using a convolutional neural network. Each frame in the testing dataset was then classified in real time using the constructed models.

RESULTS

More than 8 million frames from the pure LH videos were annotated for surgical step identification. The overall accuracy of Model 1 was 0.891, which increased to 0.947 in Model 2. In Model 2, the median and mean per-case accuracies were 0.927 (range, 0.884-0.997) and 0.937 ± 0.04 (standard deviation), respectively. Real-time automated surgical step identification ran at 21 frames per second.
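The per-case summary statistics reported above (median with range, mean ± standard deviation) are standard-library computations. The accuracy values below are hypothetical placeholders for the 10 test cases, not the study's actual per-case results:

```python
from statistics import mean, median, pstdev

# Hypothetical per-case accuracies for the 10 test cases (illustrative only)
case_accuracies = [0.884, 0.901, 0.915, 0.924, 0.930,
                   0.941, 0.950, 0.962, 0.975, 0.997]

med = median(case_accuracies)                       # middle of the sorted values
rng = (min(case_accuracies), max(case_accuracies))  # reported as "range"
avg = mean(case_accuracies)
sd = pstdev(case_accuracies)                        # reported as mean ± SD

print(f"median {med:.3f} (range, {rng[0]:.3f}-{rng[1]:.3f}); "
      f"mean {avg:.3f} ± {sd:.3f}")
```

With only 10 test cases, reporting the median with the range alongside the mean ± SD, as the abstract does, guards against a single outlying case skewing the headline number.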

CONCLUSIONS

We developed a highly accurate deep-learning model for surgical step identification in pure LH. Our model could be applied to intraoperative CAS systems.

