Surgical Device Innovation Office, National Cancer Center Hospital East, 6-5-1, Kashiwanoha, Kashiwa-City, Chiba, 277-8577, Japan; Department of Hepatobiliary and Pancreatic Surgery, National Cancer Center Hospital East, 6-5-1, Kashiwanoha, Kashiwa-City, Chiba, 277-8577, Japan; Course of Advanced Clinical Research of Cancer, Juntendo University Graduate School of Medicine, 2-1-1, Hongo, Bunkyo-Ward, Tokyo, 113-8421, Japan.
Surgical Device Innovation Office, National Cancer Center Hospital East, 6-5-1, Kashiwanoha, Kashiwa-City, Chiba, 277-8577, Japan.
Int J Surg. 2022 Sep;105:106856. doi: 10.1016/j.ijsu.2022.106856. Epub 2022 Aug 27.
Novel intraoperative computer-assisted surgery (CAS) systems are anticipated to help surgeons perform laparoscopic hepatectomy (LH) accurately and without injury. Automated surgical workflow identification is a key component in developing such CAS systems. This study aimed to develop a deep-learning model for automated surgical step identification in LH.
We constructed a dataset comprising 40 cases of pure LH videos; 30 and 10 cases were used for the training and testing datasets, respectively. Each video was split into static images at 30 frames per second. LH was divided into nine surgical steps (Steps 0-8), and each frame in the training set was annotated as belonging to one of these steps. After extracorporeal actions (Step 0) were excluded from the videos, two deep-learning models for automated surgical step identification, an 8-step model (Model 1) and a 6-step model (Model 2), were developed using a convolutional neural network. Each frame in the testing dataset was then classified in real time using the constructed models.
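The paper's implementation is not published; the following is a minimal sketch of the kind of pipeline the abstract describes (frame extraction at 30 fps plus a CNN frame classifier), assuming OpenCV and PyTorch/torchvision. The backbone choice (ResNet-50), the preprocessing, and all names are illustrative assumptions, not the authors' method.

```python
# Hypothetical sketch of the frame-extraction + CNN step-classification
# pipeline described in the abstract. Library choices (OpenCV, PyTorch)
# and all names are assumptions, not the authors' implementation.
import cv2
import torch
import torch.nn as nn
from torchvision import models, transforms

NUM_STEPS = 8  # Model 1 classifies Steps 1-8 after excluding Step 0

def extract_frames(video_path):
    """Yield every frame of a surgical video as an RGB array (30 fps source)."""
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame_bgr = cap.read()
        if not ok:
            break
        yield cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    cap.release()

# Standard ImageNet-style preprocessing for a CNN backbone.
preprocess = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# The abstract says only "a convolutional neural network"; a pretrained
# ResNet-50 with its final layer replaced for 8 steps is one plausible setup.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, NUM_STEPS)

# Frame-level classification is then an ordinary supervised setup.
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
```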
More than 8 million frames from the pure LH videos were annotated for surgical step identification. The overall accuracy of Model 1 was 0.891, which increased to 0.947 in Model 2. The median and mean per-case accuracies for Model 2 were 0.927 (range, 0.884-0.997) and 0.937 ± 0.04 (standard deviation), respectively. Real-time automated surgical step identification ran at 21 frames per second.
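For the reported real-time inference (21 fps), a per-frame classification loop with throughput measurement might look like the sketch below. It continues the snippet above (model, preprocess, and extract_frames are assumed from there); the video file name is hypothetical, and this is not the authors' code.

```python
# Hypothetical real-time inference loop with throughput measurement;
# model, preprocess, and extract_frames come from the previous sketch.
import time
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device).eval()

n_frames, t0 = 0, time.perf_counter()
with torch.no_grad():
    for frame in extract_frames("test_case.mp4"):  # hypothetical file name
        x = preprocess(frame).unsqueeze(0).to(device)
        step = model(x).argmax(dim=1).item()  # predicted surgical step index
        n_frames += 1

fps = n_frames / (time.perf_counter() - t0)
print(f"Classified {n_frames} frames at {fps:.1f} fps")  # abstract reports 21 fps
```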
We developed a highly accurate deep-learning model for surgical step identification in pure LH. Our model could be applied to intraoperative CAS systems.