
Application and evaluation of surgical tool and tool tip recognition based on Convolutional Neural Network in multiple endoscopic surgical scenarios.

Affiliations

8-Year MD Program, Chinese Academy of Medical Science and Peking Union Medical College, Beijing, China.

Department of General Surgery, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, Beijing, China.

Publication information

Surg Endosc. 2023 Sep;37(9):7376-7384. doi: 10.1007/s00464-023-10323-3. Epub 2023 Aug 14.

Abstract

BACKGROUND

In recent years, computer-assisted intervention and robot-assisted surgery have received increasing attention, and the demand for real-time identification and tracking of surgical tools and tool tips continues to grow. A series of studies focusing on surgical tool tracking and identification have been performed; however, their dataset sizes, sensitivity/precision, and response times were limited. In this work, we developed and applied an automated method based on a Convolutional Neural Network (CNN) and the You Only Look Once (YOLO) v3 algorithm to locate and identify surgical tools and tool tips across five different surgical scenarios.

MATERIALS AND METHODS

An object detection algorithm was applied to identify and locate the surgical tools and tool tips. DarkNet-19 was used as the backbone network, and a modified YOLOv3 was applied for detection. We included a series of 181 endoscopy videos covering 5 different surgical scenarios: pancreatic surgery, thyroid surgery, colon surgery, gastric surgery, and external scenes. A total of 25,333 images containing 94,463 targets were collected. Training and test sets were divided in a proportion of 2.5:1. The datasets are openly stored in the Kaggle database.
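The abstract reports only the 2.5:1 train/test ratio, not per-set image counts. A minimal sketch of how the counts would follow if that ratio were applied to the full 25,333-image pool (the derived numbers are an assumption, not figures from the paper):

```python
# Illustrative derivation of train/test counts from the reported 2.5:1 split.
# Only the ratio and the total of 25,333 images come from the abstract;
# the per-set counts below are inferred, not stated by the authors.
total_images = 25333
train_fraction = 2.5 / (2.5 + 1)          # ~71.4% of images for training

train_count = round(total_images * train_fraction)
test_count = total_images - train_count

print(train_count, test_count)            # e.g. 18095 and 7238
```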

RESULTS

Under an Intersection over Union threshold of 0.5, the overall sensitivity and precision of the model were 93.02% and 89.61% for tool recognition, and 87.05% and 83.57% for tool tip recognition, respectively. The model demonstrated the highest tool and tool tip recognition sensitivity and precision in external scenes. Among the four internal surgical scenes, the network performed better in pancreatic and colon surgeries and worse in gastric and thyroid surgeries.
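The sensitivity and precision figures above presume matching predicted boxes to ground-truth boxes at an IoU threshold of 0.5. As an illustration of those standard definitions (a generic sketch, not the authors' evaluation code), in Python:

```python
def iou(box_a, box_b):
    """Intersection over Union for axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def precision_sensitivity(tp, fp, fn):
    """Precision = TP/(TP+FP); sensitivity (recall) = TP/(TP+FN).
    A detection counts as a TP when its IoU with a ground-truth box >= 0.5."""
    return tp / (tp + fp), tp / (tp + fn)
```

For example, two unit-offset 2x2 boxes `(0, 0, 2, 2)` and `(1, 1, 3, 3)` overlap in a 1x1 square, giving IoU = 1/7, below the 0.5 match threshold.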

CONCLUSION

We developed a surgical tool and tool tip recognition model based on CNN and YOLOv3. Validation of our model demonstrated satisfactory precision, accuracy, and robustness across different surgical scenes.

