Automated Surgical Instrument Detection from Laparoscopic Gastrectomy Video Images Using an Open Source Convolutional Neural Network Platform.

Affiliations

Division of Gastrointestinal Surgery, Department of Surgery, Kobe University Graduate School of Medicine, Kobe.

Publication Information

J Am Coll Surg. 2020 May;230(5):725-732.e1. doi: 10.1016/j.jamcollsurg.2020.01.037. Epub 2020 Mar 7.

Abstract

BACKGROUND

The widespread use of laparoscopic intervention produces large amounts of video data that are difficult for surgeons to review when evaluating and improving their skills. A need therefore exists for computer-based analysis of laparoscopic video to accelerate surgical training and assessment. We developed a surgical instrument detection system for video recordings of laparoscopic gastrectomy procedures. The system is built on the open source neural network platform YOLOv3 and could make the video review process more efficient.
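The abstract names only the platform, not the implementation, but as a rough illustration of per-frame instrument detection, a Darknet-trained YOLOv3 model can be run on individual video frames through OpenCV's DNN module. The following is a minimal sketch under assumed file names and an assumed instrument class list; none of these artifacts are released with the paper, and this is not the authors' code.

```python
import cv2
import numpy as np

# Placeholder files and classes (assumptions, not published with the paper).
CFG, WEIGHTS = "yolov3_instruments.cfg", "yolov3_instruments.weights"
CLASSES = ["grasper", "dissector", "energy_device", "stapler", "clip_applier"]

net = cv2.dnn.readNetFromDarknet(CFG, WEIGHTS)
output_layers = net.getUnconnectedOutLayersNames()

def detect_instruments(frame, conf_threshold=0.5, nms_threshold=0.4):
    """Return [(class_name, confidence, (x, y, w, h)), ...] for one video frame."""
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(output_layers)

    boxes, confidences, class_ids = [], [], []
    for output in outputs:
        for det in output:
            scores = det[5:]
            class_id = int(np.argmax(scores))
            confidence = float(scores[class_id])
            if confidence < conf_threshold:
                continue
            # YOLO outputs normalized center/size; convert to pixel top-left box.
            cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
            boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
            confidences.append(confidence)
            class_ids.append(class_id)

    keep = cv2.dnn.NMSBoxes(boxes, confidences, conf_threshold, nms_threshold)
    return [(CLASSES[class_ids[i]], confidences[i], tuple(boxes[i]))
            for i in np.array(keep).flatten()]
```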

STUDY DESIGN

A total of 10,716 images extracted from 52 laparoscopic gastrectomy videos were included in the training and validation data sets. We performed 200,000 iterations of training. Video recordings of 10 laparoscopic gastrectomies, independent of the training and validation data sets, were analyzed by our system, and heat maps visualizing trends of surgical instrument usage were drawn. Three skilled surgeons evaluated whether each heat map represented the features of the corresponding operation.
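One plausible way to produce the usage heat maps described above, assuming a per-frame detection function like the one sketched earlier, is to split each video into fixed time bins and record, for every instrument class, the fraction of frames per bin in which it is detected. The function name, bin size, and plotting choices below are illustrative assumptions, not the authors' method.

```python
import cv2
import numpy as np
import matplotlib.pyplot as plt

def usage_heatmap(video_path, classes, detect_fn, bin_seconds=60):
    """Build an instruments-by-time grid: each cell is the fraction of frames in
    a time bin in which the instrument was detected. `detect_fn(frame)` must
    return (class_name, confidence, box) tuples, e.g. the sketch above."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0   # fall back if FPS metadata is missing
    hits, totals = {}, {}                      # (class, bin) -> count; bin -> frames
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        t_bin = int(frame_idx / fps) // bin_seconds
        totals[t_bin] = totals.get(t_bin, 0) + 1
        for name, _, _ in detect_fn(frame):
            hits[(name, t_bin)] = hits.get((name, t_bin), 0) + 1
        frame_idx += 1
    cap.release()

    grid = np.zeros((len(classes), max(totals) + 1))
    for (name, t_bin), count in hits.items():
        grid[classes.index(name), t_bin] = count / totals[t_bin]

    plt.imshow(grid, aspect="auto", cmap="hot")
    plt.yticks(range(len(classes)), classes)
    plt.xlabel(f"operative time ({bin_seconds} s bins)")
    plt.ylabel("instrument")
    plt.colorbar(label="detection rate")
    plt.tight_layout()
    plt.show()
    return grid
```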

RESULTS

After training, precision and sensitivity (recall) on the testing data set were 0.87 and 0.83, respectively. The heat maps perfectly represented the devices used during each operation. Without reviewing the video recordings, the surgeons accurately recognized, from the heat maps, the type of anastomosis, the time taken to initiate duodenal and gastric dissection, and whether any irregular procedure was performed (correct answer rates ≥ 90%).
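For reference, the reported metrics are simple ratios over detection counts: precision = TP / (TP + FP) and sensitivity (recall) = TP / (TP + FN). The counts in the snippet below are invented purely so the ratios land near the reported 0.87 and 0.83; the abstract does not give raw true-positive or false-positive counts.

```python
def precision_recall(tp, fp, fn):
    """Precision = TP / (TP + FP); recall (sensitivity) = TP / (TP + FN)."""
    return tp / (tp + fp), tp / (tp + fn)

# Illustrative counts only (not taken from the paper), chosen so the ratios
# come out near the reported precision of 0.87 and recall of 0.83.
print(precision_recall(tp=830, fp=124, fn=170))  # -> (0.870..., 0.830)
```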

CONCLUSIONS

A new automated system to detect manipulation of surgical instruments in video recordings of laparoscopic gastrectomies based on the open source neural network platform, YOLOv3, was developed and validated successfully.
