SurgAI3.8K: A Labeled Dataset of Gynecologic Organs in Laparoscopy with Application to Automatic Augmented Reality Surgical Guidance.

Affiliations

Surgical Oncology Department, Centre Jean Perrin (Dr. Zadeh), Clermont-Ferrand, France; EnCoV, Institut Pascal, UMR CNRS/Université Clermont-Auvergne (Drs. Zadeh, François, Canis, Bourdel, Bartoli), Clermont-Ferrand, France.

EnCoV, Institut Pascal, UMR CNRS/Université Clermont-Auvergne (Drs. Zadeh, François, Canis, Bourdel, Bartoli), Clermont-Ferrand, France.

Publication Information

J Minim Invasive Gynecol. 2023 May;30(5):397-405. doi: 10.1016/j.jmig.2023.01.012. Epub 2023 Jan 28.

Abstract

STUDY OBJECTIVE

We focus on explaining the concepts underlying artificial intelligence (AI), using Uteraug, a laparoscopic surgery guidance application based on Augmented Reality (AR), to provide concrete examples. AI can be used to automatically interpret surgical images. We are specifically interested in the tasks of uterus segmentation and uterus contouring in laparoscopic images. A major difficulty with AI methods is their requirement for a massive amount of annotated data. We propose SurgAI3.8K, the first gynaecological dataset with annotated anatomy. We study the impact of AI on automating key steps of Uteraug.
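The two tasks differ in their output representation: segmentation produces a per-pixel mask, whereas contouring produces an ordered boundary curve. The minimal sketch below illustrates this distinction on a synthetic mask; it is not the paper's pipeline, and the disc-shaped mask, image size, and use of scikit-image are illustrative assumptions.

```python
# Illustrative sketch only: how a binary segmentation mask and an organ
# contour relate as data structures. The synthetic disc below stands in
# for a network prediction; it is NOT the paper's pipeline.
import numpy as np
from skimage import measure

# Hypothetical 256x256 binary mask: 1 = uterus pixels, 0 = background.
mask = np.zeros((256, 256), dtype=np.uint8)
rr, cc = np.ogrid[:256, :256]
mask[(rr - 128) ** 2 + (cc - 128) ** 2 < 60 ** 2] = 1  # a disc as a stand-in organ

# Segmentation output: per-pixel labels (the mask itself).
uterus_area_px = int(mask.sum())

# Contouring output: an ordered polyline along the organ boundary.
contours = measure.find_contours(mask, level=0.5)
boundary = contours[0]  # (N, 2) array of (row, col) points

print(f"segmented pixels: {uterus_area_px}, contour points: {len(boundary)}")
```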

DESIGN

We constructed the SurgAI3.8K dataset with 3800 images extracted from 79 laparoscopy videos. We created the following annotations: the uterus segmentation, the uterus contours, and the regions of the left and right fallopian tube junctions. We divided our dataset into a training set and a test set. Our engineers trained a neural network on the training set. We then compared the performance of the neural network with that of the experts on the test set. In particular, we established the relationship between training set size and performance by creating size-performance graphs.
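As a rough illustration of this size-performance protocol, the sketch below scores a placeholder predictor, trained on nested subsets of growing size, against a fixed test split using a Dice-style overlap measure. The synthetic data, the train_segmenter stub, and the choice of Dice are assumptions for illustration only; they are not the authors' network, data, or evaluation code.

```python
# A minimal sketch of a size-performance experiment, assuming a Dice-style
# overlap score and a placeholder model (hypothetical stand-ins, not the
# authors' method).
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice overlap between two binary masks (1.0 = perfect)."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + 1e-8)

def train_segmenter(train_imgs, train_masks):
    """Hypothetical stand-in: a real system would train a CNN here."""
    mean_mask = np.mean(train_masks, axis=0)
    return lambda img: (mean_mask > 0.5).astype(np.uint8)

# Synthetic data standing in for the 3800 annotated frames of SurgAI3.8K.
rng = np.random.default_rng(0)
images = rng.random((3800, 64, 64))
masks = (images > 0.5).astype(np.uint8)

# Fixed held-out test split; nested training subsets of growing size.
test_imgs, test_masks = images[3000:], masks[3000:]
for n in (100, 700, 2000, 3000):
    model = train_segmenter(images[:n], masks[:n])
    scores = [dice(model(im), gt) for im, gt in zip(test_imgs, test_masks)]
    print(f"train size {n:4d}: mean test Dice = {np.mean(scores):.3f}")
```

Plotting the mean test score against each training-set size yields the kind of size-performance graph described above, from which a plateau can be read off.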

SETTING

University.

PATIENTS

Not available.

INTERVENTION

Not available.

MEASUREMENTS AND MAIN RESULTS

The size-performance graphs show a performance plateau at 700 images for uterus segmentation and 2000 images for uterus contouring. The final segmentation scores on the training and test sets were 94.6% and 84.9% (the higher, the better), and the final contour errors were 19.5% and 47.3% (the lower, the better). These results allowed us to bootstrap Uteraug, achieving AR performance equivalent to that of its current manual setup.
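The abstract reports the segmentation score and the contour error as percentages without defining the exact metrics. One commonly used contour error is the symmetric mean distance between predicted and reference boundary points, normalized here by the image diagonal; the sketch below assumes that definition purely for illustration and may differ from the paper's measure.

```python
# Hedged sketch of one possible contour-error definition (an assumption,
# not necessarily the paper's metric): symmetric mean nearest-point
# distance between two contours, expressed as % of the image diagonal.
import numpy as np

def mean_contour_error(pred_pts: np.ndarray, gt_pts: np.ndarray,
                       image_shape=(256, 256)) -> float:
    """Symmetric mean nearest-point distance, as % of the image diagonal."""
    d_pred_to_gt = np.array([np.min(np.linalg.norm(gt_pts - p, axis=1)) for p in pred_pts])
    d_gt_to_pred = np.array([np.min(np.linalg.norm(pred_pts - g, axis=1)) for g in gt_pts])
    mean_dist = 0.5 * (d_pred_to_gt.mean() + d_gt_to_pred.mean())
    diagonal = np.hypot(*image_shape)
    return 100.0 * mean_dist / diagonal

# Toy example: a reference contour and a slightly shifted prediction.
t = np.linspace(0, 2 * np.pi, 200)
gt = np.stack([128 + 60 * np.cos(t), 128 + 60 * np.sin(t)], axis=1)
pred = gt + np.array([3.0, -2.0])  # predicted contour offset by a few pixels
print(f"contour error: {mean_contour_error(pred, gt):.1f}% of image diagonal")
```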

CONCLUSION

We describe a concrete AI system in laparoscopic surgery, covering all steps from data collection, data annotation, neural network training, and performance evaluation to the final application.
