Department of Computer Science, Florida Polytechnic University, 4700 Research Way, Lakeland, FL, 33805, USA.
Department of Data Science and Business Analytics, Florida Polytechnic University, Lakeland, FL, USA.
Int J Comput Assist Radiol Surg. 2024 Apr;19(4):635-644. doi: 10.1007/s11548-023-03054-2. Epub 2024 Jan 11.
We have previously developed grading metrics to objectively measure endoscopist performance in endoscopic sleeve gastroplasty (ESG). One of our primary goals is to automate the process of measuring performance. To achieve this goal, the repeated task being performed (grasping or suturing) and the location of the endoscopic suturing device in the stomach (Incisura, Anterior Wall, Greater Curvature, or Posterior Wall) need to be accurately recorded.
For this study, we populated our dataset with screenshots and video clips of experts performing the ESG procedure on ex vivo porcine specimens. Data augmentation was used to enlarge the dataset, and the synthetic minority oversampling technique (SMOTE) was used to balance it. We then performed stomach localization (identifying the part of the stomach being manipulated) and task classification, using deep learning for images and computer vision techniques for videos.
Without SMOTE, classifying the endoscope's location in the stomach from images achieved 89% testing and 84% validation accuracy. With SMOTE, location classification reached 97% testing and 90% validation accuracy on images, and 99% testing and 98% validation accuracy on videos. For task classification, images achieved 97% testing and 89% validation accuracy, while videos achieved 100% accuracy for both testing and validation.
Using images, we classified the four stomach parts manipulated during the ESG procedure with 97% training accuracy and the two repeated tasks with 99% training accuracy. Using video frames, we classified the four stomach parts with 99% training accuracy and the two repeated tasks with 100% training accuracy. This work will be essential in automating feedback mechanisms for ESG learners.