Khojah Balsam, Enani Ghada, Saleem Abdulaziz, Malibary Nadim, Sabbagh Abdulrahman, Malibari Areej, Alhalabi Wadee
King Abdulaziz University, Jeddah, Saudi Arabia.
Surg Endosc. 2025 Apr 22. doi: 10.1007/s00464-025-11694-5.
Identifying the left ureter is a key step in laparoscopic sigmoid resection, preventing intraoperative injury and postoperative complications.
This feasibility study evaluates the real-time performance of a deep learning-based computer vision model in identifying the left ureter during laparoscopic sigmoid resection. A deep learning model for ureteral identification was developed using a semantic segmentation algorithm trained on intraoperative images of ureteral dissection extracted from laparoscopic sigmoid resection videos. We used recordings of 86 laparoscopic sigmoid resections performed at King Abdulaziz University Hospital (KAUH), which were then manually annotated: a total of 1237 images were extracted and annotated by three colorectal surgeons. Deep learning You Only Look Once (YOLO) version 8 and 11 models were applied to the video recordings of ureteral identification. Per-frame five-fold cross-validation was used to evaluate model performance.
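The abstract states that the 1237 annotated frames were evaluated with per-frame five-fold cross-validation. A minimal sketch of such a split is below; the study does not specify how frames were assigned to folds, so the round-robin assignment here is an illustrative assumption, not the authors' method.

```python
# Illustrative per-frame five-fold cross-validation split.
# The round-robin fold assignment is an assumption; the paper only
# states that five-fold cross-validation was performed per frame.
from typing import List, Tuple

def five_fold_splits(n_frames: int, k: int = 5) -> List[Tuple[List[int], List[int]]]:
    """Return (train_indices, val_indices) pairs, one per fold."""
    # Assign frame i to fold i % k (round-robin).
    folds = [list(range(i, n_frames, k)) for i in range(k)]
    splits = []
    for i in range(k):
        val = folds[i]
        train = [idx for j, fold in enumerate(folds) if j != i for idx in fold]
        splits.append((train, val))
    return splits

splits = five_fold_splits(1237)  # 1237 annotated frames, as in the study
for train, val in splits:
    assert len(train) + len(val) == 1237
    assert not set(train) & set(val)  # train and validation sets are disjoint
```

Each frame appears in exactly one validation fold, so every annotated image contributes once to the reported validation metrics.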
Experiments yielded strong results, with a mean Average Precision (mAP50) of 0.92 at an Intersection over Union (IoU) threshold of 0.5 or greater. Precision, recall, and Dice Coefficient (DC) were 0.94, 0.88, and 0.90, respectively; the highest DC, 0.95, was achieved in the fourth cross-validation fold. Under the stricter mAP50-95 metric, which averages over IoU thresholds from 0.5 to 0.95, the model scored 0.53. The model ran at 32 Frames Per Second (FPS), indicating that it can operate in real time.
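The per-mask metrics reported above can be sketched from pixel-level counts. The snippet below shows the standard definitions of precision, recall, Dice, and IoU on flat binary masks; the paper's mAP50/mAP50-95 figures additionally involve ranking predictions by confidence, which is not reproduced here.

```python
# Standard segmentation metrics on binary pixel masks, computed from
# true positives (tp), false positives (fp), and false negatives (fn).
def mask_metrics(pred, truth):
    """pred, truth: equal-length sequences of 0/1 pixel labels."""
    tp = sum(p * t for p, t in zip(pred, truth))        # predicted 1, truth 1
    fp = sum(p * (1 - t) for p, t in zip(pred, truth))  # predicted 1, truth 0
    fn = sum((1 - p) * t for p, t in zip(pred, truth))  # predicted 0, truth 1
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    dice = 2 * tp / (2 * tp + fp + fn) if tp + fp + fn else 1.0
    iou = tp / (tp + fp + fn) if tp + fp + fn else 1.0
    return precision, recall, dice, iou

# Toy 6-pixel masks with one false positive and one false negative:
p, r, d, i = mask_metrics([1, 1, 1, 0, 0, 1], [1, 1, 1, 1, 0, 0])
# tp=3, fp=1, fn=1 -> precision 0.75, recall 0.75, dice 0.75, iou 0.6
```

Note that Dice and IoU are monotonically related (Dice = 2·IoU / (1 + IoU)), which is why a prediction passing the IoU ≥ 0.5 threshold used for mAP50 always has a Dice of at least 2/3.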
Deep learning YOLO versions 8 and 11 for semantic segmentation demonstrated accurate real-time identification of the left ureter in the selected videos. Such a model could provide high-accuracy real-time identification of the left ureter during laparoscopic sigmoidectomy, complementing surgeons' expertise in intraoperative image navigation. Limitations include the sample size, the lack of diversity in surgical methods, incomplete surgical processes, and the absence of external validation.