Mansour Mohammed, Cumak Eda Nur, Kutlu Mustafa, Mahmud Shekhar
Department of Mechatronics Engineering, Sakarya University of Applied Sciences, Sakarya, Turkey.
Department of Systems Engineering, Military Technological College, Muscat, Oman.
Surg Open Sci. 2023 Aug 6;15:1-11. doi: 10.1016/j.sopen.2023.07.023. eCollection 2023 Sep.
Surgical suturing is a fundamental skill that all medical and dental students learn during their education. Currently, the grading of students' suture skills during general surgery training in the medical faculty is relative, and students do not have the opportunity to learn specific techniques. Recent technological advances, however, have made it possible to classify and measure suture skills using artificial intelligence methods, such as Deep Learning (DL). This work aims to evaluate the success of surgical suturing using DL techniques.
Six Convolutional Neural Network (CNN) models were evaluated: VGG16, VGG19, Xception, Inception, MobileNet, and DenseNet. We used a dataset of suture images containing two classes, successful and unsuccessful, and applied statistical metrics to compare the precision, recall, and F1 scores of the models.
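To illustrate the kind of pipeline described here, the following is a minimal sketch (not the authors' actual code) of fine-tuning a pretrained Xception backbone on a two-class suture-image dataset; the directory layout, image size, and hyperparameters are assumptions for demonstration only.

```python
# Minimal sketch (not the authors' code): fine-tuning a pretrained Xception
# backbone for two-class (successful/unsuccessful) suture-image classification.
# Paths, image size, and hyperparameters are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (299, 299)   # Xception's default input resolution
BATCH = 32

# Hypothetical dataset layout: data/train/{successful,unsuccessful}/*.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=IMG_SIZE, batch_size=BATCH, label_mode="binary")
val_ds = tf.keras.utils.image_dataset_from_directory(
    "data/val", image_size=IMG_SIZE, batch_size=BATCH, label_mode="binary")

base = tf.keras.applications.Xception(
    include_top=False, weights="imagenet", pooling="avg")
base.trainable = False  # freeze ImageNet features; train only the new head

inputs = layers.Input(shape=IMG_SIZE + (3,))
x = tf.keras.applications.xception.preprocess_input(inputs)
x = base(x, training=False)
x = layers.Dropout(0.3)(x)
outputs = layers.Dense(1, activation="sigmoid")(x)  # successful vs. unsuccessful

model = models.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.Precision(), tf.keras.metrics.Recall()])
model.fit(train_ds, validation_data=val_ds, epochs=10)
```

The same head can be swapped onto the other five backbones (VGG16, VGG19, Inception, MobileNet, DenseNet) to reproduce the model comparison described above.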
The results showed that Xception had the highest accuracy at 95%, followed by MobileNet at 91%, DenseNet at 90%, Inception at 84%, VGG16 at 73%, and VGG19 at 61%. We also developed a graphical user interface that allows users to evaluate suture images by uploading them or capturing them with a camera. The images are then interpreted by the DL models, and the results are displayed on the screen.
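A minimal sketch of the inference step behind such an interface is given below (an assumed workflow, not the authors' GUI code): a saved model classifies one uploaded image, and the per-model comparison metrics are computed on a held-out test set. File names and the decision threshold are placeholders.

```python
# Minimal inference sketch (assumed workflow, not the authors' GUI code).
import numpy as np
import tensorflow as tf
from sklearn.metrics import precision_recall_fscore_support

model = tf.keras.models.load_model("suture_xception.keras")  # hypothetical path

def classify_suture(image_path: str, threshold: float = 0.5) -> str:
    """Return 'successful' or 'unsuccessful' for a single suture photo."""
    img = tf.keras.utils.load_img(image_path, target_size=(299, 299))
    arr = tf.keras.utils.img_to_array(img)[np.newaxis, ...]
    arr = tf.keras.applications.xception.preprocess_input(arr)
    prob = float(model.predict(arr, verbose=0)[0, 0])
    return "successful" if prob >= threshold else "unsuccessful"

# Model comparison on a held-out test set (y_true, y_pred are 0/1 arrays):
# precision, recall, f1, _ = precision_recall_fscore_support(
#     y_true, y_pred, average="binary")
```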
The initial findings suggest that the use of DL techniques can minimize errors due to inexperience and allow physicians to use their time more efficiently by digitizing the process.