Kim Kwang Hyeon, Koo Hae-Won, Lee Byung-Jou
Clinical Research Support Center, Inje University Ilsan Paik Hospital, Goyang, Korea.
Department of Neurosurgery, Inje University Ilsan Paik Hospital, Inje University College of Medicine, Goyang, Korea.
Korean J Neurotrauma. 2024 Jun 17;20(2):90-100. doi: 10.13004/kjnt.2024.20.e17. eCollection 2024 Jun.
This study investigated the application of a deep learning-based object detection model for accurate localization and orientation estimation of spinal fixation surgical instruments during surgery.
We employed the You Only Look Once (YOLO) object detection framework with oriented bounding boxes (OBBs) to address the challenge of non-axis-aligned instruments in surgical scenes. The initial dataset of 100 images was compiled from brochure and website images from 11 manufacturers of commercially available pedicle screws used in spinal fusion surgery, and data augmentation was used to expand the dataset to 300 images. The model was trained, validated, and tested on 70%, 20%, and 10% of the lumbar pedicle screw images, respectively, with the training process running for 100 epochs.
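The 70%/20%/10% split described above can be sketched in a few lines of standard-library Python. This is a minimal illustration, not the authors' actual pipeline; the function name, file names, and random seed are assumptions added for the example.

```python
import random

def split_dataset(image_paths, train_frac=0.7, val_frac=0.2, seed=42):
    """Shuffle image paths and split them into train/val/test subsets.

    Illustrative sketch of the 70/20/10 split from the abstract; the
    remaining ~10% after train and val becomes the test set.
    """
    paths = list(image_paths)
    random.Random(seed).shuffle(paths)  # deterministic shuffle for reproducibility
    n_train = int(len(paths) * train_frac)
    n_val = int(len(paths) * val_frac)
    train = paths[:n_train]
    val = paths[n_train:n_train + n_val]
    test = paths[n_train + n_val:]
    return train, val, test

# With the augmented dataset of 300 images, this yields 210/60/30 images.
train, val, test = split_dataset([f"img_{i:03d}.png" for i in range(300)])
```

In practice such a split would be written out as the train/val/test directories that a YOLO-style training configuration expects.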
The model testing results showed that it could detect the locations of the pedicle screws in the surgical scene, as well as their orientation angles, through the OBBs. The model achieved an F1 score of 0.86 (precision: 1.00, recall: 0.80) at the corresponding confidence threshold, with performance also evaluated by mAP50. The high precision suggests that the model rarely produces false-positive instrument detections, although the recall indicates a slight limitation in capturing all instruments present. This approach offers advantages over traditional axis-aligned bounding-box detection for tasks where object orientation is crucial, and our findings suggest the potential of YOLOv8 OBB models in real-world surgical applications such as instrument tracking and surgical navigation.
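The advantage of OBBs over axis-aligned boxes for elongated, rotated instruments such as pedicle screws can be made concrete with a small geometric sketch. The (cx, cy, w, h, angle) parameterization below is an illustrative convention in the spirit of YOLOv8-OBB outputs, not the paper's exact format.

```python
import math

def obb_corners(cx, cy, w, h, angle_deg):
    """Corner points of an oriented bounding box given center, size, and angle.

    Illustrative (cx, cy, w, h, angle) convention; each corner of the
    unrotated box is rotated about the center by angle_deg.
    """
    a = math.radians(angle_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    return [
        (cx + dx * cos_a - dy * sin_a, cy + dx * sin_a + dy * cos_a)
        for dx, dy in ((-w / 2, -h / 2), (w / 2, -h / 2),
                       (w / 2, h / 2), (-w / 2, h / 2))
    ]

def axis_aligned_area(corners):
    """Area of the smallest axis-aligned box enclosing the given corners."""
    xs = [x for x, _ in corners]
    ys = [y for _, y in corners]
    return (max(xs) - min(xs)) * (max(ys) - min(ys))

# A long, thin screw-like object (200 x 20 px) rotated 45 degrees:
# the OBB area stays 200 * 20 = 4000, while the enclosing axis-aligned
# box grows to roughly 24200 -- about six times looser.
corners = obb_corners(100, 100, 200, 20, 45)
obb_area = 200 * 20
aabb_area = axis_aligned_area(corners)
```

This looseness is why axis-aligned detectors localize rotated instruments poorly: most of the box covers background, whereas the OBB also yields the orientation angle directly.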
Future work will explore incorporating additional data and the potential of hyperparameter optimization to improve overall model performance.