School of Information Science and Engineering, Yanshan University, Qinhuangdao 066000, China.
School of Mechanical Engineering, Yanshan University, Qinhuangdao 066000, China.
Sensors (Basel). 2023 May 24;23(11):5037. doi: 10.3390/s23115037.
The counting of surgical instruments is an important task for ensuring surgical safety and patient health. However, due to the uncertainty of manual operations, there is a risk of missing or miscounting instruments. Applying computer vision technology to the instrument counting process can not only improve efficiency, but also reduce medical disputes and promote the development of medical informatization. During the counting process, however, surgical instruments may be densely arranged or occlude each other, and they may be affected by varying lighting conditions, all of which can reduce the accuracy of instrument recognition. In addition, similar instruments may differ only slightly in appearance and shape, which further increases the difficulty of identification. To address these issues, this paper improves the YOLOv7x object detection algorithm and applies it to the surgical instrument detection task. First, the RepLK Block module is introduced into the YOLOv7x backbone network, which enlarges the effective receptive field and guides the network to learn more shape features. Second, the ODConv structure is introduced into the neck module of the network, which significantly enhances the feature extraction ability of the basic convolution operations of the CNN and captures richer contextual information. In addition, we created the OSI26 dataset, which contains 452 images covering 26 types of surgical instruments, for model training and evaluation. The experimental results show that our improved algorithm exhibits higher accuracy and robustness in surgical instrument detection tasks, with F1, AP, AP50, and AP75 reaching 94.7%, 91.5%, 99.1%, and 98.2%, respectively, which are 4.6%, 3.1%, 3.6%, and 3.9% higher than the baseline. Compared to other mainstream object detection algorithms, our method has significant advantages.
These results demonstrate that our method can more accurately identify surgical instruments, thereby improving surgical safety and patient health.
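The claim that the RepLK Block enlarges the effective receptive field follows from how receptive fields compose across stacked convolutions: a single large kernel widens the field far faster than a stack of small ones. As a minimal illustration (not code from the paper; the helper `receptive_field` and the layer configurations are hypothetical), the standard recurrence rf ← rf + (k − 1) · jump, jump ← jump · stride gives:

```python
def receptive_field(layers):
    """Theoretical receptive field of stacked conv layers.

    layers: list of (kernel_size, stride) tuples, applied in order.
    Uses the standard recurrence: rf grows by (kernel - 1) times the
    accumulated stride ("jump") of all preceding layers.
    """
    rf, jump = 1, 1
    for kernel, stride in layers:
        rf += (kernel - 1) * jump
        jump *= stride
    return rf

# Four stacked 3x3, stride-1 convolutions vs. one large 13x13 kernel
# (illustrative sizes, not the paper's exact configuration):
small_stack = receptive_field([(3, 1)] * 4)   # -> 9
large_kernel = receptive_field([(13, 1)])     # -> 13
```

A single large kernel thus covers in one layer what many small layers cover only cumulatively, which is the intuition behind using large-kernel blocks to capture more global shape information.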
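The ODConv structure mentioned above makes convolution input-dependent by learning attention along several kernel dimensions (spatial, input channel, output channel, and kernel number) and blending candidate kernels accordingly. The sketch below illustrates only the kernel-number dimension in pure Python, under the simplifying assumption of a single 2-D kernel per candidate; `dynamic_kernel` and its inputs are hypothetical names, not the paper's implementation:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def dynamic_kernel(kernels, logits):
    """Blend n candidate kernels into one effective kernel.

    kernels: list of n kernels, each a 2-D list of floats (same shape).
    logits:  n attention scores, e.g. produced from the input feature
             map by a small gating branch (omitted here).
    Returns the attention-weighted sum of the candidate kernels, which
    is the per-kernel-number mixing step of dynamic convolution.
    """
    attn = softmax(logits)
    rows, cols = len(kernels[0]), len(kernels[0][0])
    return [[sum(a * k[i][j] for a, k in zip(attn, kernels))
             for j in range(cols)] for i in range(rows)]

# Equal logits -> the blended kernel is the plain average of candidates:
k1 = [[1.0, 1.0], [1.0, 1.0]]
k2 = [[3.0, 3.0], [3.0, 3.0]]
blended = dynamic_kernel([k1, k2], [0.0, 0.0])  # -> [[2.0, 2.0], [2.0, 2.0]]
```

Because the attention scores depend on the input, the effective kernel adapts per sample, which is what lets dynamic convolution extract richer contextual features than a fixed kernel at similar cost.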