Wang Ting, Wen Yingang, Wang Zhibiao, Li Xi
Foundation Department, Chongqing Medical and Pharmaceutical College, Chongqing, China.
National Engineering Research Center of Ultrasonic Medicine, Chongqing, China.
Front Artif Intell. 2025 Jul 23;8:1613960. doi: 10.3389/frai.2025.1613960. eCollection 2025.
Focused ultrasound ablation surgery (FUAS) for uterine fibroids often suffers from low surgical precision and poor consistency, caused in part by variations in clinical experience and operator fatigue. To address these challenges, this study aims to develop an intelligent three-dimensional (3D) visualization and navigation system that integrates magnetic resonance imaging (MRI) with real-time ultrasound (US) imaging, thereby improving the accuracy and efficiency of uterine fibroid surgery.
MRI and US images from 638 patients were annotated by experienced clinicians. The nnU-Net algorithm was used for preoperative segmentation and 3D reconstruction of MRI images to provide detailed visualization of fibroid morphology. The YOLACT model was applied to achieve rapid delineation of the uterus and key anatomical structures in real-time US images. To enhance the accuracy of lesion localization and navigation, the Iterative Closest Point (ICP) algorithm was employed for the registration of preoperative MRI with intraoperative US images.
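The abstract does not publish the registration code, but the ICP step it names can be illustrated in a few lines. The sketch below is our own minimal brute-force NumPy version (nearest-neighbour matching alternated with a Kabsch/SVD rigid fit); the toy point grid standing in for MRI and US surface points, and all function names, are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def best_fit_transform(A, B):
    """Least-squares rigid transform (R, t) mapping points A onto B (Kabsch/SVD)."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cb - R @ ca

def icp(src, dst, iters=50, tol=1e-9):
    """Align src to dst by alternating nearest-neighbour matching and Kabsch fits."""
    cur, prev = src.copy(), np.inf
    for _ in range(iters):
        # brute-force nearest neighbour in dst for every current source point
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        nn = d.argmin(axis=1)
        err = d[np.arange(len(cur)), nn].mean()
        R, t = best_fit_transform(cur, dst[nn])
        cur = cur @ R.T + t
        if abs(prev - err) < tol:
            break
        prev = err
    R, t = best_fit_transform(src, cur)   # net transform: src frame -> dst frame
    return R, t, cur

# Toy demo (our assumption): a 3x3x3 grid of "preoperative MRI" surface points,
# and the same points after a known rigid motion playing the role of the
# intraoperative US frame. ICP should recover the motion.
g = np.arange(3.0)
src = np.array(np.meshgrid(g, g, g)).reshape(3, -1).T
theta = 0.1
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
dst = src @ R_true.T + np.array([0.1, -0.2, 0.05])
R_est, t_est, aligned = icp(src, dst)
```

A production system would use a KD-tree for the nearest-neighbour search and outlier rejection for partial overlap; the clinical pipeline here also handles deformation and occlusion, which plain rigid ICP does not.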
Experimental results demonstrated that the system achieved a Dice Similarity Coefficient (DSC) exceeding 90% for the segmentation and identification of anatomical structures such as the uterus and fibroids. The YOLACT model achieved an accuracy greater than 95% in identifying key structures in real-time US images. In 90% of cases, the system enabled efficient and precise tracking; however, approximately 5% of cases required manual adjustment because of discrepancies between patient anatomy and preoperative MRI data.

The proposed intelligent navigation system, based on MRI-US image fusion, offers an efficient and automated solution for FUAS treatment of uterine fibroids, significantly improving surgical precision and operational efficiency, and demonstrates strong clinical applicability. Future research will focus on enhancing the adaptability of the system, particularly in handling significant tissue deformation and occlusion, to improve its robustness in complex clinical scenarios.
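For readers unfamiliar with the DSC metric reported above, it measures the overlap between a predicted segmentation mask and the ground-truth mask. The following is a minimal sketch of the standard definition on binary masks; the toy 4x4 masks are our own illustration, not data from the study.

```python
import numpy as np

def dice(pred, gt, eps=1e-8):
    """Dice Similarity Coefficient: 2|A n B| / (|A| + |B|) on binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + eps)

# Toy 4x4 masks: the predicted region is the ground truth shifted one column.
gt = np.zeros((4, 4), dtype=int)
gt[:, 0:3] = 1        # 12 foreground pixels
pred = np.zeros((4, 4), dtype=int)
pred[:, 1:4] = 1      # 12 foreground pixels, 8 of them overlapping gt
score = dice(pred, gt)   # 2*8 / (12 + 12) = 0.666...
```

A DSC above 0.9, as reported for the uterus and fibroid segmentations, indicates that predicted and expert-annotated masks agree on the large majority of voxels.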