Control of Networked Systems Research Group, Institute of Smart Systems Technologies, University of Klagenfurt, Klagenfurt, Austria.
Eur Radiol Exp. 2023 Jun 19;7(1):30. doi: 10.1186/s41747-023-00344-x.
Artificial intelligence (AI)-powered, robot-assisted, and ultrasound (US)-guided interventional radiology has the potential to increase the efficacy and cost-efficiency of interventional procedures while improving postsurgical outcomes and reducing the burden for medical personnel.
To overcome the lack of available clinical data needed to train state-of-the-art AI models, we propose a novel approach for generating synthetic ultrasound data from real, clinical preoperative three-dimensional (3D) data of different imaging modalities. With the synthetic data, we trained a deep learning-based detection algorithm for the localization of needle tip and target anatomy in US images. We validated our models on real, in vitro US data.
The resulting models generalize well to unseen synthetic data and to experimental in vitro data, making the proposed approach a promising method for creating AI-based models for needle and target detection in minimally invasive US-guided procedures. Moreover, we show that, after a one-time calibration of the US and robot coordinate frames, our tracking algorithm can accurately fine-position the robot within reach of the target using 2D US images alone.
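The one-time calibration described above amounts to fixing a rigid transform between the US image plane and the robot base frame, after which any target detected in a 2D US image can be expressed in robot coordinates. The following minimal sketch illustrates that chain of transforms; the function name, arguments, and values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def pixel_to_robot(u, v, px_to_mm, T_robot_probe, T_probe_image):
    """Map a 2D US pixel (u, v) to a 3D point in the robot base frame.

    px_to_mm      : image scaling (mm per pixel), from the US system settings
    T_robot_probe : 4x4 pose of the probe in the robot frame (robot kinematics)
    T_probe_image : 4x4 image-plane-to-probe transform, found once by calibration
    """
    # The US image is planar, so the point lies at z = 0 in the image frame.
    p_img = np.array([u * px_to_mm, v * px_to_mm, 0.0, 1.0])
    # Chain the calibration and kinematic transforms, drop the homogeneous 1.
    return (T_robot_probe @ T_probe_image @ p_img)[:3]

# With identity transforms, the pixel maps directly to scaled mm coordinates.
T_id = np.eye(4)
print(pixel_to_robot(100, 50, 0.1, T_id, T_id))  # [10.  5.  0.]
```

Because `T_probe_image` is fixed by the one-time calibration and `T_robot_probe` comes from the robot's own kinematics, no external 3D tracking is needed at runtime, which is consistent with the claim that 2D US images alone suffice for fine positioning.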
The proposed data generation approach is sufficient to bridge the simulation-to-real gap and has the potential to overcome data paucity challenges in interventional radiology. The proposed AI-based detection algorithm shows very promising results in terms of accuracy and frame rate.
This approach can facilitate the development of next-generation AI algorithms for patient anatomy detection and needle tracking in US and their application to robotics.
• AI-based methods show promise for needle and target detection in US-guided interventions.
• Publicly available, annotated datasets for training AI models are limited.
• Synthetic, clinical-like US data can be generated from magnetic resonance or computed tomography data.
• Models trained with synthetic US data generalize well to real in vitro US data.
• Target detection with an AI model can be used for fine positioning of the robot.