Department of Biomedical Imaging, Faculty of Medicine, Universiti Malaya, Kuala Lumpur, Malaysia.
Universiti Malaya Research Imaging Centre (UMRIC), Faculty of Medicine, Universiti Malaya, Kuala Lumpur, Malaysia.
Med Phys. 2024 Oct;51(10):7191-7205. doi: 10.1002/mp.17349. Epub 2024 Aug 14.
Fluoroscopy-guided interventions (FGIs) pose a risk of prolonged radiation exposure; personalized patient dosimetry is therefore needed to improve patient safety during these procedures. However, current FGI systems do not capture the precise exposure regions of the patient, making patient- and procedure-specific dosimetry challenging. There is thus a pressing need for approaches that extract and use this information to enable personalized radiation dosimetry for interventional procedures.
To propose a deep learning (DL) approach for the automatic localization of 3D anatomical landmarks on randomly collimated and magnified 2D head fluoroscopy images.
The model was developed with datasets comprising 800 000 pseudo-2D synthetic images (a mixture of vessel-enhanced and non-enhanced images), each with 55 annotated anatomical landmarks (including two eye-lens landmarks), generated from 135 retrospectively collected head computed tomography (CT) volumes. Before training, dynamic random cropping was performed to mimic the varied field-size collimation used in FGI procedures. Gaussian-distributed additive noise was applied to each image to make the DL model robust to the image degradation that may occur during clinical image acquisition. The model was trained on 629 370 synthetic images for approximately 275 000 iterations and evaluated against a synthetic-image test set and a clinical fluoroscopy test set.
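The two augmentations described above (random cropping to mimic collimation, then additive Gaussian noise) can be sketched as follows. This is a minimal illustration, not the authors' implementation; the crop-fraction bound `min_frac` and noise level `noise_sigma` are assumed parameters.

```python
import numpy as np

def augment(image: np.ndarray, rng: np.random.Generator,
            min_frac: float = 0.4, noise_sigma: float = 0.02) -> np.ndarray:
    """Dynamic random crop (mimicking varied field-size collimation)
    followed by Gaussian-distributed additive noise.

    image: 2D array with intensities in [0, 1].
    """
    h, w = image.shape
    # Draw a random crop size between min_frac of the image and the full field.
    ch = int(rng.integers(int(min_frac * h), h + 1))
    cw = int(rng.integers(int(min_frac * w), w + 1))
    # Random placement of the collimated field within the image.
    y0 = int(rng.integers(0, h - ch + 1))
    x0 = int(rng.integers(0, w - cw + 1))
    cropped = image[y0:y0 + ch, x0:x0 + cw]
    # Add zero-mean Gaussian noise to simulate acquisition degradation.
    noisy = cropped + rng.normal(0.0, noise_sigma, size=cropped.shape)
    return np.clip(noisy, 0.0, 1.0)
```

Applying `augment` independently to each training image yields a different collimation geometry and noise realization every epoch.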
The model performs well in estimating both in-image and out-of-image landmark positions and demonstrates the feasibility of instantiating the skull shape. It successfully detected 96.4% of 2D landmarks and 92.5% of 3D landmarks within a 10 mm error on synthetic test images. On clinical fluoroscopy images, it achieved a mean radial error of 3.6 ± 2.3 mm and successfully detected 96.8% of 2D landmarks within a 10 mm error.
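The two metrics reported above, mean radial error and the fraction of landmarks detected within a tolerance, are standard Euclidean-distance measures and can be sketched as below. Function names here are illustrative, not from the paper.

```python
import numpy as np

def radial_errors(pred: np.ndarray, gt: np.ndarray) -> np.ndarray:
    """Per-landmark Euclidean distance in mm.

    pred, gt: arrays of shape (N, D) with D = 2 or 3 coordinates per landmark.
    """
    return np.linalg.norm(pred - gt, axis=1)

def detection_rate(pred: np.ndarray, gt: np.ndarray, tol_mm: float = 10.0) -> float:
    """Fraction of landmarks whose radial error is within tol_mm."""
    return float((radial_errors(pred, gt) <= tol_mm).mean())
```

For example, a prediction 3-4 mm off in each axis gives a 5 mm radial error, which would count as detected under the 10 mm tolerance used in the study.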
Our deep learning model successfully localizes anatomical landmarks and estimates the gross shape of skull structures from collimated 2D projection views. This method may help identify the exposure region required for patient-specific organ dosimetry in FGI procedures.