Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland.
Geneva University Neurocenter, Geneva University, Geneva, Switzerland.
Eur Radiol. 2023 May;33(5):3243-3252. doi: 10.1007/s00330-023-09424-3. Epub 2023 Jan 27.
This study aimed to improve patient positioning accuracy using a CT localizer and a deep neural network, thereby optimizing image quality and radiation dose.
We included 5754 chest CT axial and anterior-posterior (AP) images from two different centers, C1 and C2. After pre-processing, images were split into training (80%) and test (20%) datasets. A deep neural network was trained to generate 3D axial images from the AP localizer. The geometric centerline of each patient's body was estimated by fitting a bounding box to the predicted images. The distance between the body centerline estimated by the deep learning model and the ground-truth centerline (BCAP) was compared with the patient mis-centering observed during manual positioning (BCMP). We also evaluated the model in terms of the distance between the lung centerline estimated by the deep learning model and the ground truth (LCAP).
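The centerline metric described above can be sketched in a few lines. This is a minimal illustration under assumed conventions (the function names, a crude intensity threshold in place of the paper's trained network, and a uniform pixel spacing are all hypothetical): the body centerline along the AP axis is taken as the center of a bounding box on the image, and BCAP is the signed distance between the predicted and ground-truth centerlines.

```python
# Hypothetical sketch of the BCAP metric: bounding-box body centerline
# along the anterior-posterior (AP) axis, compared between a predicted
# axial image and the ground truth. Thresholding stands in for the
# deep-learning segmentation used in the study.
import numpy as np

def body_centerline_ap(axial_image, threshold=0.1):
    """Return the AP coordinate (in pixels) of the body bounding-box center."""
    mask = axial_image > threshold           # crude body segmentation
    rows = np.where(mask.any(axis=1))[0]     # rows (AP axis) containing body
    return (rows.min() + rows.max()) / 2.0   # bounding-box center along AP

def bcap_error_mm(pred_image, gt_image, pixel_spacing_mm=1.0):
    """Signed AP distance between predicted and ground-truth body centerlines."""
    delta_px = body_centerline_ap(pred_image) - body_centerline_ap(gt_image)
    return delta_px * pixel_spacing_mm

# Toy example: body occupies rows 40-80 (prediction) vs 42-82 (ground truth),
# so the predicted centerline sits 2 pixels anterior of the ground truth.
pred = np.zeros((128, 128)); pred[40:81, 30:100] = 1.0
gt = np.zeros((128, 128));   gt[42:83, 30:100] = 1.0
print(bcap_error_mm(pred, gt))  # → -2.0
```

Averaging this signed error (and its absolute value) over a test set yields summary statistics of the form reported in the Results.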
The error in terms of BCAP was -0.75 ± 7.73 mm and 2.06 ± 10.61 mm for C1 and C2, respectively. This error was significantly lower than BCMP, which showed an error of 9.35 ± 14.94 and 13.98 ± 14.5 mm for C1 and C2, respectively. The absolute BCAP was 5.7 ± 5.26 and 8.26 ± 6.96 mm for C1 and C2, respectively. The LCAP metric was 1.56 ± 10.8 and -0.27 ± 16.29 mm for C1 and C2, respectively. The error in terms of BCAP and LCAP was higher for larger patients (p < 0.01).
The accuracy of the proposed method was comparable to that of available alternative methods, with the advantage of being free from errors caused by objects blocking the camera's view.
• Patient mis-centering in the anterior-posterior (AP) direction is a common problem in clinical practice that can degrade image quality and increase patient radiation dose.
• We proposed a deep neural network for automatic patient positioning that relies only on the CT localizer, achieving performance comparable to alternative techniques such as an external 3D visual camera.
• The proposed method is free from errors caused by objects blocking the camera's view and could be implemented on imaging consoles as a patient positioning support tool.