Laboratory of Biological Structures Mechanics, IRCCS Istituto Ortopedico Galeazzi, Via Galeazzi 4, 20161, Milan, Italy.
Institute of Orthopedic Research and Biomechanics, Center for Trauma Research Ulm, Ulm University, Ulm, Germany.
Eur Spine J. 2019 May;28(5):951-960. doi: 10.1007/s00586-019-05944-z. Epub 2019 Mar 12.
We present an automated method for extracting anatomical parameters from biplanar radiographs of the spine that can handle a wide range of conditions, including sagittal and coronal deformities, degenerative changes, and images acquired with different fields of view.
The locations of 78 landmarks (end plate centers, hip joint centers, and margins of the S1 end plate) were extracted from three-dimensional reconstructions of 493 spines of patients suffering from various disorders, including adolescent idiopathic scoliosis, adult deformities, and spinal stenosis. A fully convolutional neural network featuring an additional differentiable spatial-to-numerical transform (DSNT) layer was trained to predict the location of each landmark. The values of several parameters (T4-T12 kyphosis, L1-L5 lordosis, Cobb angle of scoliosis, pelvic incidence, sacral slope, and pelvic tilt) were then calculated from the landmark locations. A quantitative comparison between the predicted parameters and the ground truth was performed on a set of 50 patients.
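To make the pipeline concrete, the sketch below illustrates the two core steps in a hedged form: a DSNT-style layer that converts a per-landmark heatmap into numerical (x, y) coordinates by taking the expectation over a softmax-normalized spatial map, and an angle computed between two landmark-derived directions, as one would do for a kyphosis, lordosis, or Cobb measurement. The tensor shapes, landmark count, and example vectors are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch (assumed shapes and values, not the published code).
import math
import torch
import torch.nn.functional as F


def dsnt(heatmaps: torch.Tensor) -> torch.Tensor:
    """Differentiable spatial-to-numerical transform.

    heatmaps: (batch, n_landmarks, H, W) raw network outputs.
    Returns (batch, n_landmarks, 2) coordinates in [-1, 1].
    """
    b, n, h, w = heatmaps.shape
    # Softmax over the spatial dimensions so each heatmap is a probability map.
    probs = F.softmax(heatmaps.view(b, n, -1), dim=-1).view(b, n, h, w)
    # Normalized coordinate grids.
    ys = torch.linspace(-1.0, 1.0, h, device=heatmaps.device)
    xs = torch.linspace(-1.0, 1.0, w, device=heatmaps.device)
    # Expected (x, y) under each probability map -> differentiable coordinates.
    x = (probs.sum(dim=2) * xs).sum(dim=-1)  # marginal over rows, weighted by column positions
    y = (probs.sum(dim=3) * ys).sum(dim=-1)  # marginal over columns, weighted by row positions
    return torch.stack([x, y], dim=-1)


def angle_between(v_sup: torch.Tensor, v_inf: torch.Tensor) -> float:
    """Angle in degrees between two 2-D directions derived from landmarks,
    e.g. the upper end plate of T4 and the lower end plate of T12."""
    cos = torch.dot(v_sup, v_inf) / (v_sup.norm() * v_inf.norm())
    return math.degrees(torch.acos(cos.clamp(-1.0, 1.0)).item())


if __name__ == "__main__":
    fake_heatmaps = torch.randn(1, 78, 64, 32)  # 78 landmarks, toy resolution
    coords = dsnt(fake_heatmaps)
    print(coords.shape)  # torch.Size([1, 78, 2])
    print(angle_between(torch.tensor([1.0, 0.2]), torch.tensor([1.0, -0.4])))
```

In this formulation the coordinate prediction stays differentiable end to end, so the network can be trained directly against landmark positions rather than against intermediate heatmaps.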
The spine shape predicted by the models was perceptually convincing in all cases. All predicted parameters were strongly correlated with the ground truth. However, the standard errors of the estimated parameters ranged from 2.7° (for the pelvic tilt) to 11.5° (for the L1-L5 lordosis).
The proposed method is able to automatically determine the spine shape in biplanar radiographs and calculate anatomical and posture parameters across a wide range of clinical conditions with very good visual performance, despite the limitations highlighted by the statistical analysis of the results. These slides can be retrieved under Electronic Supplementary Material.