Chan Peter Y W, Baker Courtney E, Suh Yehyun, Moyer Daniel, Martin J Ryan
Department of Orthopaedic Surgery, Vanderbilt University Medical Center, Nashville, Tennessee.
Department of Computer Science, Vanderbilt School of Engineering, Nashville, Tennessee.
J Arthroplasty. 2025 Aug;40(8):2092-2100. doi: 10.1016/j.arth.2025.01.032. Epub 2025 Jan 27.
Novel methods for annotating antero-posterior pelvis radiographs and fluoroscopic images with deep learning models have recently been developed. However, their clinical use has been limited. Therefore, the purpose of this study was to develop a deep learning model that could annotate clinically relevant pelvic landmarks on both radiographic and fluoroscopic images and automate total hip arthroplasty (THA)-relevant measurements.
A deep learning model was developed using imaging from 161 primary THAs. A combination of preoperative and postoperative antero-posterior pelvis radiographs and intraoperative fluoroscopic images was annotated. A landmark detection model was then designed to annotate pelvis radiographs and fluoroscopic images. The algorithm was used to automate the measurement of pelvic tilt, leg length, offset, acetabular component abduction, and inclination.
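To illustrate how detected landmark coordinates can be converted into THA-relevant measurements, the sketch below computes an acetabular component abduction angle and a leg-length difference from hypothetical 2D landmarks. The landmark names, coordinates, reference lines, and formulas are assumptions chosen for illustration only and are not the measurement pipeline reported in the paper.

```python
import numpy as np

# Illustrative only: landmark names and reference lines are assumptions,
# not the landmark set or formulas used in the study.

def angle_to_reference(p1, p2, ref1, ref2):
    """Angle in degrees between line p1-p2 and the reference line ref1-ref2."""
    v = np.asarray(p2, float) - np.asarray(p1, float)
    r = np.asarray(ref2, float) - np.asarray(ref1, float)
    cos = abs(np.dot(v, r)) / (np.linalg.norm(v) * np.linalg.norm(r))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

def perp_distance(point, ref1, ref2):
    """Signed perpendicular distance from a point to the reference line."""
    p = np.asarray(point, float)
    a, b = np.asarray(ref1, float), np.asarray(ref2, float)
    d = b - a
    # 2-D cross product gives the signed area, divided by the line length.
    return float((d[0] * (p - a)[1] - d[1] * (p - a)[0]) / np.linalg.norm(d))

# Hypothetical output of a landmark-detection model: name -> (x, y) pixels.
landmarks = {
    "right_teardrop": (412.0, 1010.0),
    "left_teardrop": (1620.0, 1004.0),
    "cup_lateral_edge": (1480.0, 870.0),
    "cup_medial_edge": (1690.0, 995.0),
    "right_lesser_trochanter": (520.0, 1405.0),
    "left_lesser_trochanter": (1540.0, 1388.0),
}

# Acetabular component abduction: angle of the cup's long axis relative to
# the inter-teardrop line (one common radiographic convention).
abduction = angle_to_reference(
    landmarks["cup_medial_edge"], landmarks["cup_lateral_edge"],
    landmarks["right_teardrop"], landmarks["left_teardrop"],
)

# Leg-length difference: side-to-side comparison of the perpendicular distance
# from each lesser trochanter to the inter-teardrop line (pixels; a calibration
# marker would be needed to convert to millimetres).
ref = (landmarks["right_teardrop"], landmarks["left_teardrop"])
lld = (perp_distance(landmarks["left_lesser_trochanter"], *ref)
       - perp_distance(landmarks["right_lesser_trochanter"], *ref))

print(f"abduction ~ {abduction:.1f} deg, leg-length difference ~ {lld:.1f} px")
```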
Our novel deep learning model annotated pelvic landmarks as well as, if not better than, trained humans for 16 of 20 bony landmarks on radiographs, four of four implant landmarks on radiographs, and five of eight bony landmarks on fluoroscopy. Measurements of cup inclination and anteversion, pelvic tilt, offset, and leg length were successfully calculated.
We have developed a novel deep learning model that can automate the identification of clinically relevant pelvic landmarks in real time and provide THA-relevant measurements that are equivalent to those of trained humans. We believe the model could be rapidly incorporated into clinical practice for both surgical and research applications.