Department of Orthopedic Surgery, Orthopedic Surgery Artificial Intelligence Laboratory, Mayo Clinic, Rochester, Minnesota.
Mayo Clinic Alix School of Medicine, Mayo Clinic, Rochester, Minnesota.
J Arthroplasty. 2023 Oct;38(10):2024-2031.e1. doi: 10.1016/j.arth.2023.05.036. Epub 2023 May 24.
Automatic methods for labeling and segmenting pelvis structures can improve the efficiency of clinical and research workflows and reduce the variability introduced by manual labeling. The purpose of this study was to develop a single deep learning model to annotate anatomical structures and landmarks on anteroposterior (AP) pelvis radiographs.
A total of 1,100 AP pelvis radiographs were manually annotated by 3 reviewers. These images included a mix of preoperative and postoperative images as well as a mix of AP pelvis and hip views. A convolutional neural network was trained to segment 22 different structures (7 points, 6 lines, and 9 shapes). Dice score, which measures the overlap between model output and ground truth, was calculated for the shape and line structures; Euclidean distance error was calculated for the point structures.
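Both evaluation metrics are standard; a minimal Python sketch of each is shown below. The function names, the toy inputs, and the assumption of isotropic pixel spacing for converting pixel distances to millimeters are ours, not from the paper.

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice coefficient between two binary masks: 2*|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return float(2.0 * intersection / denom) if denom > 0 else 1.0

def point_distance_mm(pred_xy, truth_xy, pixel_spacing_mm):
    """Euclidean distance between a predicted and a ground-truth landmark, in mm.

    Assumes isotropic pixel spacing (mm per pixel), e.g. from image metadata.
    """
    pred_xy = np.asarray(pred_xy, dtype=float)
    truth_xy = np.asarray(truth_xy, dtype=float)
    return float(np.linalg.norm(pred_xy - truth_xy) * pixel_spacing_mm)

# Toy example: two overlapping 6x6 square masks and one landmark pair.
a = np.zeros((10, 10), dtype=bool); a[2:8, 2:8] = True
b = np.zeros((10, 10), dtype=bool); b[3:9, 3:9] = True
print(dice_score(a, b))                                 # 50/72 ≈ 0.69
print(point_distance_mm((120, 340), (118, 338), 0.14))  # ≈ 0.40 mm
```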
Dice score averaged across all images in the test set was 0.88 for the shape structures and 0.80 for the line structures. For the 7 point structures, the average distance between manual and automated annotations ranged from 1.9 mm to 5.6 mm, with all averages falling below 3.1 mm except for the landmark at the center of the sacrococcygeal junction, where performance was low for both human- and machine-produced labels. Blinded qualitative evaluation of human- and machine-produced segmentations did not reveal any marked decrease in performance for the automatic method.
We present a deep learning model for automated annotation of pelvis radiographs that flexibly handles a variety of views, contrasts, and operative statuses for 22 structures and landmarks.