
A Deep Learning Tool for Automated Landmark Annotation on Hip and Pelvis Radiographs.

Affiliations

Department of Orthopedic Surgery, Orthopedic Surgery Artificial Intelligence Laboratory, Mayo Clinic, Rochester, Minnesota.

Mayo Clinic Alix School of Medicine, Mayo Clinic, Rochester, Minnesota.

Publication Information

J Arthroplasty. 2023 Oct;38(10):2024-2031.e1. doi: 10.1016/j.arth.2023.05.036. Epub 2023 May 24.

Abstract

BACKGROUND

Automatic methods for labeling and segmenting pelvis structures can improve the efficiency of clinical and research workflows and reduce the variability introduced by manual labeling. The purpose of this study was to develop a single deep learning model to annotate certain anatomical structures and landmarks on anteroposterior (AP) pelvis radiographs.

METHODS

A total of 1,100 AP pelvis radiographs were manually annotated by 3 reviewers. These images included a mix of preoperative and postoperative images as well as a mix of AP pelvis and hip images. A convolutional neural network was trained to segment 22 different structures (7 points, 6 lines, and 9 shapes). Dice score, which measures overlap between model output and ground truth, was calculated for the shape and line structures. Euclidean distance error was calculated for the point structures.
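The two evaluation metrics described above are standard and easy to reproduce. As a minimal sketch (not the authors' code; the toy masks, landmark coordinates, and the `mm_per_pixel` scaling parameter are illustrative assumptions), Dice score and Euclidean landmark error can be computed as:

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice coefficient 2|A∩B| / (|A| + |B|) between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

def point_error_mm(pred_xy, truth_xy, mm_per_pixel: float = 1.0) -> float:
    """Euclidean distance between predicted and ground-truth landmarks,
    converted from pixels to millimeters via the pixel spacing."""
    diff = np.asarray(pred_xy, dtype=float) - np.asarray(truth_xy, dtype=float)
    return float(np.linalg.norm(diff) * mm_per_pixel)

# Toy example: two 4x4 masks with 4 foreground pixels each, 3 overlapping,
# so Dice = 2*3 / (4 + 4) = 0.75.
a = np.zeros((4, 4), dtype=bool); a[0, 0:4] = True
b = np.zeros((4, 4), dtype=bool); b[0, 1:4] = True; b[1, 0] = True
print(dice_score(a, b))            # → 0.75
print(point_error_mm((0, 0), (3, 4)))  # → 5.0
```

In practice the per-image scores would be averaged over the test set separately for shape, line, and point structures, as reported in the Results.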

RESULTS

Dice score averaged across all images in the test set was 0.88 for the shape structures and 0.80 for the line structures. For the 7 point structures, the average distance between manual and automated annotations ranged from 1.9 mm to 5.6 mm, with all averages falling below 3.1 mm except for the landmark at the center of the sacrococcygeal junction, where performance was low for both human- and machine-produced labels. Blinded qualitative evaluation of human- and machine-produced segmentations did not reveal any drastic decrease in performance of the automatic method.

CONCLUSION

We present a deep learning model for automated annotation of pelvis radiographs that flexibly handles a variety of views, contrasts, and operative statuses for 22 structures and landmarks.

