Automated semantic labeling of pediatric musculoskeletal radiographs using deep learning.

Affiliations

The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, 601 N. Caroline St., Room 4223, Baltimore, MD, 21287, USA.

Radiology Artificial Intelligence Lab (RAIL), Malone Center for Engineering in Healthcare, Johns Hopkins University Whiting School of Engineering, Baltimore, MD, USA.

Publication information

Pediatr Radiol. 2019 Jul;49(8):1066-1070. doi: 10.1007/s00247-019-04408-2. Epub 2019 Apr 30.

Abstract

BACKGROUND

An automated method for identifying the anatomical region of an image independent of metadata labels could improve radiologist workflow (e.g., automated hanging protocols) and help facilitate the automated curation of large medical imaging data sets for machine learning purposes. Deep learning is a potential tool for this purpose.

OBJECTIVE

To develop and test the performance of deep convolutional neural networks (DCNN) for the automated classification of pediatric musculoskeletal radiographs by anatomical area.

MATERIALS AND METHODS

We utilized a database of 250 pediatric bone radiographs (50 each of the shoulder, elbow, hand, pelvis and knee) to train 5 DCNNs, one to detect each anatomical region amongst the others, based on ResNet-18 pretrained on ImageNet (transfer learning). For each DCNN, the radiographs were randomly split into training (64%), validation (12%) and test (24%) data sets. The training and validation data sets were augmented 30 times using standard preprocessing methods. We also tested our DCNNs on a separate test set of 100 radiographs from a single institution. Receiver operating characteristics (ROC) with area under the curve (AUC) were used to evaluate DCNN performances.
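
Below is a minimal sketch, not the authors' implementation, of how one of the five one-vs-rest classifiers described above could be set up with transfer learning from ImageNet-pretrained ResNet-18 in PyTorch/torchvision. The dataset folder layout (data/shoulder_vs_rest), the specific augmentation transforms, and the hyperparameters (batch size, learning rate, epoch count) are assumptions, since the abstract only states that standard preprocessing methods were used for 30-fold augmentation.

# Sketch of one one-vs-rest classifier (e.g., shoulder vs. all other regions).
# Paths, transforms and hyperparameters are assumptions, not the authors' settings.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Augmentation of the kind the abstract calls "standard preprocessing methods"
# (exact transforms are an assumption).
train_transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),     # radiographs are single-channel
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),
    transforms.RandomRotation(10),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Assumed ImageFolder layout: data/shoulder_vs_rest/train/{shoulder,other}/
train_set = datasets.ImageFolder("data/shoulder_vs_rest/train", transform=train_transform)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=16, shuffle=True)

# Transfer learning: start from ImageNet weights, replace the final layer with a 2-class head.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(10):                              # epoch count is an assumption
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()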

RESULTS

All five DCNNs trained to classify the radiographs by anatomical region achieved an ROC AUC of 1 on both test sets. Classification of the test radiographs occurred at a rate of 33 radiographs per second.
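
Continuing the sketch above, the following shows one way the reported per-network ROC AUC could be computed on a held-out test set with scikit-learn; the test pipeline (no augmentation), the folder layout, and the softmax scoring of the target-region class are assumptions.

# Evaluation sketch: ROC AUC for one one-vs-rest network; `model` is the
# fine-tuned ResNet-18 from the training sketch above.
import torch
import torch.nn.functional as F
from torchvision import datasets, transforms
from sklearn.metrics import roc_auc_score

test_transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
test_set = datasets.ImageFolder("data/shoulder_vs_rest/test", transform=test_transform)
test_loader = torch.utils.data.DataLoader(test_set, batch_size=16, shuffle=False)

model.eval()
scores, targets = [], []
with torch.no_grad():
    for images, labels in test_loader:
        # Class index 1 is assumed to be the target region (ImageFolder orders classes alphabetically).
        probs = F.softmax(model(images), dim=1)[:, 1]
        scores.extend(probs.tolist())
        targets.extend(labels.tolist())

print("ROC AUC:", roc_auc_score(targets, scores))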

CONCLUSION

DCNNs trained on a small set of images with 30-fold augmentation using standard processing techniques are able to automatically classify pediatric musculoskeletal radiographs by anatomical region with near-perfect to perfect accuracy at superhuman speeds. This concept may apply to other body parts and radiographic views, with the potential to create an all-encompassing semantic-labeling DCNN.
