Yee Liang Thian, Yiting Li, Pooja Jagmohan, David Sia, Vincent Ern Yao Chan, Robby T. Tan
Department of Diagnostic Imaging (Y.L.T., P.J., D.S., V.E.Y.C.) and Department of Electrical and Computer Engineering (Y.L., R.T.T.), National University of Singapore, 5 Lower Kent Ridge Rd, Singapore 119074; and Science Division, Yale-NUS College, Singapore (R.T.T.).
Radiol Artif Intell. 2019 Jan 30;1(1):e180001. doi: 10.1148/ryai.2019180001. eCollection 2019 Jan.
Purpose: To demonstrate the feasibility and performance of an object detection convolutional neural network (CNN) for fracture detection and localization on wrist radiographs.
Materials and Methods: Institutional review board approval was obtained with a waiver of consent for this retrospective study. A total of 7356 wrist radiographic studies were extracted from a hospital picture archiving and communication system. Radiologists annotated all radius and ulna fractures with bounding boxes. The dataset was split into training (90%) and validation (10%) sets and used to train fracture localization models for frontal and lateral images. A Faster R-CNN object detection network with an Inception-ResNet backbone was implemented as the deep learning model. The models were tested on an unseen test set of 524 consecutive emergency department wrist radiographic studies, with two radiologists in consensus as the reference standard. Per-fracture, per-image (ie, per-view), and per-study sensitivity and specificity were determined, and area under the receiver operating characteristic curve (AUC) analysis was performed.
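As a rough sketch of the detection setup described above: the abstract does not give the exact Inception-ResNet Faster R-CNN configuration, and torchvision does not ship an Inception-ResNet backbone, so the example below substitutes torchvision's ResNet-50 FPN Faster R-CNN (torchvision ≥ 0.13 assumed for the weights= keyword) purely to illustrate the bounding-box training and inference interface. The image size, box coordinates, and two-class setup (background plus "fracture") are placeholder assumptions, not details from the paper.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Two classes: index 0 is background (implicit), index 1 is "fracture".
# Stand-in backbone; the study used an Inception-ResNet backbone instead.
model = fasterrcnn_resnet50_fpn(weights=None, num_classes=2)
model.train()

# One dummy 3-channel "radiograph" and one ground-truth fracture box
# in [x1, y1, x2, y2] pixel coordinates (all values are placeholders).
image = torch.rand(3, 512, 512)
targets = [{
    "boxes": torch.tensor([[120.0, 140.0, 200.0, 230.0]]),
    "labels": torch.tensor([1]),
}]

# Training step: the model returns RPN and detection-head losses.
loss_dict = model([image], targets)
loss = sum(loss_dict.values())
loss.backward()

# Inference: boxes with confidence scores, ie, localized fracture candidates.
model.eval()
with torch.no_grad():
    pred = model([image])[0]
print(pred["boxes"], pred["scores"], pred["labels"])
```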
Results: The model detected and correctly localized 310 (91.2%) of 340 radius and ulna fractures on frontal views and 236 (96.3%) of 245 on lateral views. The per-image sensitivity, specificity, and AUC were 95.7% (95% confidence interval [CI]: 92.4%, 97.8%), 82.5% (95% CI: 77.4%, 86.8%), and 0.918 (95% CI: 0.894, 0.941), respectively, for the frontal view and 96.7% (95% CI: 93.6%, 98.6%), 86.4% (95% CI: 81.9%, 90.2%), and 0.933 (95% CI: 0.912, 0.954), respectively, for the lateral view. The per-study sensitivity, specificity, and AUC were 98.1% (95% CI: 95.6%, 99.4%), 72.9% (95% CI: 67.1%, 78.2%), and 0.895 (95% CI: 0.870, 0.920), respectively.
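For concreteness, here is a minimal sketch of how per-image and per-study operating statistics like those above could be computed from detector outputs. The 0.5 threshold, the use of the maximum box confidence as an image-level score, and max-pooling of the two views into a study-level score are assumptions for illustration, not rules stated in the abstract.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

def per_image_metrics(y_true, y_score, threshold=0.5):
    """Sensitivity, specificity, and AUC for one view.

    y_true  -- 1 if radiologist consensus marks a fracture on the image
    y_score -- image-level score, eg, the detector's highest box
               confidence on that image (0.0 if no boxes fired)
    """
    y_true = np.asarray(y_true)
    y_pred = (np.asarray(y_score) >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    auc = roc_auc_score(y_true, y_score)
    return sensitivity, specificity, auc

# Per-study pooling (assumed rule): a study is flagged positive if either
# the frontal or the lateral view produces a sufficiently confident box.
def per_study_score(frontal_score, lateral_score):
    return max(frontal_score, lateral_score)

# Toy usage with fabricated placeholder labels and scores:
sens, spec, auc = per_image_metrics([1, 0, 1, 0], [0.9, 0.4, 0.7, 0.6])
print(f"sensitivity={sens:.3f} specificity={spec:.3f} AUC={auc:.3f}")
```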
Conclusion: The ability of an object detection CNN to detect and localize radius and ulna fractures on wrist radiographs with high sensitivity and specificity was demonstrated. © RSNA, 2019.