Department of Diagnostic Radiology, Singapore General Hospital, Block 2 Level 1, Outram Road, Singapore 169608, Singapore.
Radiological Sciences ACP, Duke-NUS Medical School, Singapore, Singapore.
Skeletal Radiol. 2025 Jan;54(1):67-75. doi: 10.1007/s00256-024-04692-6. Epub 2024 May 21.
This study aims to explore the feasibility of employing convolutional neural networks for detecting and localizing implant cutouts on anteroposterior pelvic radiographs.
The research involved the development of two deep learning models. First, a model was trained for image-level classification of implant cutouts using 40,191 pelvic radiographs obtained from a single institution. The radiographs were partitioned into training, validation, and hold-out test datasets in a 6/2/2 ratio. Performance metrics, including the area under the receiver operating characteristic curve (AUROC), sensitivity, and specificity, were calculated on the test dataset. A second object detection model was then trained to localize implant cutouts within the same dataset. Bounding box visualizations were generated on test-set images predicted as cutout-positive by the classification model, serving as an adjunct for assessing algorithm validity.
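The following is a minimal sketch, not the authors' code, of the image-level split and evaluation described above. It assumes a CSV of radiograph paths with binary cutout labels and an already-trained classifier exposed as a probability-returning callable; names such as `labels.csv` and `predict_proba` are illustrative assumptions.

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, average_precision_score, confusion_matrix

df = pd.read_csv("labels.csv")  # assumed columns: image_path, cutout (0/1)

# 6/2/2 split into training, validation, and hold-out test sets,
# stratified so the rare cutout-positive class appears in every split.
train_df, rest_df = train_test_split(
    df, test_size=0.4, stratify=df["cutout"], random_state=0
)
val_df, test_df = train_test_split(
    rest_df, test_size=0.5, stratify=rest_df["cutout"], random_state=0
)

# Stand-in for the trained CNN: any callable mapping image paths to
# predicted cutout probabilities in [0, 1].
def predict_proba(paths):
    return np.random.rand(len(paths))  # placeholder for model inference

y_true = test_df["cutout"].to_numpy()
y_prob = predict_proba(test_df["image_path"])
y_pred = (y_prob >= 0.5).astype(int)  # operating threshold is an assumption

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("sensitivity:", tp / (tp + fn))
print("specificity:", tn / (tn + fp))
print("AUROC:", roc_auc_score(y_true, y_prob))
print("AUPRC:", average_precision_score(y_true, y_prob))
```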
The classification model achieved an accuracy of 99.7%, a sensitivity of 84.6%, a specificity of 99.8%, an AUROC of 0.998 (95% CI: 0.996, 0.999), and an area under the precision-recall curve (AUPRC) of 0.774 (95% CI: 0.646, 0.880). Of the pelvic radiographs predicted as cutout-positive, the object detection model achieved 95.5% localization accuracy on the true-positive images, but generated false results on 14 of the 15 false-positive predictions.
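The abstract does not state how the 95% confidence intervals for AUROC and AUPRC were computed; a common approach is bootstrapping the hold-out test set, sketched below. The resampling count and percentile method are assumptions, not the authors' stated procedure.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_ci(y_true, y_prob, metric=roc_auc_score, n_boot=2000, seed=0):
    """Percentile bootstrap 95% CI for a ranking metric on a test set."""
    rng = np.random.default_rng(seed)
    n = len(y_true)
    scores = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)          # resample test set with replacement
        if len(np.unique(y_true[idx])) < 2:  # skip resamples with a single class
            continue
        scores.append(metric(y_true[idx], y_prob[idx]))
    lo, hi = np.percentile(scores, [2.5, 97.5])
    return lo, hi
```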
The classification model showed fair accuracy for the detection of implant cutouts, while the object detection model effectively localized the cutouts. This serves as a proof of concept for a deep learning-based approach to the classification and localization of implant cutouts on pelvic radiographs.