Squires Steven, Harkness Elaine, Evans Dafydd Gareth, Astley Susan M
University of Manchester, School of Health Sciences, Division of Imaging, Informatics and Data Sciences, Faculty of Biology, Medicine and Health, Manchester, United Kingdom.
University of Manchester, Manchester Academic Health Science Centre, School of Biological Sciences, Division of Evolution, Infection and Genomics, Faculty of Biology, Medicine and Health, Manchester, United Kingdom.
J Med Imaging (Bellingham). 2023 Mar;10(2):024502. doi: 10.1117/1.JMI.10.2.024502. Epub 2023 Apr 5.
Mammographic breast density is one of the strongest risk factors for breast cancer. Density assessed by radiologists using visual analogue scales has been shown to provide better risk predictions than other methods. Our purpose is to build automated models using deep learning, trained on radiologist scores, to make accurate and consistent predictions.
We used a dataset of almost 160,000 mammograms, each with two independent density scores made by expert medical practitioners. We used two pretrained deep networks and adapted them to produce feature vectors, which were then used for both linear and nonlinear regression to make density predictions. We also simulated an "optimal method," which allowed us to compare the quality of our results with a simulated upper bound on performance.
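The pipeline described above (pretrained network as a fixed feature extractor, followed by regression on the resulting feature vectors) can be sketched as below. This is a minimal illustration, not the authors' implementation: the random matrix stands in for deep features a pretrained network would produce, the target stands in for radiologist visual analogue scale (VAS) scores, and the regularization weight `lam` is an arbitrary choice.

```python
import numpy as np

# Stand-in for deep features: in the described approach, each mammogram is
# passed through a pretrained network adapted to emit a fixed-length vector.
rng = np.random.default_rng(0)
n_images, n_features = 500, 64
features = rng.normal(size=(n_images, n_features))

# Stand-in for radiologist VAS density scores (0-100 in practice);
# here synthesized as a noisy linear function of the features.
true_w = rng.normal(size=n_features)
vas_scores = features @ true_w + rng.normal(scale=2.0, size=n_images)

# Linear regression on the feature vectors (ridge, closed form).
lam = 1.0  # regularization weight -- illustrative, not from the paper
A = features.T @ features + lam * np.eye(n_features)
w = np.linalg.solve(A, features.T @ vas_scores)

preds = features @ w
rmse = np.sqrt(np.mean((preds - vas_scores) ** 2))
print(f"training RMSE: {rmse:.2f}")
```

A nonlinear variant, as the abstract mentions, would replace the closed-form linear fit with, e.g., a small multilayer perceptron over the same feature vectors; the feature-extraction step is unchanged, which is what keeps the computational cost modest.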
Our deep learning method produced estimates with a root mean squared error (RMSE) of . The model estimates of cancer risk perform at a similar level to human experts, within uncertainty bounds. We compared different model variants and demonstrated the high consistency of the model predictions. Our modeled "optimal method" produced image predictions with an RMSE between 7.98 and 8.90 for craniocaudal (CC) images.
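For reference, the RMSE used throughout these results is the square root of the mean squared difference between predicted and radiologist-assigned density scores. The arrays below are illustrative values only, not data from the paper:

```python
import numpy as np

# Illustrative predicted vs. observed VAS density scores (not paper data).
predicted = np.array([20.5, 34.0, 41.2, 18.9])
observed = np.array([22.0, 30.0, 45.0, 17.5])

# RMSE: sqrt of the mean squared prediction error.
rmse = np.sqrt(np.mean((predicted - observed) ** 2))
print(f"RMSE: {rmse:.2f}")  # → RMSE: 2.94
```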
We demonstrated a deep learning framework based upon a transfer learning approach to make density estimates based on radiologists' visual scores. Our approach requires modest computational resources and has the potential to be trained with limited quantities of data.