Ionescu Georgia V, Fergie Martin, Berks Michael, Harkness Elaine F, Hulleman Johan, Brentnall Adam R, Cuzick Jack, Evans D Gareth, Astley Susan M
University of Manchester, School of Computer Science, Manchester, United Kingdom.
University of Manchester, Division of Informatics, Imaging and Data Sciences, Faculty of Biology, Medicine and Health, Manchester, United Kingdom.
J Med Imaging (Bellingham). 2019 Jul;6(3):031405. doi: 10.1117/1.JMI.6.3.031405. Epub 2019 Jan 31.
Mammographic density is an important risk factor for breast cancer. In recent research, percentage density assessed visually using visual analogue scales (VAS) showed stronger risk prediction than existing automated density measures, suggesting that readers may recognize relevant image features not yet captured by hand-crafted algorithms. With deep learning, it may be possible to encapsulate this knowledge in an automatic method. We have built convolutional neural networks (CNNs) to predict density VAS scores from full-field digital mammograms. The CNNs are trained on whole-image mammograms, each labeled with the average VAS score of two independent readers. Each CNN learns a mapping between mammographic appearance and VAS score so that, at test time, it can predict the VAS score for an unseen image. Networks were trained on 67,520 mammographic images from 16,968 women, and a separate dataset of 73,128 images was used for model selection. Performance on breast cancer prediction was evaluated on two case-control sets: contralateral mammograms of screen-detected cancers, and prior images of women whose cancers were detected subsequently, with cases matched to controls on age, menopausal status, parity, HRT use, and BMI. In the case-control sets, odds ratios of cancer in the highest versus lowest quintile of percentage density were 2.49 (95% CI: 1.59 to 3.96) for screen-detected cancers and 4.16 (2.53 to 6.82) for priors, with matched concordance indices of 0.587 (0.542 to 0.627) and 0.616 (0.578 to 0.655), respectively. There was no significant difference between reader VAS and predicted VAS for the prior test set (likelihood ratio chi-square test). Our fully automated method shows promising results for cancer risk prediction and is comparable with human performance.
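To make the image-to-score regression described above concrete, the following is a minimal PyTorch sketch, not the authors' architecture or training code. The network layout, input size, 0-100 VAS scale, loss, and optimizer settings are all assumptions, and random tensors stand in for real preprocessed mammograms.

```python
# Minimal sketch (assumed setup, not the paper's model): a CNN regression
# head trained to predict the average reader VAS score from a mammogram.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholder data: random tensors stand in for preprocessed whole-image
# mammograms (1 channel, downsampled) and their average reader VAS scores.
images = torch.randn(64, 1, 224, 224)      # hypothetical preprocessed inputs
vas_labels = torch.rand(64, 1) * 100.0     # VAS scores on a 0-100 scale (assumed)
loader = DataLoader(TensorDataset(images, vas_labels), batch_size=8, shuffle=True)

class VasRegressor(nn.Module):
    """Small CNN mapping a mammogram to a single continuous VAS score."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)  # regression output: predicted VAS

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = VasRegressor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()  # squared error against the average reader score

for epoch in range(2):  # toy loop; real training would run far longer
    for x, y in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
```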
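The quintile odds ratios reported in the abstract can also be illustrated with a small worked example. The sketch below is an assumed analysis, not the paper's code: it takes quintile cut points from the control scores (an assumption) and computes the top-versus-bottom-quintile odds ratio with a Woolf 95% confidence interval; the simulated scores are purely for demonstration.

```python
# Illustrative sketch (assumed analysis): odds ratio of cancer in the
# highest vs lowest quintile of predicted percentage density.
import numpy as np

def quintile_odds_ratio(case_scores, control_scores):
    """OR (with 95% CI) comparing top vs bottom quintile of density scores.
    Quintile cut points are taken from the controls (an assumption here)."""
    q20, q80 = np.percentile(control_scores, [20, 80])
    a = np.sum(case_scores >= q80)      # cases in highest quintile
    b = np.sum(control_scores >= q80)   # controls in highest quintile
    c = np.sum(case_scores <= q20)      # cases in lowest quintile
    d = np.sum(control_scores <= q20)   # controls in lowest quintile
    or_ = (a * d) / (b * c)
    se_log = np.sqrt(1/a + 1/b + 1/c + 1/d)  # Woolf SE of log(OR)
    lo, hi = np.exp(np.log(or_) + np.array([-1.96, 1.96]) * se_log)
    return or_, (lo, hi)

# Toy usage with simulated scores: cases shifted toward higher density.
rng = np.random.default_rng(0)
controls = rng.normal(30, 10, 1000)
cases = rng.normal(36, 10, 1000)
print(quintile_odds_ratio(cases, controls))
```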