Kooi Thijs, Karssemeijer Nico
RadboudUMC Nijmegen, Diagnostic Image Analysis Group, Department of Radiology and Nuclear Medicine, Nijmegen, The Netherlands.
J Med Imaging (Bellingham). 2017 Oct;4(4):044501. doi: 10.1117/1.JMI.4.4.044501. Epub 2017 Oct 10.
We investigate the addition of symmetry and temporal context information to a deep convolutional neural network (CNN) with the purpose of detecting malignant soft tissue lesions in mammography. We employ a simple linear mapping that takes the location of a mass candidate and maps it to either the contralateral or prior mammogram, and regions of interest (ROIs) are extracted around each location. Two different architectures are subsequently explored: (1) a fusion model employing two datastreams, where both ROIs are fed to the network during training and testing, and (2) a stagewise approach, where a single-ROI CNN is trained on the primary image and subsequently used as a feature extractor for both primary and contralateral or prior ROIs. A "shallow" gradient boosted tree classifier is then trained on the concatenation of these features and used to classify the joint representation. The baseline yielded an AUC of 0.87 with confidence interval [0.853, 0.893]. For the analysis of symmetrical differences, the first architecture, where both primary and contralateral patches are presented during training, obtained an AUC of 0.895 with confidence interval [0.877, 0.913], and the second architecture, where a new classifier is retrained on the concatenation, obtained an AUC of 0.88 with confidence interval [0.859, 0.9]. We found a significant difference between the first architecture and the baseline at high specificity with [Formula: see text]. When using the same architectures to analyze temporal change, we obtained an AUC of 0.884 with confidence interval [0.865, 0.902] for the first architecture and an AUC of 0.879 with confidence interval [0.858, 0.898] in the second setting. Although improvements for temporal analysis were consistent, they were not found to be significant. The results show that our proposed method is promising, and we suspect that performance can be greatly improved when more temporal data become available.
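The abstract does not specify the form of the "simple linear mapping"; one plausible instance, assuming the contralateral view is roughly a mirror image of the primary view, is to reflect the candidate's column coordinate across the vertical image axis and crop an ROI around the mapped point. The function names, the mirroring rule, and the toy image below are all illustrative, not the authors' implementation:

```python
def map_to_contralateral(x, y, image_width):
    """Map a mass-candidate location (x, y) in the primary mammogram to the
    contralateral view by mirroring across the vertical image axis.
    (Hypothetical instance of the paper's 'simple linear mapping'.)"""
    return image_width - 1 - x, y

def extract_roi(image, cx, cy, half_size):
    """Crop a square region of interest centred on (cx, cy); the image is a
    list of pixel rows. Boundary handling is omitted for brevity."""
    return [row[cx - half_size:cx + half_size]
            for row in image[cy - half_size:cy + half_size]]

# Toy 8x8 "mammogram" whose pixel value encodes its own (row, column) index,
# so the crop location is easy to verify.
image = [[(r, c) for c in range(8)] for r in range(8)]
mx, my = map_to_contralateral(2, 4, 8)   # -> (5, 4)
roi = extract_roi(image, mx, my, 2)      # 4x4 patch around the mapped point
```

In practice such a mapping would follow a registration step between the two views; the mirror rule here is only the simplest linear stand-in.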
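The stagewise variant (architecture 2) can be sketched as follows: a frozen feature extractor, standing in for the trained single-ROI CNN's penultimate layer, is applied to both the primary and the contralateral (or prior) ROI, and the two feature vectors are concatenated to form the joint representation on which the shallow gradient boosted tree classifier would be trained. The random linear map, the dimensions, and the names are illustrative assumptions, not the paper's network:

```python
import math
import random

random.seed(0)
FEAT_DIM, ROI_DIM = 16, 64

# Frozen random linear map + tanh, a stand-in for the penultimate-layer
# activations of the CNN trained on primary ROIs (hypothetical).
W = [[random.gauss(0, 1) for _ in range(ROI_DIM)] for _ in range(FEAT_DIM)]

def features(roi):
    """Extract a fixed-length feature vector from a flattened ROI."""
    return [math.tanh(sum(w * x for w, x in zip(row, roi))) for row in W]

primary_roi = [random.gauss(0, 1) for _ in range(ROI_DIM)]
contralateral_roi = [random.gauss(0, 1) for _ in range(ROI_DIM)]

# Joint representation: concatenated features from both ROIs. A shallow
# gradient boosted tree classifier (e.g., scikit-learn's
# GradientBoostingClassifier) would then be fit on vectors like this one.
joint = features(primary_roi) + features(contralateral_roi)
```

The design rationale in the abstract is that the single-ROI CNN need only be trained once; the second stage reuses it unchanged, so only the shallow classifier sees the paired data.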