A deep feature fusion methodology for breast cancer diagnosis demonstrated on three imaging modality datasets.
Affiliation
Department of Radiology, University of Chicago, 5841 S Maryland Ave., Chicago, IL, 60637, USA.
Publication information
Med Phys. 2017 Oct;44(10):5162-5171. doi: 10.1002/mp.12453. Epub 2017 Aug 12.
BACKGROUND
Deep learning methods for radiomics/computer-aided diagnosis (CADx) are often hindered by small datasets, long computation times, and the need for extensive image preprocessing.
AIMS
We aim to develop a breast CADx methodology that addresses the aforementioned issues by exploiting the efficiency of pre-trained convolutional neural networks (CNNs) and using pre-existing handcrafted CADx features.
MATERIALS & METHODS
We present a methodology that extracts and pools low- to mid-level features using a pretrained CNN and fuses them with handcrafted radiomic features computed using conventional CADx methods. Our methodology is tested on three clinical imaging modalities: dynamic contrast-enhanced MRI (DCE-MRI; 690 cases), full-field digital mammography (FFDM; 245 cases), and ultrasound (1,125 cases).
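A minimal sketch of the fusion idea described above, under stated assumptions: the backbone (torchvision's pretrained VGG16), the global-average pooling of each convolutional stage, and the linear SVM classifier are illustrative choices only, since the abstract does not specify the network, pooling scheme, or classifier.

```python
# Hypothetical sketch of the fusion pipeline: pooled features from a fixed,
# pretrained CNN are concatenated with handcrafted radiomic features and fed
# to a conventional classifier. Backbone, pooling, and classifier choices here
# are assumptions for illustration, not the authors' exact configuration.
import numpy as np
import torch
import torchvision.models as models

# Pretrained CNN used purely as a fixed feature extractor (no fine-tuning).
backbone = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features.eval()

def cnn_features(image_tensor):
    """Pool low- to mid-level feature maps from each convolutional stage.

    image_tensor: (1, 3, H, W) float tensor, ImageNet-normalized.
    Returns a 1-D numpy vector of spatially averaged channel activations.
    """
    pooled = []
    x = image_tensor
    with torch.no_grad():
        for layer in backbone:
            x = layer(x)
            if isinstance(layer, torch.nn.MaxPool2d):        # end of a conv stage
                pooled.append(x.mean(dim=(2, 3)).squeeze(0))  # global average pool
    return torch.cat(pooled).numpy()

def fused_features(image_tensor, handcrafted):
    """Concatenate CNN-pooled features with handcrafted radiomic features."""
    return np.concatenate([cnn_features(image_tensor),
                           np.asarray(handcrafted, dtype=float)])

# Example: train a classifier on the fused features (1 = malignant, 0 = benign).
# from sklearn.svm import SVC
# X = np.stack([fused_features(img, rad) for img, rad in cases])
# clf = SVC(kernel="linear", probability=True).fit(X, labels)
```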
RESULTS
From ROC analysis, our fusion-based method demonstrates, on all three imaging modalities, statistically significant improvements in AUC over previous breast cancer CADx methods in the task of distinguishing malignant from benign lesions (DCE-MRI: AUC = 0.89 [se = 0.01]; FFDM: AUC = 0.86 [se = 0.01]; ultrasound: AUC = 0.90 [se = 0.01]).
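The abstract does not state the ROC methodology or software used to obtain the AUC values and standard errors; the following minimal sketch shows one common way to estimate an empirical AUC and a bootstrap standard error, with hypothetical arrays y_true and y_score standing in for lesion labels and classifier outputs.

```python
# Illustrative only: empirical AUC with a bootstrap standard error.
# y_true / y_score are hypothetical placeholders; the variance estimator in
# the paper may differ from this bootstrap approach.
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_with_bootstrap_se(y_true, y_score, n_boot=2000, seed=0):
    """Return (AUC, bootstrap standard error) for binary labels and scores."""
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    rng = np.random.default_rng(seed)
    auc = roc_auc_score(y_true, y_score)
    boot, n = [], len(y_true)
    while len(boot) < n_boot:
        idx = rng.integers(0, n, n)          # resample cases with replacement
        if len(np.unique(y_true[idx])) < 2:  # need both classes for an AUC
            continue
        boot.append(roc_auc_score(y_true[idx], y_score[idx]))
    return auc, float(np.std(boot, ddof=1))
```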
DISCUSSION/CONCLUSION
We proposed a novel breast CADx methodology that characterizes breast lesions more effectively than existing methods. Furthermore, the proposed methodology is computationally efficient and circumvents the need for extensive image preprocessing.