Tang Xiaowen, Zhu Yinsu
Department of Radiology, The Affiliated Cancer Hospital of Nanjing Medical University Jiangsu Cancer Hospital, Jiangsu Institute of Cancer Research, 42 Baiziting, Nanjing, Jiangsu Province, 210009, People's Republic of China.
Biomed Phys Eng Express. 2025 Feb 25;11(2). doi: 10.1088/2057-1976/adb494.
Accurate identification of molecular subtypes in breast cancer is critical for personalized treatment. This study introduces a novel neural network model, RAE-Net, based on Multimodal Feature Fusion (MFF) and the Evidential Deep Learning Algorithm (EDLA), to improve breast cancer subtype prediction using dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI).

A dataset of 344 patients with histologically confirmed breast cancer was divided into training (n = 200), validation (n = 60), and testing (n = 62) cohorts. RAE-Net, built on ResNet-50 with Multi-Head Attention (MHA) fusion and Multi-Layer Perceptron (MLP) mechanisms, combines radiomic and deep learning features for subtype prediction. The EDLA module adds uncertainty estimation to enhance classification reliability.

The RAE-Net model incorporating the MFF module demonstrated superior performance, achieving a mean accuracy of 0.83 and a Macro-F1 score of 0.78, surpassing traditional radiomics models (accuracy: 0.79, Macro-F1: 0.75) and standalone deep learning models (accuracy: 0.80, Macro-F1: 0.76). When an EDLA uncertainty threshold of 0.2 was applied, performance improved significantly, with accuracy reaching 0.97 and Macro-F1 increasing to 0.92. Additionally, RAE-Net outperformed two recent deep learning networks, ResGANet and HIFUSE. Specifically, RAE-Net showed a 0.5% improvement in accuracy and a higher AUC compared with ResGANet. Compared with HIFUSE, RAE-Net reduced both the number of parameters and the computational cost by 90% while increasing computation time by only 5.7%.

RAE-Net integrates feature fusion and uncertainty estimation to predict breast cancer subtypes from DCE-MRI. The model achieves high accuracy while maintaining computational efficiency, demonstrating its potential for clinical use as a reliable and resource-efficient diagnostic tool.
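To illustrate the uncertainty-thresholding step described above, the following is a minimal sketch of how an evidential deep learning head might convert class logits into Dirichlet-based probabilities and an uncertainty score, rejecting predictions whose uncertainty exceeds 0.2. The softplus evidence function and the function name are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def edl_predict(logits, threshold=0.2):
    """Sketch of evidential classification with an uncertainty cutoff.

    logits: array of shape (n_samples, n_classes).
    Returns expected probabilities, per-sample uncertainty, and an
    accept mask (True where uncertainty <= threshold).
    """
    # Non-negative evidence via softplus (one common choice in EDL work).
    evidence = np.log1p(np.exp(logits))
    alpha = evidence + 1.0                     # Dirichlet parameters
    strength = alpha.sum(axis=-1, keepdims=True)
    prob = alpha / strength                    # expected class probabilities
    n_classes = logits.shape[-1]
    # Vacuity-style uncertainty: K / S, close to 1 when evidence is scarce.
    uncertainty = (n_classes / strength).squeeze(-1)
    accept = uncertainty <= threshold
    return prob, uncertainty, accept

# Hypothetical 4-class example (e.g., molecular subtypes): one confident
# sample with strong evidence for class 0, one uninformative sample.
logits = np.array([[20.0, 0.0, 0.0, 0.0],
                   [0.0, 0.0, 0.0, 0.0]])
prob, u, accept = edl_predict(logits)
```

Under this scheme, only predictions the model supports with enough evidence survive the 0.2 cutoff, which is consistent with the reported accuracy gain when low-confidence cases are filtered out.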