Osapoetra Laurentius Oscar, Moslemi Amir, Moore-Palhares Daniel, Halstead Schontal, Alberico David, Hwang Alexander, Sannachi Lakshmanan, Curpen Belinda, Czarnota Gregory J
Physical Sciences, Sunnybrook Research Institute, Toronto, Canada.
Department of Radiation Oncology, Sunnybrook Health Sciences Centre, 2075 Bayview Avenue, Suite T2-167, Toronto, ON, M4N 3M5, Canada.
Sci Rep. 2025 Sep 25;15(1):32805. doi: 10.1038/s41598-025-15772-5.
Quantitative ultrasound (QUS) spectral parametric imaging offers a fast and accurate method for breast lesion characterization. This study explored the use of deep convolutional neural networks (CNNs) to classify breast lesions from QUS spectral parametric images, aiming to improve on radiomics and conventional machine learning. Predictive models were developed using transfer learning with pre-trained CNNs to distinguish malignant from benign lesions. The dataset included 276 participants: 184 malignant cases (median age, 51 years [IQR: 27-81 years]) and 92 benign cases (median age, 46 years [IQR: 18-75 years]). QUS spectral parametric imaging was applied to the ultrasound radiofrequency (RF) data, yielding 1764 parametric images of QUS spectral parameters (midband fit [MBF], spectral slope [SS], and spectral intercept [SI]) and QUS scattering parameters (average scatterer diameter [ASD] and average acoustic concentration [AAC]). The data were randomly split into 60% training, 20% validation, and 20% test sets, stratified by lesion subtype, and the split was repeated five times. The number of convolutional blocks was optimized, and the final convolutional layer was fine-tuned. Models tested included ResNet, Inception-v3, Xception, and EfficientNet. Xception-41 achieved a recall of 86 ± 3%, specificity of 87 ± 5%, balanced accuracy of 87 ± 3%, and an AUC of 0.93 ± 0.02 on the test sets. EfficientNetV2-M showed similar performance, with a recall of 91 ± 1%, specificity of 81 ± 7%, balanced accuracy of 86 ± 3%, and an AUC of 0.92 ± 0.02. The CNN models outperformed radiomics and conventional machine learning (p-values < 0.05). This study demonstrated the capability of end-to-end CNN-based models for the accurate characterization of breast masses from QUS spectral parametric images.
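The repeated, stratified 60/20/20 partitioning described above can be sketched in plain NumPy. This is an illustrative reconstruction only, not the authors' code: the `stratified_split` helper and the class balance (184 malignant, 92 benign, as reported in the abstract) are assumptions made for the example.

```python
import numpy as np

def stratified_split(labels, train=0.6, val=0.2, seed=0):
    """Stratified 60/20/20 split: permute and partition each class
    separately so every subset preserves the class proportions."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    idx_train, idx_val, idx_test = [], [], []
    for cls in np.unique(labels):
        idx = rng.permutation(np.flatnonzero(labels == cls))
        n_tr = int(round(train * len(idx)))
        n_va = int(round(val * len(idx)))
        idx_train.extend(idx[:n_tr])
        idx_val.extend(idx[n_tr:n_tr + n_va])
        idx_test.extend(idx[n_tr + n_va:])
    return np.array(idx_train), np.array(idx_val), np.array(idx_test)

# Hypothetical labels mirroring the cohort: 184 malignant (1), 92 benign (0).
y = np.array([1] * 184 + [0] * 92)

# Repeat the random split five times, as in the study design;
# each repetition keeps the ~2:1 malignant-to-benign ratio in every subset.
splits = [stratified_split(y, seed=s) for s in range(5)]
```

Stratifying per class (rather than splitting the pooled indices) is what guarantees that the minority benign class is represented in the same proportion in the training, validation, and test sets of every repetition.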