Afsana Ahsan Jeny, Sahand Hamzehei, Annie Jin, Stephen Andrew Baker, Tucker Van Rathe, Jun Bai, Clifford Yang, Sheida Nabavi
School of Computing, University of Connecticut, Storrs, Connecticut, USA.
Department of Radiology, UConn Health, Farmington, Connecticut, USA.
Med Phys. 2025 May;52(5):2999-3014. doi: 10.1002/mp.17650. Epub 2025 Jan 30.
BACKGROUND: Breast cancer screening via mammography plays a crucial role in early detection, significantly impacting women's health outcomes worldwide. However, manual analysis of mammographic images is time-consuming and requires specialized expertise, presenting substantial challenges in medical practice.

PURPOSE: To address these challenges, we introduce a CNN-Transformer model tailored for breast cancer classification from mammograms. The model leverages both prior and current images to monitor temporal changes, aiming to improve the efficiency and accuracy (ACC) of computer-aided diagnosis systems by mimicking the detailed examination process of radiologists.

METHODS: Our proposed model incorporates a novel integration of a position-wise feedforward network and multi-head self-attention, enabling it to detect abnormal or cancerous changes in mammograms over time. In addition, the model employs positional encoding and channel attention to accurately highlight critical spatial features and thereby differentiate between normal and cancerous tissues. We use focal loss (FL) to focus training on instances that are difficult to classify, reducing false negatives and false positives to improve diagnostic ACC.

RESULTS: We compared our model with eight baseline models in terms of ACC, sensitivity (SEN), precision (PRE), specificity (SPE), F1 score, and area under the curve (AUC); the single-image baseline ResNet50 used only current images, while the remaining models used both prior and current images. The results demonstrate that the proposed model outperforms the baselines, achieving an ACC of 90.80%, SEN of 90.80%, PRE of 90.80%, SPE of 90.88%, an F1 score of 90.95%, and an AUC of 92.58%. The code and related information are available at https://github.com/NabaviLab/PCTM.
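The methods describe multi-head self-attention as a core component of the model. As a hedged illustration only (not the paper's implementation, whose layer sizes and head counts are in the linked repository), a single head of scaled dot-product attention can be sketched in plain Python; the function names and toy inputs here are assumptions for exposition:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(Q, K, V):
    """One head of scaled dot-product attention over lists of vectors.

    Q, K, V are lists of equal-length vectors; each query attends over
    all keys, and the output is the attention-weighted sum of values.
    """
    d = len(K[0])
    out = []
    for q in Q:
        # Similarity of this query to every key, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        w = softmax(scores)
        # Weighted combination of the value vectors.
        out.append([sum(wi * v[j] for wi, v in zip(w, V)) for j in range(len(V[0]))])
    return out
```

In a multi-head layer, several such heads run in parallel on learned linear projections of the input and their outputs are concatenated; in this model the queries/keys/values would come from CNN feature maps of the prior and current mammograms, letting the attention weights express temporal change.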
CONCLUSIONS: Our proposed CNN-Transformer model integrates both prior and current images, mitigates long-range dependency limitations, and enhances its capability for nuanced classification. The application of FL reduces the false positive rate (FPR) and false negative rate (FNR), improving both SEN and SPE. Furthermore, the model achieves the lowest false discovery rate and FNR across various abnormalities, including masses, calcifications, and architectural distortions (ADs). These low error rates highlight the model's reliability and underscore its potential to improve early breast cancer detection in clinical practice.
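The focal loss credited above with lowering the FPR and FNR has a standard form, FL(p_t) = -alpha_t (1 - p_t)^gamma log(p_t), which down-weights well-classified examples so training concentrates on hard cases. A minimal per-example sketch (the alpha/gamma defaults are common conventions, not values reported by the paper):

```python
import math

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Binary focal loss for a single prediction.

    p: predicted probability of the positive class, in (0, 1).
    y: ground-truth label, 0 or 1.
    alpha balances the classes; gamma down-weights easy examples.
    """
    p_t = p if y == 1 else 1.0 - p          # probability of the true class
    a_t = alpha if y == 1 else 1.0 - alpha  # class-balancing weight
    return -a_t * (1.0 - p_t) ** gamma * math.log(p_t)
```

Because of the (1 - p_t)^gamma modulating factor, a confidently correct prediction (p_t near 1) contributes almost nothing, while a misclassified tumor (p_t small) dominates the gradient, which is the mechanism behind the reduced FNR on rare abnormalities such as ADs.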