Hybrid transformer-based model for mammogram classification by integrating prior and current images.

Author information

Jeny Afsana Ahsan, Hamzehei Sahand, Jin Annie, Baker Stephen Andrew, Van Rathe Tucker, Bai Jun, Yang Clifford, Nabavi Sheida

Affiliations

School of Computing, University of Connecticut, Storrs, Connecticut, USA.

Department of Radiology, UConn Health, Farmington, Connecticut, USA.

Publication information

Med Phys. 2025 May;52(5):2999-3014. doi: 10.1002/mp.17650. Epub 2025 Jan 30.


DOI: 10.1002/mp.17650
PMID: 39887755
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC12082763/
Abstract

BACKGROUND: Breast cancer screening via mammography plays a crucial role in early detection, significantly impacting women's health outcomes worldwide. However, the manual analysis of mammographic images is time-consuming and requires specialized expertise, presenting substantial challenges in medical practice.

PURPOSE: To address these challenges, we introduce a CNN-Transformer based model tailored for breast cancer classification through mammographic analysis. This model leverages both prior and current images to monitor temporal changes, aiming to enhance the efficiency and accuracy (ACC) of computer-aided diagnosis systems by mimicking the detailed examination process of radiologists.

METHODS: In this study, our proposed model incorporates a novel integration of a position-wise feedforward network and multi-head self-attention, enabling it to detect abnormal or cancerous changes in mammograms over time. Additionally, the model employs positional encoding and channel attention methods to accurately highlight critical spatial features, thus precisely differentiating between normal and cancerous tissues. Our methodology utilizes focal loss (FL) to precisely address challenging instances that are difficult to classify, reducing false negatives and false positives to improve diagnostic ACC.

RESULTS: We compared our model with eight baseline models; specifically, we utilized only current images for the single model ResNet50 while employing both prior and current images for the remaining models in terms of accuracy (ACC), sensitivity (SEN), precision (PRE), specificity (SPE), F1 score, and area under the curve (AUC). The results demonstrate that the proposed model outperforms the baseline models, achieving an ACC of 90.80%, SEN of 90.80%, PRE of 90.80%, SPE of 90.88%, an F1 score of 90.95%, and an AUC of 92.58%. The codes and related information are available at https://github.com/NabaviLab/PCTM.
CONCLUSIONS: Our proposed CNN-Transformer model integrates both prior and current images, removes long-range dependencies, and enhances its capability for nuanced classification. The application of FL reduces false positive rate (FPR) and false negative rates (FNR), improving both SEN and SPE. Furthermore, the model achieves the lowest false discovery rate and FNR across various abnormalities, including masses, calcification, and architectural distortions (ADs). These low error rates highlight the model's reliability and underscore its potential to improve early breast cancer detection in clinical practice.
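The temporal-change modeling described in METHODS rests on multi-head self-attention over features drawn from the prior and current images. The following is a minimal NumPy sketch of that mechanism; the token count, embedding size, head count, and random weights are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def multi_head_self_attention(x, wq, wk, wv, wo, heads):
    """Scaled dot-product self-attention over a token sequence.

    x : (T, D) tokens, e.g. patch features from the prior and current mammograms
    """
    T, D = x.shape
    dh = D // heads
    q = (x @ wq).reshape(T, heads, dh).transpose(1, 0, 2)   # (H, T, dh)
    k = (x @ wk).reshape(T, heads, dh).transpose(1, 0, 2)
    v = (x @ wv).reshape(T, heads, dh).transpose(1, 0, 2)
    att = softmax(q @ k.transpose(0, 2, 1) / np.sqrt(dh))   # (H, T, T) weights
    out = (att @ v).transpose(1, 0, 2).reshape(T, D)        # concatenate heads
    return out @ wo                                         # output projection

rng = np.random.default_rng(0)
T, D, H = 6, 16, 4   # e.g. 3 patch tokens per image, two time points
x = rng.standard_normal((T, D))
wq, wk, wv, wo = (rng.standard_normal((D, D)) * 0.1 for _ in range(4))
y = multi_head_self_attention(x, wq, wk, wv, wo, heads=H)
```

Because every token attends to every other, features from the prior image can directly reweight features from the current one, which is what lets an attention-based model compare the two examinations.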
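The channel-attention step in METHODS reweights feature channels by their global importance. A squeeze-and-excitation style sketch in NumPy follows; the channel count, reduction ratio, and random weights are assumptions for illustration, not the paper's configuration:

```python
import numpy as np

def channel_attention(x, w1, w2):
    """Squeeze-and-excitation style channel attention.

    x  : feature map of shape (C, H, W)
    w1 : (C, C//r) squeeze weights; w2 : (C//r, C) excite weights
    Returns x with each channel scaled by a gate in (0, 1).
    """
    squeeze = x.mean(axis=(1, 2))                  # global average pool -> (C,)
    hidden = np.maximum(squeeze @ w1, 0.0)         # ReLU bottleneck
    scale = 1.0 / (1.0 + np.exp(-(hidden @ w2)))   # sigmoid gate -> (C,)
    return x * scale[:, None, None]                # reweight each channel

rng = np.random.default_rng(0)
C, r = 8, 2
x = rng.standard_normal((C, 4, 4))
out = channel_attention(x,
                        rng.standard_normal((C, C // r)),
                        rng.standard_normal((C // r, C)))
```

Since the gate is a per-channel sigmoid, informative channels pass through nearly unchanged while uninformative ones are suppressed, which is one way to "highlight critical spatial features" as the abstract describes.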
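The focal loss used in METHODS down-weights well-classified examples so training concentrates on hard cases. A minimal binary-classification sketch in NumPy is below; the γ=2, α=0.25 defaults follow the original focal-loss paper and are not necessarily the settings used here:

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25, eps=1e-7):
    """Binary focal loss: FL(p_t) = -alpha_t * (1 - p_t)**gamma * log(p_t).

    p : predicted probability of the positive (cancerous) class
    y : ground-truth label, 0 or 1
    """
    p = np.clip(p, eps, 1 - eps)
    p_t = np.where(y == 1, p, 1 - p)              # probability of the true class
    alpha_t = np.where(y == 1, alpha, 1 - alpha)  # class-balance weight
    return -alpha_t * (1 - p_t) ** gamma * np.log(p_t)

# A confident correct prediction contributes far less loss than a hard one,
# so gradients focus on the difficult, easily-misclassified mammograms.
easy = focal_loss(np.array([0.95]), np.array([1]))
hard = focal_loss(np.array([0.30]), np.array([1]))
```

With γ = 0 and α = 0.5 this reduces to (half of) ordinary cross-entropy; increasing γ sharpens the down-weighting of easy examples, which is how FL reduces the false positives and false negatives the abstract mentions.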

Similar articles

[1]
Hybrid transformer-based model for mammogram classification by integrating prior and current images.

Med Phys. 2025-5

[2]
ViT-MAENB7: An innovative breast cancer diagnosis model from 3D mammograms using advanced segmentation and classification process.

Comput Methods Programs Biomed. 2024-12

[3]
Feature fusion Siamese network for breast cancer detection comparing current and prior mammograms.

Med Phys. 2022-6

[4]
Enhanced Pneumonia Detection in Chest X-Rays Using Hybrid Convolutional and Vision Transformer Networks.

Curr Med Imaging. 2025

[5]
MammoViT: A Custom Vision Transformer Architecture for Accurate BIRADS Classification in Mammogram Analysis.

Diagnostics (Basel). 2025-1-25

[6]
Enhanced breast mass segmentation in mammograms using a hybrid transformer UNet model.

Comput Biol Med. 2025-1

[7]
Early detection and classification of abnormality in prior mammograms using image-to-image translation and YOLO techniques.

Comput Methods Programs Biomed. 2022-6

[8]
Brain tumor segmentation and detection in MRI using convolutional neural networks and VGG16.

Cancer Biomark. 2025-3

[9]
Convolutional neural network for automated mass segmentation in mammography.

BMC Bioinformatics. 2020-12-9

[10]
Segmentation for mammography classification utilizing deep convolutional neural network.

BMC Med Imaging. 2024-12-18

Cited by

[1]
Innovative Multi-View Strategies for AI-Assisted Breast Cancer Detection in Mammography.

J Imaging. 2025-7-22

