
A VGG attention vision transformer network for benign and malignant classification of breast ultrasound images.

Affiliations

School of Instrumentation and Optoelectronics Engineering, Beihang University, Beijing, China.

School of Computer Science and Engineering, Beihang University, Beijing, China.

Publication Information

Med Phys. 2022 Sep;49(9):5787-5798. doi: 10.1002/mp.15852. Epub 2022 Jul 30.

DOI: 10.1002/mp.15852
PMID: 35866492
Abstract

PURPOSE

Breast cancer is the most commonly occurring cancer worldwide. The ultrasound reflectivity imaging technique can be used to obtain breast ultrasound (BUS) images, which can be used to classify benign and malignant tumors. However, this classification is subjective and depends on the experience and skill of operators and doctors. Automatic classification methods can assist doctors and improve objectivity, but current convolutional neural networks (CNNs) are not good at learning global features, while vision transformers (ViTs) are not good at extracting local features. In this study, we proposed a visual geometry group attention ViT (VGGA-ViT) network to overcome their respective disadvantages.

METHODS

In the proposed method, we used a CNN module to extract the local features and employed a ViT module to learn the global relationship among different regions and enhance the relevant local features. The CNN module was named the VGGA module. It was composed of a VGG backbone, a feature extraction fully connected layer, and a squeeze-and-excitation block. Both the VGG backbone and the ViT module were pretrained on the ImageNet dataset and retrained using BUS samples in this study. Two BUS datasets were employed for validation.
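The squeeze-and-excitation block named in the methods is a standard channel-attention component: global average pooling, a bottleneck of two fully connected layers, and a sigmoid that rescales each channel. Below is a minimal numpy sketch of that component only, not the authors' implementation; all shapes, weight matrices, and the reduction ratio are illustrative assumptions.

```python
import numpy as np

def squeeze_and_excitation(feature_map, w1, w2):
    """Channel attention over a (C, H, W) feature map (illustrative sketch).

    Squeeze:    global average pooling -> per-channel descriptor (C,)
    Excitation: FC reduction + ReLU, then FC expansion + sigmoid,
                producing per-channel weights in (0, 1).
    """
    z = feature_map.mean(axis=(1, 2))           # squeeze: (C,)
    s = np.maximum(w1 @ z, 0.0)                 # FC + ReLU: (C/r,)
    s = 1.0 / (1.0 + np.exp(-(w2 @ s)))         # FC + sigmoid: (C,)
    return feature_map * s[:, None, None]       # rescale each channel

# Illustrative shapes: C=8 channels, reduction ratio r=4
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 5, 5))
w1 = rng.standard_normal((2, 8)) * 0.1          # reduction weights (C/r, C)
w2 = rng.standard_normal((8, 2)) * 0.1          # expansion weights (C, C/r)
y = squeeze_and_excitation(x, w1, w2)
print(y.shape)
```

Because the sigmoid output lies in (0, 1), the block can only attenuate channels, never amplify them; the network learns which channels to suppress.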

RESULTS

Cross-validation was conducted on two BUS datasets. For Dataset A, the proposed VGGA-ViT network achieved high accuracy (88.71 ± 1.55%), recall (90.73 ± 1.57%), specificity (85.58 ± 3.35%), precision (90.77 ± 1.98%), F1 score (90.73 ± 1.24%), and Matthews correlation coefficient (MCC) (76.34 ± 3.29%), all better than those of the previous networks compared in this study. Dataset B was used as a separate test set; the test results showed that the VGGA-ViT had the highest accuracy (81.72 ± 2.99%), recall (64.45 ± 2.96%), specificity (90.28 ± 3.51%), precision (77.08 ± 7.21%), F1 score (70.11 ± 4.25%), and MCC (57.64 ± 6.88%).
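All six reported metrics derive from a binary confusion matrix (malignant = positive class). As a reference for how they relate, here is a pure-Python sketch; the confusion-matrix counts are illustrative, not the paper's data.

```python
import math

def classification_metrics(tp, fn, tn, fp):
    """Metrics reported in the abstract, from a binary confusion matrix."""
    accuracy    = (tp + tn) / (tp + tn + fp + fn)
    recall      = tp / (tp + fn)                 # sensitivity
    specificity = tn / (tn + fp)
    precision   = tp / (tp + fp)
    f1  = 2 * precision * recall / (precision + recall)
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return accuracy, recall, specificity, precision, f1, mcc

# Illustrative counts only
acc, rec, spec, prec, f1, mcc = classification_metrics(tp=90, fn=10, tn=80, fp=20)
print(f"accuracy={acc:.3f} recall={rec:.3f} specificity={spec:.3f}")
print(f"precision={prec:.3f} F1={f1:.3f} MCC={mcc:.3f}")
```

The MCC is the strictest of the six: it uses all four confusion-matrix cells, which explains why it drops furthest on the imbalanced Dataset B even when accuracy stays high.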

CONCLUSIONS

In this study, we proposed the VGGA-ViT for BUS classification, which is good at learning both local and global features. The proposed network achieved higher accuracy than the previous methods compared.


Similar Articles

1. A VGG attention vision transformer network for benign and malignant classification of breast ultrasound images.
   Med Phys. 2022 Sep;49(9):5787-5798. doi: 10.1002/mp.15852. Epub 2022 Jul 30.
2. Fus2Net: a novel Convolutional Neural Network for classification of benign and malignant breast tumor in ultrasound images.
   Biomed Eng Online. 2021 Nov 18;20(1):112. doi: 10.1186/s12938-021-00950-z.
3. An attention-supervised full-resolution residual network for the segmentation of breast ultrasound images.
   Med Phys. 2020 Nov;47(11):5702-5714. doi: 10.1002/mp.14470. Epub 2020 Oct 6.
4. Squeeze-and-excitation-attention-based mobile vision transformer for grading recognition of bladder prolapse in pelvic MRI images.
   Med Phys. 2024 Aug;51(8):5236-5249. doi: 10.1002/mp.17171. Epub 2024 May 20.
5. Deep learning-based immunohistochemical estimation of breast cancer via ultrasound image applications.
   Front Oncol. 2024 Jan 9;13:1263685. doi: 10.3389/fonc.2023.1263685. eCollection 2023.
6. A deep supervised transformer U-shaped full-resolution residual network for the segmentation of breast ultrasound image.
   Med Phys. 2023 Dec;50(12):7513-7524. doi: 10.1002/mp.16765. Epub 2023 Oct 10.
7. Distilling Knowledge From an Ensemble of Vision Transformers for Improved Classification of Breast Ultrasound.
   Acad Radiol. 2024 Jan;31(1):104-120. doi: 10.1016/j.acra.2023.08.006. Epub 2023 Sep 2.
8. Classification of multi-feature fusion ultrasound images of breast tumor within category 4 using convolutional neural networks.
   Med Phys. 2024 Jun;51(6):4243-4257. doi: 10.1002/mp.16946. Epub 2024 Mar 4.
9. BUS-Set: A benchmark for quantitative evaluation of breast ultrasound segmentation networks with public datasets.
   Med Phys. 2023 May;50(5):3223-3243. doi: 10.1002/mp.16287. Epub 2023 Feb 28.
10. Spatial and geometric learning for classification of breast tumors from multi-center ultrasound images: a hybrid learning approach.
    BMC Med Imaging. 2024 Jun 5;24(1):133. doi: 10.1186/s12880-024-01307-3.

Cited By

1. TMAN: A Triple Morphological Feature Attention Network for Fine-Grained Classification of Breast Ultrasound Images.
   J Imaging Inform Med. 2025 Apr 8. doi: 10.1007/s10278-025-01496-5.
2. NMTNet: A Multi-task Deep Learning Network for Joint Segmentation and Classification of Breast Tumors.
   J Imaging Inform Med. 2025 Feb 19. doi: 10.1007/s10278-025-01440-7.
3. Classifying the molecular subtype of breast cancer using vision transformer and convolutional neural network features.
   Breast Cancer Res Treat. 2025 Apr;210(3):771-782. doi: 10.1007/s10549-025-07614-9. Epub 2025 Jan 22.
4. Breast Ultrasound Tumor Classification Using a Hybrid Multitask CNN-Transformer Network.
   Med Image Comput Comput Assist Interv. 2023 Oct;14223:344-353. doi: 10.1007/978-3-031-43901-8_33. Epub 2023 Oct 1.
5. Auditory Brainstem Response Data Preprocessing Method for the Automatic Classification of Hearing Loss Patients.
   Diagnostics (Basel). 2023 Nov 27;13(23):3538. doi: 10.3390/diagnostics13233538.
6. Rapid Segmentation and Diagnosis of Breast Tumor Ultrasound Images at the Sonographer Level Using Deep Learning.
   Bioengineering (Basel). 2023 Oct 19;10(10):1220. doi: 10.3390/bioengineering10101220.
7. ETECADx: Ensemble Self-Attention Transformer Encoder for Breast Cancer Diagnosis Using Full-Field Digital X-ray Breast Images.
   Diagnostics (Basel). 2022 Dec 28;13(1):89. doi: 10.3390/diagnostics13010089.
8. A Hybrid Workflow of Residual Convolutional Transformer Encoder for Breast Cancer Classification Using Digital X-ray Mammograms.
   Biomedicines. 2022 Nov 18;10(11):2971. doi: 10.3390/biomedicines10112971.