

Attention-augmented U-Net (AA-U-Net) for semantic segmentation.

Authors

Rajamani Kumar T, Rani Priya, Siebert Hanna, ElagiriRamalingam Rajkumar, Heinrich Mattias P

Affiliations

Philips Research, Bangalore, India.

Applied Artificial Intelligence Institute, Deakin University, Burwood, VIC 3125, Australia.

Publication

Signal Image Video Process. 2023;17(4):981-989. doi: 10.1007/s11760-022-02302-3. Epub 2022 Jul 25.

DOI: 10.1007/s11760-022-02302-3
PMID: 35910403
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9311338/
Abstract

Deep learning-based image segmentation models rely strongly on capturing sufficient spatial context without requiring complex models that are hard to train with limited labeled data. For COVID-19 infection segmentation on CT images, training data are currently scarce. Attention models, in particular the most recent self-attention methods, have been shown to help gather contextual information within deep networks and benefit semantic segmentation tasks. The recent attention-augmented convolution model aims to capture long-range interactions by concatenating self-attention and convolution feature maps. This work proposes a novel attention-augmented convolution U-Net (AA-U-Net) that enables a more accurate spatial aggregation of contextual information by integrating attention-augmented convolution in the bottleneck of an encoder-decoder segmentation architecture. A deep segmentation network (U-Net) with this attention mechanism significantly improves performance on the challenging task of COVID-19 lesion segmentation. The validation experiments show that the performance gain of the attention-augmented U-Net comes from its ability to capture dynamic and precise (wider) attention context. The AA-U-Net achieves Dice scores of 72.3% and 61.4% for ground-glass opacity and consolidation lesions in COVID-19 segmentation and improves accuracy by 4.2 percentage points over a baseline U-Net and by 3.09 percentage points over a baseline U-Net with matched parameters.
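The core idea described above, concatenating self-attention feature maps with convolution feature maps at a bottleneck, can be sketched in NumPy. This is a minimal single-head illustration, not the paper's implementation: the 1x1 convolution (a per-pixel projection), the weight shapes, and the channel split between convolution and attention are all illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_augmented_conv(x, w_conv, w_q, w_k, w_v):
    """Concatenate a 1x1 convolution with single-head self-attention.

    x: (H, W, C_in) feature map. Weight matrices project channels:
    w_conv -> conv output channels, w_q/w_k/w_v -> attention head dims.
    """
    h, w, c_in = x.shape
    flat = x.reshape(h * w, c_in)            # flatten spatial positions

    conv_out = flat @ w_conv                 # 1x1 conv = per-pixel projection
    q, k, v = flat @ w_q, flat @ w_k, flat @ w_v
    d_k = w_q.shape[1]
    attn = softmax(q @ k.T / np.sqrt(d_k))   # (HW, HW): every position attends
    attn_out = attn @ v                      # to every other (long-range context)

    out = np.concatenate([conv_out, attn_out], axis=-1)
    return out.reshape(h, w, -1)

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8, 16))          # toy bottleneck feature map
w_conv = rng.standard_normal((16, 24))
w_q = rng.standard_normal((16, 8))
w_k = rng.standard_normal((16, 8))
w_v = rng.standard_normal((16, 8))
y = attention_augmented_conv(x, w_conv, w_q, w_k, w_v)
print(y.shape)  # (8, 8, 32): 24 conv channels + 8 attention channels
```

Placing this block only at the bottleneck keeps the (HW x HW) attention matrix affordable, since the spatial resolution there is smallest.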

SUPPLEMENTARY INFORMATION

The online version contains supplementary material available at 10.1007/s11760-022-02302-3.
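The Dice scores reported in the abstract measure overlap between predicted and ground-truth lesion masks. A minimal sketch of the metric for binary masks (the epsilon smoothing term is an illustrative convention, not taken from the paper):

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice coefficient for binary masks: 2*|A & B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

pred = np.array([[1, 1, 0], [0, 1, 0]])  # toy predicted mask
gt   = np.array([[1, 0, 0], [0, 1, 1]])  # toy ground-truth mask
print(round(dice_score(pred, gt), 3))    # → 0.667
```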


Figures:
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5658/9311338/38d9efa63529/11760_2022_2302_Fig1_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5658/9311338/f79d8fe14981/11760_2022_2302_Fig2_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5658/9311338/7bd1c50812b6/11760_2022_2302_Fig3_HTML.jpg

Similar Articles

1
Attention-augmented U-Net (AA-U-Net) for semantic segmentation.
Signal Image Video Process. 2023;17(4):981-989. doi: 10.1007/s11760-022-02302-3. Epub 2022 Jul 25.
2
Dynamic deformable attention network (DDANet) for COVID-19 lesions semantic segmentation.
J Biomed Inform. 2021 Jul;119:103816. doi: 10.1016/j.jbi.2021.103816. Epub 2021 May 20.
3
Deformable attention (DANet) for semantic image segmentation.
Annu Int Conf IEEE Eng Med Biol Soc. 2022 Jul;2022:3781-3784. doi: 10.1109/EMBC48229.2022.9871439.
4
A multiple-channel and atrous convolution network for ultrasound image segmentation.
Med Phys. 2020 Dec;47(12):6270-6285. doi: 10.1002/mp.14512. Epub 2020 Oct 18.
5
A novel adaptive cubic quasi-Newton optimizer for deep learning based medical image analysis tasks, validated on detection of COVID-19 and segmentation for COVID-19 lung infection, liver tumor, and optic disc/cup.
Med Phys. 2023 Mar;50(3):1528-1538. doi: 10.1002/mp.15969. Epub 2022 Oct 6.
6
Dual attention fusion UNet for COVID-19 lesion segmentation from CT images.
J Xray Sci Technol. 2023;31(4):713-729. doi: 10.3233/XST-230001.
7
CAM-Wnet: An effective solution for accurate pulmonary embolism segmentation.
Med Phys. 2022 Aug;49(8):5294-5303. doi: 10.1002/mp.15719. Epub 2022 Jun 21.
8
IBA-U-Net: Attentive BConvLSTM U-Net with Redesigned Inception for medical image segmentation.
Comput Biol Med. 2021 Aug;135:104551. doi: 10.1016/j.compbiomed.2021.104551. Epub 2021 Jun 12.
9
RSU-Net: U-net based on residual and self-attention mechanism in the segmentation of cardiac magnetic resonance images.
Comput Methods Programs Biomed. 2023 Apr;231:107437. doi: 10.1016/j.cmpb.2023.107437. Epub 2023 Feb 21.
10
EA-Net: Research on skin lesion segmentation method based on U-Net.
Heliyon. 2023 Nov 22;9(12):e22663. doi: 10.1016/j.heliyon.2023.e22663. eCollection 2023 Dec.

Cited By

1
A plaque recognition algorithm for coronary OCT images by Dense Atrous Convolution and attention mechanism.
PLoS One. 2025 Jun 10;20(6):e0325911. doi: 10.1371/journal.pone.0325911. eCollection 2025.
2
Attention U-Net-based semantic segmentation for welding line detection.
Sci Rep. 2025 May 1;15(1):15276. doi: 10.1038/s41598-025-00257-2.
3
Feasibility study of AI-assisted multi-parameter MRI diagnosis of prostate cancer.
Sci Rep. 2025 Mar 27;15(1):10530. doi: 10.1038/s41598-024-84516-8.
4
Dual-channel compression mapping network with fused attention mechanism for medical image segmentation.
Sci Rep. 2025 Mar 14;15(1):8906. doi: 10.1038/s41598-025-93494-4.
5
Deep-learning-based pyramid-transformer for localized porosity analysis of hot-press sintered ceramic paste.
PLoS One. 2024 Sep 4;19(9):e0306385. doi: 10.1371/journal.pone.0306385. eCollection 2024.
6
A Comparative Study of Deep Learning Dose Prediction Models for Cervical Cancer Volumetric Modulated Arc Therapy.
Technol Cancer Res Treat. 2024 Jan-Dec;23:15330338241242654. doi: 10.1177/15330338241242654.
7
Dense monocular depth estimation for stereoscopic vision based on pyramid transformer and multi-scale feature fusion.
Sci Rep. 2024 Mar 25;14(1):7037. doi: 10.1038/s41598-024-57908-z.
8
Automatic Segmentation and Quantification of Abdominal Aortic Calcification in Lateral Lumbar Radiographs Based on Deep-Learning-Based Algorithms.
Bioengineering (Basel). 2023 Oct 5;10(10):1164. doi: 10.3390/bioengineering10101164.
9
A robust semantic lung segmentation study for CNN-based COVID-19 diagnosis.
Chemometr Intell Lab Syst. 2022 Dec 15;231:104695. doi: 10.1016/j.chemolab.2022.104695. Epub 2022 Oct 22.