

FAM: focal attention module for lesion segmentation of COVID-19 CT images.

Authors

Wu Xiaoxin, Zhang Zhihao, Guo Lingling, Chen Hui, Luo Qiaojie, Jin Bei, Gu Weiyan, Lu Fangfang, Chen Jingjing

Affiliations

State Key Laboratory for Diagnosis and Treatment of Infectious Diseases, National Clinical Research Center for Infectious Diseases, First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, Zhejiang, China.

College of Computer Science and Technology, Shanghai University of Electric Power, Shanghai, China.

Publication

J Real Time Image Process. 2022;19(6):1091-1104. doi: 10.1007/s11554-022-01249-5. Epub 2022 Sep 4.

DOI: 10.1007/s11554-022-01249-5
PMID: 36091622
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9441194/
Abstract

Novel coronavirus pneumonia (COVID-19) is among the world's most serious public health crises. In clinical practice, automatic segmentation of lesions from computed tomography (CT) images using deep learning methods provides a promising tool for identifying and diagnosing COVID-19. To improve segmentation accuracy, attention mechanisms are adopted to highlight important features. However, existing attention methods perform poorly or even degrade the accuracy of convolutional neural networks (CNNs) for various reasons (e.g., low contrast between the lesion boundary and its surroundings, image noise). To address this issue, we propose a novel focal attention module (FAM) for lesion segmentation of CT images. FAM contains a channel attention module and a spatial attention module. The spatial attention module first generates rough spatial attention, a shape prior of the lesion region obtained from the CT image using median filtering and distance transformation. The rough spatial attention is then fed into two 7 × 7 convolution layers for correction, yielding refined spatial attention on the lesion region. FAM was integrated individually into six state-of-the-art segmentation networks (e.g., UNet and DeepLabV3+), and the six combinations were validated on a public dataset of COVID-19 CT images. The results show that FAM improves the Dice similarity coefficient (DSC) of the CNNs by 2% and reduces the numbers of false negatives (FN) and false positives (FP) by up to 17.6%, improvements significantly larger than those obtained with other attention modules such as CBAM and SENet. Furthermore, FAM significantly speeds up training convergence and achieves better real-time performance. The code is available at GitHub (https://github.com/RobotvisionLab/FAM.git).
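The "rough spatial attention" step described in the abstract (binarize the CT slice, suppress noise with a median filter, then use a distance transform as a shape prior) can be sketched as below. This is a toy, dependency-free illustration: the threshold, kernel size, toy chessboard distance, and normalization are assumptions for demonstration, and the paper's exact preprocessing and the two 7 × 7 refinement convolutions are not reproduced.

```python
# Sketch of the rough-spatial-attention generation (illustrative assumptions,
# not the paper's exact pipeline): threshold -> median filter -> distance
# transform -> normalize to a [0, 1] attention map.
from collections import deque

def median_filter(img, k=3):
    """k x k median filter (zero-padded) over a 2D list of floats."""
    h, w = len(img), len(img[0])
    r = k // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = []
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    yy, xx = y + dy, x + dx
                    vals.append(img[yy][xx] if 0 <= yy < h and 0 <= xx < w else 0.0)
            vals.sort()
            out[y][x] = vals[len(vals) // 2]
    return out

def distance_transform(mask):
    """BFS chessboard distance from the background into the foreground mask."""
    h, w = len(mask), len(mask[0])
    dist = [[0] * w for _ in range(h)]
    seen = {(y, x) for y in range(h) for x in range(w) if not mask[y][x]}
    q = deque(seen)
    while q:
        y, x = q.popleft()
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                yy, xx = y + dy, x + dx
                if 0 <= yy < h and 0 <= xx < w and (yy, xx) not in seen:
                    seen.add((yy, xx))
                    dist[yy][xx] = dist[y][x] + 1
                    q.append((yy, xx))
    return dist

def rough_spatial_attention(ct_slice, thresh=0.5):
    """Shape prior: deep interior pixels of the smoothed mask get high weight."""
    smoothed = median_filter(ct_slice)
    mask = [[1 if v > thresh else 0 for v in row] for row in smoothed]
    dist = distance_transform(mask)
    peak = max(max(row) for row in dist) or 1  # avoid division by zero
    return [[d / peak for d in row] for row in dist]
```

The design intent mirrors the abstract: median filtering removes salt-and-pepper noise before thresholding, and the distance transform encodes how deep each pixel lies inside the candidate lesion, so attention concentrates on the lesion interior even when the boundary contrast is low. In the paper this rough map is then corrected by two 7 × 7 convolution layers, which this sketch omits.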


Figures (PMC9441194):
Fig 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9066/9441194/048578f504ad/11554_2022_1249_Fig1_HTML.jpg
Fig 2: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9066/9441194/f96e79df9aca/11554_2022_1249_Fig2_HTML.jpg
Fig 3: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9066/9441194/36b9c502c512/11554_2022_1249_Fig3_HTML.jpg
Fig 4: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9066/9441194/87a2a69c9d74/11554_2022_1249_Fig4_HTML.jpg
Fig 5: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9066/9441194/a0dd659fa457/11554_2022_1249_Fig5_HTML.jpg
Fig 6: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9066/9441194/0eb58eea4c25/11554_2022_1249_Fig6_HTML.jpg
Fig 7: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9066/9441194/fa7acdfc506c/11554_2022_1249_Fig7_HTML.jpg
Fig 8: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9066/9441194/be1716a63ef9/11554_2022_1249_Fig8_HTML.jpg
Fig 9: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9066/9441194/51b2387f3b20/11554_2022_1249_Fig9_HTML.jpg
Fig 10: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9066/9441194/c892bb86e1b7/11554_2022_1249_Fig10_HTML.jpg
Fig 11: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9066/9441194/2db7966566d9/11554_2022_1249_Fig11_HTML.jpg
Fig 12: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9066/9441194/518a6cb24657/11554_2022_1249_Fig12_HTML.jpg

Similar articles

1. FAM: focal attention module for lesion segmentation of COVID-19 CT images.
J Real Time Image Process. 2022;19(6):1091-1104. doi: 10.1007/s11554-022-01249-5. Epub 2022 Sep 4.
2. SK-Unet++: An improved Unet++ network with adaptive receptive fields for automatic segmentation of ultrasound thyroid nodule images.
Med Phys. 2024 Mar;51(3):1798-1811. doi: 10.1002/mp.16672. Epub 2023 Aug 22.
3. CARes-UNet: Content-aware residual UNet for lesion segmentation of COVID-19 from chest CT images.
Med Phys. 2021 Nov;48(11):7127-7140. doi: 10.1002/mp.15231. Epub 2021 Sep 25.
4. Dual attention fusion UNet for COVID-19 lesion segmentation from CT images.
J Xray Sci Technol. 2023;31(4):713-729. doi: 10.3233/XST-230001.
5. CA-Net: Comprehensive Attention Convolutional Neural Networks for Explainable Medical Image Segmentation.
IEEE Trans Med Imaging. 2021 Feb;40(2):699-711. doi: 10.1109/TMI.2020.3035253. Epub 2021 Feb 2.
6. CSAP-UNet: Convolution and self-attention paralleling network for medical image segmentation with edge enhancement.
Comput Biol Med. 2024 Apr;172:108265. doi: 10.1016/j.compbiomed.2024.108265. Epub 2024 Mar 7.
7. [Fully Automatic Glioma Segmentation Algorithm of Magnetic Resonance Imaging Based on 3D-UNet With More Global Contextual Feature Extraction: An Improvement on Insufficient Extraction of Global Features].
Sichuan Da Xue Xue Bao Yi Xue Ban. 2024 Mar 20;55(2):447-454. doi: 10.12182/20240360208.
8. BPAT-UNet: Boundary preserving assembled transformer UNet for ultrasound thyroid nodule segmentation.
Comput Methods Programs Biomed. 2023 Aug;238:107614. doi: 10.1016/j.cmpb.2023.107614. Epub 2023 May 19.
9. Multi-Attention Segmentation Networks Combined with the Sobel Operator for Medical Images.
Sensors (Basel). 2023 Feb 24;23(5):2546. doi: 10.3390/s23052546.
10. Automatic COVID-19 CT segmentation using U-Net integrated spatial and channel attention mechanism.
Int J Imaging Syst Technol. 2021 Mar;31(1):16-27. doi: 10.1002/ima.22527. Epub 2020 Nov 24.

Cited by

1. SAA-UNet: Spatial Attention and Attention Gate UNet for COVID-19 Pneumonia Segmentation from Computed Tomography.
Diagnostics (Basel). 2023 May 8;13(9):1658. doi: 10.3390/diagnostics13091658.
2. Computational Modeling and Experimental Characterization of Extrusion Printing into Suspension Baths.
Adv Healthc Mater. 2022 Apr;11(7):e2101679. doi: 10.1002/adhm.202101679. Epub 2021 Nov 20.

References

1. MPS-Net: Multi-Point Supervised Network for CT Image Segmentation of COVID-19.
IEEE Access. 2021 Mar 19;9:47144-47153. doi: 10.1109/ACCESS.2021.3067047. eCollection 2021.
2. An Encoder-Decoder-Based Method for Segmentation of COVID-19 Lung Infection in CT Images.
SN Comput Sci. 2022;3(1):13. doi: 10.1007/s42979-021-00874-4. Epub 2021 Oct 25.
3. D2A U-Net: Automatic segmentation of COVID-19 CT slices based on dual attention and hybrid dilated convolution.
Comput Biol Med. 2021 Aug;135:104526. doi: 10.1016/j.compbiomed.2021.104526. Epub 2021 Jun 2.
4. SCOAT-Net: A novel network for segmenting COVID-19 lung opacification from CT images.
Pattern Recognit. 2021 Nov;119:108109. doi: 10.1016/j.patcog.2021.108109. Epub 2021 Jun 10.
5. Lung segmentation and automatic detection of COVID-19 using radiomic features from chest CT images.
Pattern Recognit. 2021 Nov;119:108071. doi: 10.1016/j.patcog.2021.108071. Epub 2021 Jun 2.
6. Automatic COVID-19 CT segmentation using U-Net integrated spatial and channel attention mechanism.
Int J Imaging Syst Technol. 2021 Mar;31(1):16-27. doi: 10.1002/ima.22527. Epub 2020 Nov 24.
7. CoSinGAN: Learning COVID-19 Infection Segmentation from a Single Radiological Image.
Diagnostics (Basel). 2020 Nov 3;10(11):901. doi: 10.3390/diagnostics10110901.
8. A Noise-Robust Framework for Automatic Segmentation of COVID-19 Pneumonia Lesions From CT Images.
IEEE Trans Med Imaging. 2020 Aug;39(8):2653-2663. doi: 10.1109/TMI.2020.3000314.
9. Inf-Net: Automatic COVID-19 Lung Infection Segmentation From CT Images.
IEEE Trans Med Imaging. 2020 Aug;39(8):2626-2637. doi: 10.1109/TMI.2020.2996645.
10. UNet++: A Nested U-Net Architecture for Medical Image Segmentation.
Deep Learn Med Image Anal Multimodal Learn Clin Decis Support (2018). 2018 Sep;11045:3-11. doi: 10.1007/978-3-030-00889-5_1. Epub 2018 Sep 20.