

DCACNet: Dual context aggregation and attention-guided cross deconvolution network for medical image segmentation.

Affiliations

School of Software, Xinjiang University, Urumqi, Xinjiang 830046, China; School of Computing and Artificial Intelligence, Southwest Jiaotong University, Chengdu, Sichuan 610031, China.

School of Software, Xinjiang University, Urumqi, Xinjiang 830046, China.

Publication Information

Comput Methods Programs Biomed. 2022 Feb;214:106566. doi: 10.1016/j.cmpb.2021.106566. Epub 2021 Nov 29.

Abstract

BACKGROUND AND OBJECTIVE

Segmentation is a key step in biomedical image analysis tasks. Recently, convolutional neural networks (CNNs) have been increasingly applied in the field of medical image processing; however, standard models still have some drawbacks. Because of the significant loss of spatial information at the encoding stage, it is often difficult to restore the details of low-level visual features using simple deconvolution, and the generated feature maps are sparse, which degrades performance. This prompted us to study whether it is possible to better preserve the deep feature information of an image in order to solve the sparsity problem of image segmentation models.

METHODS

In this study, we (1) build a reliable deep learning network framework, named DCACNet, to improve the segmentation performance for medical images; (2) propose a multiscale cross-fusion encoding network to extract features; (3) build a dual context aggregation module to fuse the context features at different scales and capture more fine-grained deep features; and (4) propose an attention-guided cross deconvolution decoding network to generate dense feature maps. We demonstrate the effectiveness of the proposed method on two publicly available datasets.
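The abstract names these modules but does not describe their internal structure. The sketch below is a rough, hypothetical PyTorch illustration of what such components could look like: a dual context aggregation block built from two dilated-convolution context branches fused residually, and an attention-guided deconvolution decoder step that gates the encoder skip connection before fusion. All layer choices, channel counts, and class names here are assumptions for illustration, not the published DCACNet design.

# Hypothetical sketch only: the layer choices below are assumptions,
# not the authors' published architecture.
import torch
import torch.nn as nn

class DualContextAggregation(nn.Module):
    """Fuse context captured at two dilation rates (assumed design)."""
    def __init__(self, channels: int):
        super().__init__()
        self.local_ctx = nn.Conv2d(channels, channels, 3, padding=1, dilation=1)
        self.global_ctx = nn.Conv2d(channels, channels, 3, padding=4, dilation=4)
        self.fuse = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        ctx = torch.cat([self.local_ctx(x), self.global_ctx(x)], dim=1)
        return torch.relu(self.fuse(ctx)) + x  # residual fusion of both context scales

class AttentionGuidedDeconv(nn.Module):
    """Upsample decoder features and gate the encoder skip with a learned attention map."""
    def __init__(self, in_ch: int, skip_ch: int, out_ch: int):
        super().__init__()
        self.up = nn.ConvTranspose2d(in_ch, out_ch, kernel_size=2, stride=2)
        self.gate = nn.Sequential(nn.Conv2d(out_ch + skip_ch, skip_ch, 1), nn.Sigmoid())
        self.conv = nn.Conv2d(out_ch + skip_ch, out_ch, 3, padding=1)

    def forward(self, x: torch.Tensor, skip: torch.Tensor) -> torch.Tensor:
        x = self.up(x)                                 # deconvolution (upsampling)
        attn = self.gate(torch.cat([x, skip], dim=1))  # attention weights for the skip
        return torch.relu(self.conv(torch.cat([x, attn * skip], dim=1)))

For example, with dca = DualContextAggregation(64) and dec = AttentionGuidedDeconv(128, 64, 64), calling dec(torch.randn(1, 128, 16, 16), dca(torch.randn(1, 64, 32, 32))) yields a dense 64-channel map at 32x32 resolution. Weighting the skip connection with a learned attention map before fusion is one common way to produce denser decoder feature maps, which speaks to the sparsity issue the abstract raises.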

RESULTS

DCACNet was trained and tested on the prepared datasets, and the experimental results show that our proposed model has better segmentation performance than previous models. For 4-class classification (CHAOS dataset), the mean Dice similarity coefficient (DSC) reached 91.03%. For 2-class classification (Herlev dataset), the accuracy, precision, sensitivity, specificity, and Dice score reached 96.77%, 90.40%, 94.20%, 97.50%, and 97.69%, respectively. These results show that DCACNet improves segmentation quality for medical images.
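The reported metrics follow the standard confusion-matrix definitions. As a reference only, the snippet below computes them for a single binary mask under assumed NumPy conventions; the paper's exact evaluation protocol (per-class and per-image averaging) is not stated in the abstract.

# Standard metric definitions for a binary segmentation mask
# (evaluation protocol details are assumptions, not taken from the paper).
import numpy as np

def segmentation_metrics(pred: np.ndarray, target: np.ndarray) -> dict:
    pred, target = pred.astype(bool), target.astype(bool)
    tp = np.sum(pred & target)      # true positives
    tn = np.sum(~pred & ~target)    # true negatives
    fp = np.sum(pred & ~target)     # false positives
    fn = np.sum(~pred & target)     # false negatives
    eps = 1e-8                      # guard against division by zero on empty masks
    return {
        "dice": 2 * tp / (2 * tp + fp + fn + eps),
        "accuracy": (tp + tn) / (tp + tn + fp + fn + eps),
        "precision": tp / (tp + fp + eps),
        "sensitivity": tp / (tp + fn + eps),
        "specificity": tn / (tn + fp + eps),
    }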

CONCLUSION

DCACNet achieved promising results on the prepared dataset and improved segmentation performance. It can better retain the deep feature information of the image than other models and solve the sparsity problem of the medical image segmentation model.

