

UCATR: Based on CNN and Transformer Encoding and Cross-Attention Decoding for Lesion Segmentation of Acute Ischemic Stroke in Non-contrast Computed Tomography Images.

Publication

Annu Int Conf IEEE Eng Med Biol Soc. 2021 Nov;2021:3565-3568. doi: 10.1109/EMBC46164.2021.9630336.

Abstract

Acute ischemic stroke (AIS) has an extensive impact worldwide, and early diagnosis can provide valuable information about disease characteristics. However, the fine pathological changes are difficult to distinguish with the human eye. Here we introduce self-attention mechanisms and propose UCATR, a non-contrast CT (NCCT) image segmentation network for AIS lesions. It exploits the Transformer's strength in learning global context features of the image, using a combined convolutional neural network (CNN) and Transformer encoder and adding Multi-Head Cross-Attention (MHCA) modules to the decoder to achieve high-precision recovery of spatial information. The method is experimentally validated on an NCCT dataset of AIS provided by Chengdu Medical College in China, achieving a Dice similarity coefficient of 73.58% for lesion segmentation, which outperforms U-Net, Attention U-Net, and TransUNet. Furthermore, an ablation study of the MHCA module at three different positions in the decoder demonstrates its effectiveness.
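The abstract does not give the exact formulation of the MHCA module; as a rough sketch of the underlying idea (queries taken from the decoder's upsampled features attending over encoder skip-connection features via standard scaled dot-product attention), one could write, with random projections standing in for learned weights:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_cross_attention(decoder_feats, encoder_feats, num_heads=4, seed=0):
    """Illustrative multi-head cross-attention (not the paper's exact module):
    queries come from the decoder path, keys/values from the encoder skip
    connection, so decoder positions attend over encoder spatial features.
    decoder_feats: (n_q, d) flattened decoder tokens; encoder_feats: (n_kv, d)."""
    n_q, d = decoder_feats.shape
    n_kv, _ = encoder_feats.shape
    assert d % num_heads == 0
    d_head = d // num_heads
    rng = np.random.default_rng(seed)
    # Random projections for illustration; in a real network these are learned.
    W_q, W_k, W_v = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    Q = (decoder_feats @ W_q).reshape(n_q, num_heads, d_head)
    K = (encoder_feats @ W_k).reshape(n_kv, num_heads, d_head)
    V = (encoder_feats @ W_v).reshape(n_kv, num_heads, d_head)
    out = np.empty((n_q, num_heads, d_head))
    for h in range(num_heads):
        scores = Q[:, h] @ K[:, h].T / np.sqrt(d_head)   # (n_q, n_kv)
        out[:, h] = softmax(scores, axis=-1) @ V[:, h]   # attention-weighted values
    return out.reshape(n_q, d)
```

In the paper's setting the output of such a module would be fused back into the decoder feature map before the next upsampling stage; the head count, feature dimension, and fusion scheme here are assumptions, not taken from the source.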

