

A Two-Branch Neural Network for Short-Axis PET Image Quality Enhancement.

Publication Information

IEEE J Biomed Health Inform. 2023 Jun;27(6):2864-2875. doi: 10.1109/JBHI.2023.3260180. Epub 2023 Jun 5.

Abstract

The axial field of view (FOV) is a key factor that affects the quality of PET images. Owing to hardware FOV restrictions, conventional short-axis PET scanners with FOVs of 20 to 35 cm can acquire only low-quality PET (LQ-PET) images within fast scan times (2-3 minutes). To overcome these hardware restrictions and improve PET image quality for better clinical diagnosis, several deep learning-based algorithms have been proposed. However, these approaches rely on simple convolution layers with residual learning and local attention, which extract and fuse long-range contextual information insufficiently. To this end, we propose a novel two-branch network architecture with Swin Transformer units and graph convolution operations, termed SW-GCN. The proposed SW-GCN provides additional spatial- and channel-wise flexibility to handle different types of input information flow. Specifically, because computing self-attention weights over full-size PET images is computationally expensive, the spatial adaptive branch applies self-attention within each local partition window and introduces global information interaction between non-overlapping windows through shifting operations. In addition, a purely convolutional structure treats the information in every channel equally during feature extraction. In the channel adaptive branch, we therefore use a Watts-Strogatz topology to connect each feature map only to its most relevant features in each graph convolutional layer, substantially reducing information redundancy. Moreover, ensemble learning is adopted in SW-GCN to map the distinct features from the two branches to the enhanced PET images. We carried out extensive experiments on three single-bed-position scans from 386 patients. The test results demonstrate that the proposed SW-GCN outperforms state-of-the-art methods in both quantitative and qualitative evaluations.
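The two branch designs summarized in the abstract can be illustrated with a short sketch. The code below is a minimal illustration, not the authors' implementation: it assumes PyTorch and networkx, and all module and function names (ChannelGraphConv, shift_then_window, watts_strogatz_adjacency) are hypothetical. It shows how a Watts-Strogatz small-world graph can restrict channel mixing to a sparse set of neighbours (in the spirit of the channel adaptive branch) and how a cyclic shift followed by window partitioning prepares tokens for window-local self-attention (in the spirit of the spatial adaptive branch).

# Minimal sketch (not the paper's code); assumes PyTorch and networkx.
import torch
import torch.nn as nn
import networkx as nx

def watts_strogatz_adjacency(num_channels: int, k: int = 4, p: float = 0.1) -> torch.Tensor:
    """Binary adjacency (with self-loops) from a Watts-Strogatz small-world graph."""
    g = nx.watts_strogatz_graph(num_channels, k, p, seed=0)
    adj = torch.tensor(nx.to_numpy_array(g), dtype=torch.float32)
    return adj + torch.eye(num_channels)

class ChannelGraphConv(nn.Module):
    """Channel-adaptive idea: each channel aggregates only from its small-world neighbours."""
    def __init__(self, channels: int):
        super().__init__()
        self.register_buffer("adj", watts_strogatz_adjacency(channels))
        self.weight = nn.Parameter(torch.eye(channels))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) -> row-normalized adjacency mixes channels sparsely.
        b, c, h, w = x.shape
        feat = x.flatten(2)                                   # (B, C, H*W)
        adj_norm = self.adj / self.adj.sum(1, keepdim=True)   # (C, C)
        mixed = adj_norm @ (self.weight @ feat)               # (B, C, H*W)
        return mixed.view(b, c, h, w)

def shift_then_window(x: torch.Tensor, window: int = 8, shift: int = 4) -> torch.Tensor:
    """Spatial-adaptive idea: cyclic shift, then partition into non-overlapping windows."""
    # x: (B, C, H, W); H and W assumed divisible by `window` for brevity.
    x = torch.roll(x, shifts=(-shift, -shift), dims=(2, 3))
    b, c, h, w = x.shape
    x = x.view(b, c, h // window, window, w // window, window)
    # -> (num_windows*B, window*window, C): one token sequence per window,
    #    ready for standard multi-head self-attention applied within each window.
    return x.permute(0, 2, 4, 3, 5, 1).reshape(-1, window * window, c)

In the paper, the outputs of the two branches are further fused by ensemble learning into the enhanced PET image; that fusion step is omitted from this sketch.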

