Department of Biomedical Engineering, Yale University, New Haven, CT, USA.
Med Image Anal. 2023 Aug;88:102840. doi: 10.1016/j.media.2023.102840. Epub 2023 May 16.
Myocardial perfusion imaging (MPI) using single-photon emission computed tomography (SPECT) is widely used for the diagnosis of cardiovascular diseases. Attenuation maps (μ-maps) derived from computed tomography (CT) are used for attenuation correction (AC) to improve the diagnostic accuracy of cardiac SPECT. In clinical practice, however, SPECT and CT scans are acquired sequentially, which can induce misregistration between the two images and in turn produce AC artifacts. Conventional intensity-based registration methods perform poorly in the cross-modality registration of SPECT and CT-derived μ-maps because the two imaging modalities can present entirely different intensity patterns. Deep learning has shown great potential in medical image registration. However, existing deep learning strategies for medical image registration encode the input images by simply concatenating the feature maps of different convolutional layers, which may not fully extract or fuse the input information. In addition, deep-learning-based cross-modality registration of cardiac SPECT and CT-derived μ-maps has not been investigated before. In this paper, we propose a novel Dual-Channel Squeeze-Fusion-Excitation (DuSFE) co-attention module for the cross-modality rigid registration of cardiac SPECT and CT-derived μ-maps. DuSFE is built on a co-attention mechanism over two cross-connected input data streams. The channel-wise and spatial features of SPECT and μ-maps are jointly encoded, fused, and recalibrated in the DuSFE module. DuSFE can be flexibly embedded at multiple convolutional layers to enable gradual feature fusion across different spatial dimensions. Our experiments on clinical patient MPI studies demonstrated that the DuSFE-embedded neural network produced significantly lower registration errors and more accurate AC SPECT images than existing methods.
We also showed that the DuSFE-embedded network did not over-correct or degrade the registration performance of motion-free cases. The source code of this work is available at https://github.com/XiongchaoChen/DuSFE_CrossRegistration.
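To make the "squeeze, fuse, excite" sequence concrete, the following is a minimal pure-Python sketch of one channel-wise co-attention step of the kind the abstract describes: each stream is squeezed by global average pooling, the two channel descriptors are fused through a shared bottleneck so that each stream's gates depend on both inputs, and the gates recalibrate the feature maps. All function names, shapes, and the bottleneck layout here are illustrative assumptions, not the paper's exact layers (see the linked repository for the real implementation).

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gap(fmap):
    """Squeeze: global average pooling of one channel (an H x W 2-D list)."""
    return sum(sum(row) for row in fmap) / (len(fmap) * len(fmap[0]))

def dusfe_channel_sketch(f_spect, f_mu, w1, w2):
    """Illustrative channel-wise squeeze-fusion-excitation step.

    f_spect, f_mu: feature maps of the two streams, each a list of C
        channels, where every channel is an H x W 2-D list.
    w1: (C//r) x (2C) bottleneck weights; w2: (2C) x (C//r) weights.
    Weight shapes and the two-layer bottleneck are assumptions for
    illustration only.
    """
    C = len(f_spect)
    # Squeeze both streams into one joint channel descriptor (length 2C).
    z = [gap(ch) for ch in f_spect] + [gap(ch) for ch in f_mu]
    # Fusion: a shared ReLU bottleneck mixes the two streams, so the
    # gates computed next depend on both inputs (co-attention).
    h = [max(0.0, sum(w * zi for w, zi in zip(row, z))) for row in w1]
    # Excitation: one sigmoid gate per channel of each stream.
    s = [sigmoid(sum(w * hi for w, hi in zip(row, h))) for row in w2]
    # Recalibration: rescale every channel by its gate.
    scale = lambda ch, g: [[v * g for v in row] for row in ch]
    out_spect = [scale(ch, s[i]) for i, ch in enumerate(f_spect)]
    out_mu = [scale(ch, s[C + i]) for i, ch in enumerate(f_mu)]
    return out_spect, out_mu
```

Because the bottleneck sees the concatenated descriptor of both streams, the recalibration of the SPECT features is conditioned on the μ-map features and vice versa; stacking such modules at several convolutional scales gives the gradual, multi-resolution fusion the abstract refers to.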