
Joint Deep Learning Framework for Image Registration and Segmentation of Late Gadolinium Enhanced MRI and Cine Cardiac MRI.

Author Information

Upendra Roshan Reddy, Simon Richard, Linte Cristian A

Affiliations

Center for Imaging Science, Rochester Institute of Technology, Rochester, NY, USA.

Biomedical Engineering, Rochester Institute of Technology, Rochester, NY, USA.

Publication Information

Proc SPIE Int Soc Opt Eng. 2021 Feb;11598. doi: 10.1117/12.2581386. Epub 2021 Feb 15.

Abstract

Late gadolinium enhanced (LGE) cardiac magnetic resonance (CMR) imaging, the current benchmark for assessing myocardial viability, enables the identification and quantification of compromised myocardial tissue regions, which appear hyper-enhanced compared to the surrounding healthy myocardium. However, in LGE CMR images, the reduced contrast between the left ventricle (LV) myocardium and the LV blood-pool hampers accurate delineation of the LV myocardium. On the other hand, balanced steady-state free precession (bSSFP) cine CMR imaging provides high-resolution images ideal for accurate segmentation of the cardiac chambers. In the interest of generating patient-specific hybrid 3D and 4D anatomical models of the heart, to identify and quantify the compromised myocardial tissue regions for revascularization therapy planning, in our previous work we presented a spatial transformer network (STN)-based convolutional neural network (CNN) architecture for registration of the LGE and bSSFP cine CMR image datasets made available through the 2019 Multi-Sequence Cardiac Magnetic Resonance Segmentation Challenge (MS-CMRSeg). We performed supervised registration by leveraging region-of-interest (RoI) information from the manual annotations of the LV blood-pool, LV myocardium, and right ventricle (RV) blood-pool provided for both the LGE and bSSFP cine CMR images. To reduce the reliance on the number of manual annotations required to train such a network, we propose a joint deep learning framework consisting of three branches: an STN-based, RoI-guided CNN for registration of LGE and bSSFP cine CMR images, a U-Net model for segmentation of bSSFP cine CMR images, and a U-Net model for segmentation of LGE CMR images. This design learns a joint multi-scale feature encoder by optimizing all three branches of the network simultaneously.

Our experiments show that the registration results obtained by training the joint framework on 25 of the 45 available image datasets are comparable to those obtained by a stand-alone STN-based CNN model trained on 35 of the 45 datasets, and significantly better than those achieved by the stand-alone STN-based CNN model trained on 25 of the 45 datasets.
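The three-branch design described above can be sketched as follows. This is a minimal PyTorch illustration under stated assumptions, not the authors' implementation: the module names, channel widths, depths, and the use of a dense displacement field resampled with `grid_sample` (the standard differentiable STN warping operation) are all illustrative choices; the paper's actual encoder is multi-scale U-Net-style and far deeper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedEncoder(nn.Module):
    """Feature encoder shared by all three branches (illustrative, 2 scales)."""
    def __init__(self, in_ch=2, base=16):
        super().__init__()
        self.enc1 = nn.Sequential(
            nn.Conv2d(in_ch, base, 3, padding=1), nn.ReLU(),
            nn.Conv2d(base, base, 3, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(
            nn.MaxPool2d(2),
            nn.Conv2d(base, base * 2, 3, padding=1), nn.ReLU())

    def forward(self, x):
        f1 = self.enc1(x)          # full-resolution features
        f2 = self.enc2(f1)         # half-resolution features
        return f1, f2

class JointRegSegNet(nn.Module):
    """Joint framework: STN-style registration branch + two segmentation heads."""
    def __init__(self, n_classes=4):
        super().__init__()
        self.encoder = SharedEncoder(in_ch=2)  # moving + fixed image channels
        # Registration branch: predicts a dense 2-channel displacement field.
        self.flow_head = nn.Conv2d(32, 2, 3, padding=1)
        # Two lightweight segmentation decoders, one per modality.
        self.dec_cine = nn.Sequential(
            nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(), nn.Conv2d(16, n_classes, 1))
        self.dec_lge = nn.Sequential(
            nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(), nn.Conv2d(16, n_classes, 1))

    def forward(self, moving, fixed):
        x = torch.cat([moving, fixed], dim=1)        # (B, 2, H, W)
        _, f2 = self.encoder(x)
        f2_up = F.interpolate(f2, scale_factor=2, mode='bilinear',
                              align_corners=False)    # back to (B, 32, H, W)
        flow = self.flow_head(f2_up)                  # (B, 2, H, W)
        warped = self.spatial_transform(moving, flow) # STN warp of moving image
        seg_cine = self.dec_cine(f2_up)               # bSSFP cine segmentation
        seg_lge = self.dec_lge(f2_up)                 # LGE segmentation
        return warped, flow, seg_cine, seg_lge

    @staticmethod
    def spatial_transform(img, flow):
        """Differentiable resampling: displace an identity grid by `flow`."""
        B, _, H, W = img.shape
        ys, xs = torch.meshgrid(torch.linspace(-1, 1, H),
                                torch.linspace(-1, 1, W), indexing='ij')
        grid = torch.stack([xs, ys], dim=-1)          # (H, W, 2) in [-1, 1]
        grid = grid.unsqueeze(0).expand(B, -1, -1, -1).to(img)
        new_grid = grid + flow.permute(0, 2, 3, 1)    # flow in normalized units
        return F.grid_sample(img, new_grid, align_corners=False)
```

In a joint training loop of this shape, all three branches would be optimized simultaneously through the shared encoder, e.g. with a weighted sum such as `loss = loss_reg + w1 * loss_seg_cine + w2 * loss_seg_lge`; the weights here are placeholders, not values reported by the paper.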

