

Myocardial Segmentation of Tagged Magnetic Resonance Images with Transfer Learning Using Generative Cine-To-Tagged Dataset Transformation.

Author Information

Dhaene Arnaud P, Loecher Michael, Wilson Alexander J, Ennis Daniel B

Affiliations

Department of Radiology, Stanford University, Stanford, CA 94305, USA.

Signal Processing Laboratory (LTS4), École Polytechnique Fédérale de Lausanne (EPFL), 1015 Lausanne, Switzerland.

Publication Information

Bioengineering (Basel). 2023 Jan 28;10(2):166. doi: 10.3390/bioengineering10020166.

Abstract

The use of deep learning (DL) segmentation in cardiac MRI has the potential to streamline the radiology workflow, particularly for the measurement of myocardial strain. Recent efforts in DL motion tracking models have drastically reduced the time needed to measure the heart's displacement field and the subsequent myocardial strain estimation. However, the selection of initial myocardial reference points is not automated and still requires manual input from domain experts. Segmentation of the myocardium is a key step for initializing reference points. While high-performing myocardial segmentation models exist for cine images, this is not the case for tagged images. In this work, we developed and compared two novel DL models (nnU-net and Segmentation ResNet VAE) for the segmentation of myocardium from tagged CMR images. We implemented two methods to transform cardiac cine images into tagged images, allowing us to leverage large public annotated datasets. The cine-to-tagged methods included (i) a novel physics-driven transformation model, and (ii) a generative adversarial network (GAN) style transfer model. We show that pretrained models perform better (+2.8 Dice coefficient percentage points) and converge faster (6×) than models trained from scratch. The best-performing method relies on pretraining with an unpaired, unlabeled, and structure-preserving generative model trained to transform cine images into their tagged-appearing equivalents. Our state-of-the-art myocardium segmentation network reached a Dice coefficient of 0.828 and a 95th percentile Hausdorff distance of 4.745 mm on a held-out test set. This performance is comparable to existing state-of-the-art segmentation networks for cine images.
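For readers unfamiliar with the reported metrics, the sketch below shows how a Dice coefficient and a 95th-percentile Hausdorff distance are commonly computed for a pair of binary myocardium masks. This is an illustrative example only, not the authors' evaluation code; the function names, the scipy-based surface-distance approach, and the `spacing` parameter (pixel size in mm) are assumptions made for this sketch.

```python
# Illustrative sketch (not from the paper's codebase): Dice coefficient and
# 95th-percentile Hausdorff distance for binary segmentation masks.
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt


def dice_coefficient(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice = 2 * |A ∩ B| / (|A| + |B|) for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, gt).sum() / denom


def _surface(mask: np.ndarray) -> np.ndarray:
    """Boundary pixels of a binary mask (mask minus its erosion)."""
    mask = mask.astype(bool)
    return mask & ~binary_erosion(mask)


def hausdorff95(pred: np.ndarray, gt: np.ndarray, spacing=(1.0, 1.0)) -> float:
    """Symmetric 95th-percentile Hausdorff distance, in the units of `spacing`
    (e.g. mm per pixel). Assumes both masks are non-empty."""
    pred_s, gt_s = _surface(pred), _surface(gt)
    # Distance from every pixel to the nearest surface pixel of the other mask.
    d_to_gt = distance_transform_edt(~gt_s, sampling=spacing)
    d_to_pred = distance_transform_edt(~pred_s, sampling=spacing)
    surface_distances = np.concatenate([d_to_gt[pred_s], d_to_pred[gt_s]])
    return float(np.percentile(surface_distances, 95))
```

Using the 95th percentile rather than the maximum surface distance makes the metric less sensitive to single outlier pixels, which is why it is commonly reported alongside the Dice coefficient for cardiac segmentation.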


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/42bb/9952238/e94546c9fb5e/bioengineering-10-00166-g001.jpg
