
Cross-modality generative learning framework for anatomical transitive Magnetic Resonance Imaging (MRI) from Electrical Impedance Tomography (EIT) image.

Affiliations

The Department of Diagnostic Radiology, The University of Hong Kong, Hong Kong.

The Department of Biomedical Engineering, The Chinese University of Hong Kong, Hong Kong.

Publication Information

Comput Med Imaging Graph. 2023 Sep;108:102272. doi: 10.1016/j.compmedimag.2023.102272. Epub 2023 Jul 20.

Abstract

This paper presents a cross-modality generative learning framework for transitive magnetic resonance imaging (MRI) from electrical impedance tomography (EIT). The proposed framework aims to convert low-resolution EIT images into high-resolution wrist MRI images using a cascaded cycle generative adversarial network (CycleGAN) model. This model comprises three main components: the collection of initial EIT images from the medical device, the generation of a high-resolution transitive EIT image from the corresponding MRI image for domain adaptation, and the coalescence of two CycleGAN models for cross-modality generation. The initial EIT images were generated at three frequencies (70 kHz, 140 kHz, and 200 kHz) using a 16-electrode belt. Wrist T1-weighted images were acquired on a 1.5T MRI scanner. A total of 19 normal volunteers were imaged with both EIT and MRI, yielding 713 paired EIT and MRI images. The cascaded CycleGAN, end-to-end CycleGAN, and Pix2Pix models were trained and tested on the same cohort. The proposed method achieved the highest accuracy in bone detection: 0.97 for the cascaded CycleGAN, versus 0.68 for the end-to-end CycleGAN and 0.70 for the Pix2Pix model. Visual inspection showed that the proposed method reduced bone-related errors in the MRI-style anatomical reference compared with the end-to-end CycleGAN and Pix2Pix. Multifrequency EIT inputs reduced the testing normalized root mean squared error (NRMSE) of the MRI-style anatomical reference from 67.9% ± 12.7% (single-frequency EIT) to 61.4% ± 8.8%. When the anatomical prior was employed, the mean conductivity values of fat and bone from regularized EIT were 0.0435 ± 0.0379 S/m and 0.0183 ± 0.0154 S/m, respectively. These results demonstrate that the proposed framework can generate MRI-style anatomical references from EIT images with a good degree of accuracy.
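
The testing error above is reported as a percentage NRMSE. The abstract does not state the normalization convention; a common choice, given here only as an assumed definition, normalizes the root mean squared error by the intensity range of the ground-truth MRI slice:

$$\mathrm{NRMSE} = \frac{\sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(\hat{y}_i - y_i\right)^2}}{y_{\max} - y_{\min}} \times 100\%$$

where $y_i$ and $\hat{y}_i$ are the ground-truth and generated pixel intensities, $N$ is the number of pixels, and $y_{\max}$, $y_{\min}$ are the extremes of the ground-truth image.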

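To make the cascaded design concrete: the abstract describes two chained CycleGAN mappings, one from measured multifrequency EIT to a high-resolution "transitive" EIT domain, and one from that transitive domain to an MRI-style anatomical reference. Below is a minimal PyTorch sketch of the inference path only. The generator depth, channel counts, tensor sizes, and the stacking of the three frequencies as input channels are illustrative assumptions, not the authors' published configuration.

```python
# Sketch of the two-stage (cascaded) inference path, assuming a PyTorch
# implementation. All architecture details here are placeholders.
import torch
import torch.nn as nn


class TinyGenerator(nn.Module):
    """Placeholder CycleGAN-style generator (the paper's networks are deeper)."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, out_ch, kernel_size=3, padding=1),
            nn.Tanh(),  # outputs in [-1, 1], the usual CycleGAN scaling
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


# Stage 1: measured multifrequency EIT (70/140/200 kHz stacked as 3 channels)
# -> high-resolution "transitive" EIT for domain adaptation.
g_eit_to_transitive = TinyGenerator(in_ch=3, out_ch=1)

# Stage 2: transitive EIT -> MRI-style anatomical reference.
g_transitive_to_mri = TinyGenerator(in_ch=1, out_ch=1)

# One wrist slice with three frequency channels (128x128 is illustrative).
eit = torch.randn(1, 3, 128, 128)
with torch.no_grad():
    transitive_eit = g_eit_to_transitive(eit)
    mri_style = g_transitive_to_mri(transitive_eit)
print(mri_style.shape)  # torch.Size([1, 1, 128, 128])
```

Chaining two separately trained CycleGANs, as sketched here, lets the first stage absorb the resolution and domain gap so the second stage only has to learn the EIT-to-MRI appearance mapping; during training, each stage would also carry its own backward generator and cycle-consistency loss, which the sketch omits.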
