
Deep learning based MRI contrast synthesis using full volume prediction.

Affiliations

Instituto de Aplicaciones de las Tecnologías de la Información y de las Comunicaciones Avanzadas (ITACA), Universitat Politécnica de Valencia, Camino de Vera s/n, 46022, Spain.

CNRS, Univ. Bordeaux, Bordeaux INP, LaBRI, UMR5800, PICTURA Research Group, 351, cours de la Liberation F-33405 Talence cedex, France.

Publication information

Biomed Phys Eng Express. 2021 Dec 3;8(1). doi: 10.1088/2057-1976/ac3c64.

Abstract

In Magnetic Resonance Imaging (MRI), depending on the image acquisition settings, a large number of image types or contrasts can be generated, showing complementary information about the same imaged subject. This multi-spectral information is highly beneficial since it can improve MRI analysis tasks such as segmentation and registration by reducing pattern ambiguity. However, acquiring several contrasts is not always possible due to time limitations and patient comfort constraints. Contrast synthesis has recently emerged as an approximate solution for generating image types other than those originally acquired. Most previously proposed methods for contrast synthesis are slice-based, which results in intensity inconsistencies between neighboring slices when applied in 3D. We propose the use of a 3D convolutional neural network (CNN) capable of generating T2 and FLAIR images from a single anatomical T1 source volume. The proposed network is a 3D variant of the UNet that processes the whole volume at once, avoiding the inconsistencies in the output volumes associated with 2D slice- or patch-based methods. Since working with a full volume at once has a huge memory demand, we introduce a spatial-to-depth layer and a reconstruction layer that allow working with the full volume while maintaining the network complexity required to solve the problem. Our approach enhances the coherence of the synthesized volume while improving accuracy thanks to the integrated three-dimensional context awareness. Finally, the proposed method has been validated with a segmentation method, demonstrating its usefulness in a direct and relevant application.
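The abstract describes a spatial-to-depth layer paired with a reconstruction layer so that the full 3D volume can be processed without exceeding the memory budget. The paper's actual implementation is not reproduced here; the sketch below is only a plausible illustration of that idea, assuming a factor-2 block size and a channels-first (C, D, H, W) NumPy array, with the function names space_to_depth_3d and depth_to_space_3d chosen for this example rather than taken from the paper.

```python
import numpy as np

def space_to_depth_3d(vol, block=2):
    """Rearrange a (C, D, H, W) volume into (C*block**3, D//block, H//block, W//block).

    Each non-overlapping block x block x block spatial cube is folded into the
    channel dimension, so the network sees a smaller spatial grid while keeping
    every voxel of the original volume.
    """
    c, d, h, w = vol.shape
    assert d % block == 0 and h % block == 0 and w % block == 0
    vol = vol.reshape(c, d // block, block, h // block, block, w // block, block)
    # Move the three intra-block axes next to the channel axis, then merge them.
    vol = vol.transpose(0, 2, 4, 6, 1, 3, 5)
    return vol.reshape(c * block**3, d // block, h // block, w // block)

def depth_to_space_3d(vol, block=2):
    """Inverse of space_to_depth_3d: rebuild the full-resolution volume."""
    c, d, h, w = vol.shape
    c_out = c // block**3
    vol = vol.reshape(c_out, block, block, block, d, h, w)
    # Interleave the block axes back into their spatial dimensions.
    vol = vol.transpose(0, 4, 1, 5, 2, 6, 3)
    return vol.reshape(c_out, d * block, h * block, w * block)

# Round-trip check on a toy volume: the rearrangement is lossless.
x = np.random.rand(1, 8, 8, 8).astype(np.float32)
assert np.allclose(depth_to_space_3d(space_to_depth_3d(x)), x)
```

The round-trip check illustrates the property the abstract relies on: the rearrangement discards no information, so a reconstruction layer at the network output can recover a full-resolution synthesized volume even though the intermediate layers operate on a spatially reduced grid.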

