Kawahara Daisuke, Yoshimura Hisanori, Matsuura Takaaki, Saito Akito, Nagata Yasushi
Department of Radiation Oncology, Graduate School of Biomedical and Health Sciences, Hiroshima University, Hiroshima, 734-8551, Japan.
Department of Radiology, National Hospital Organization Kure Medical Center, Hiroshima, 737-0023, Japan.
Phys Eng Sci Med. 2023 Mar;46(1):313-323. doi: 10.1007/s13246-023-01220-z. Epub 2023 Jan 30.
This study aims to synthesize fluid-attenuated inversion recovery (FLAIR) and diffusion-weighted imaging (DWI) images from T1- and T2-weighted magnetic resonance imaging (MRI) images with a deep conditional adversarial network. A total of 1980 images from 102 patients were split into two datasets: 1470 images (68 patients) in a training set and 510 images (34 patients) in a test set. The prediction framework was a convolutional neural network with a generator and a discriminator. T1-weighted, T2-weighted, and composite images were used as inputs. The Digital Imaging and Communications in Medicine (DICOM) images were converted to 8-bit red-green-blue (RGB) images. The red and blue channels of each composite image were assigned the 8-bit grayscale pixel values of the T1-weighted image, and the green channel those of the T2-weighted image. The predicted FLAIR and DWI images depicted the same anatomy as the inputs. In the results, the prediction model with composite MRI input images showed the smallest relative mean absolute error (rMAE) and the largest mutual information (MI) for the DWI images, and the largest relative mean-square error (rMSE), relative root-mean-square error (rRMSE), and peak signal-to-noise ratio (PSNR) for the FLAIR images. For the FLAIR images, the prediction model with T2-weighted MRI inputs generated more accurate syntheses than that with T1-weighted inputs. The proposed image synthesis framework can improve the versatility and quality of multi-contrast MRI without extra scans, and the composite MRI input contributes to synthesizing multi-contrast MRI images efficiently.
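As a minimal sketch of the channel assignment described above (assuming pydicom and NumPy; the min-max rescaling to 8 bits is a plausible choice, since the abstract does not specify the windowing):

```python
import numpy as np
import pydicom


def to_uint8(slice_2d: np.ndarray) -> np.ndarray:
    """Linearly rescale a DICOM pixel array to 8-bit grayscale (assumed normalization)."""
    lo, hi = slice_2d.min(), slice_2d.max()
    scaled = (slice_2d - lo) / max(hi - lo, 1e-8)
    return (scaled * 255).astype(np.uint8)


def make_composite(t1_path: str, t2_path: str) -> np.ndarray:
    """Build the 8-bit RGB composite: T1 in the red and blue channels, T2 in green."""
    t1 = to_uint8(pydicom.dcmread(t1_path).pixel_array.astype(np.float32))
    t2 = to_uint8(pydicom.dcmread(t2_path).pixel_array.astype(np.float32))
    return np.stack([t1, t2, t1], axis=-1)  # R = T1, G = T2, B = T1
```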
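The abstract describes a conditional adversarial network with a generator and a discriminator; a pix2pix-style setup is a reasonable reading. The following PyTorch sketch illustrates that structure, with the layer counts and channel widths being illustrative assumptions rather than the authors' configuration:

```python
import torch
import torch.nn as nn


def block(c_in: int, c_out: int, down: bool = True) -> nn.Sequential:
    """Strided conv (down) or transposed conv (up) with normalization and activation."""
    conv = (nn.Conv2d(c_in, c_out, 4, 2, 1) if down
            else nn.ConvTranspose2d(c_in, c_out, 4, 2, 1))
    act = nn.LeakyReLU(0.2) if down else nn.ReLU()
    return nn.Sequential(conv, nn.InstanceNorm2d(c_out), act)


class Generator(nn.Module):
    """Encoder-decoder mapping a 3-channel composite to a 1-channel synthetic image."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(block(3, 64), block(64, 128), block(128, 256))
        self.dec = nn.Sequential(block(256, 128, down=False), block(128, 64, down=False),
                                 nn.ConvTranspose2d(64, 1, 4, 2, 1), nn.Tanh())

    def forward(self, x):
        return self.dec(self.enc(x))


class Discriminator(nn.Module):
    """PatchGAN-style critic conditioned on the input by channel concatenation."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(block(3 + 1, 64), block(64, 128),
                                 nn.Conv2d(128, 1, 4, 1, 1))

    def forward(self, x, y):
        return self.net(torch.cat([x, y], dim=1))
```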
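For the reported metrics, a NumPy sketch follows; the normalization by the mean reference intensity in the "relative" errors and the histogram-based MI estimate are assumptions, since the abstract gives no exact definitions:

```python
import numpy as np


def metrics(pred: np.ndarray, ref: np.ndarray, bins: int = 64) -> dict:
    """rMAE, rMSE, rRMSE, PSNR, and MI between a synthetic and a reference image."""
    pred, ref = pred.astype(np.float64), ref.astype(np.float64)
    diff = pred - ref
    scale = np.abs(ref).mean() + 1e-12  # assumed normalization for "relative" errors
    rmae = np.abs(diff).mean() / scale
    rmse = (diff ** 2).mean() / scale ** 2
    rrmse = np.sqrt((diff ** 2).mean()) / scale
    psnr = 10 * np.log10(255.0 ** 2 / ((diff ** 2).mean() + 1e-12))  # 8-bit dynamic range
    # Mutual information (in nats) from a joint intensity histogram.
    hist, _, _ = np.histogram2d(pred.ravel(), ref.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px, py = pxy.sum(axis=1, keepdims=True), pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    mi = (pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum()
    return {"rMAE": rmae, "rMSE": rmse, "rRMSE": rrmse, "PSNR": psnr, "MI": mi}
```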