Czobit Cassandra, Samavi Reza
Electrical, Computer and Biomedical Engineering, Toronto Metropolitan University, Toronto, Canada.
Vector Institute for Artificial Intelligence, Toronto, Canada.
J Comput Biol. 2025 Jun;32(6):573-583. doi: 10.1089/cmb.2024.0635. Epub 2024 Dec 27.
Image-to-image translation has gained popularity in the medical field as a way to transform images from one domain to another. Medical image synthesis via domain transformation is advantageous because it can augment a dataset in which images for a given class are limited. From the learning perspective, this process contributes to the data-oriented robustness of the model by broadening the model's exposure to more diverse visual data, enabling it to learn more generalized features. In the case of generating additional neuroimages, it is advantageous for obtaining unidentifiable medical data and augmenting smaller annotated datasets. This study proposes the development of a cycle-consistent generative adversarial network (CycleGAN) model for translating neuroimages from one field strength to another (e.g., 3 Tesla [T] to 1.5 T). This model was compared with a model based on a deep convolutional GAN architecture. CycleGAN was able to generate the synthetic and reconstructed images with reasonable accuracy. The mapping function from the source (3 T) to the target domain (1.5 T) performed optimally, with an average peak signal-to-noise ratio of 25.69 ± 2.49 dB and a mean absolute error of 2106.27 ± 1218.37. The code for this study has been made publicly available in the following GitHub repository.
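As a rough illustration of the reported evaluation metrics (not the authors' own evaluation code), a minimal sketch of computing PSNR and MAE between a real 1.5 T slice and its CycleGAN-synthesized counterpart might look like the following, assuming raw (un-normalized) intensity arrays, which would be consistent with the large MAE magnitude reported:

```python
import numpy as np


def psnr(reference: np.ndarray, generated: np.ndarray) -> float:
    """Peak signal-to-noise ratio (dB) between a reference slice and a synthetic slice."""
    ref = reference.astype(np.float64)
    gen = generated.astype(np.float64)
    mse = np.mean((ref - gen) ** 2)
    if mse == 0:
        return float("inf")
    peak = ref.max()  # assumed dynamic range: maximum intensity of the reference image
    return 20.0 * np.log10(peak) - 10.0 * np.log10(mse)


def mae(reference: np.ndarray, generated: np.ndarray) -> float:
    """Mean absolute error over raw voxel intensities."""
    ref = reference.astype(np.float64)
    gen = generated.astype(np.float64)
    return float(np.mean(np.abs(ref - gen)))


# Hypothetical usage: `real_15t` and `fake_15t` would be 2D arrays holding a real
# 1.5 T slice and the 3 T -> 1.5 T translation produced by the generator.
# print(psnr(real_15t, fake_15t), mae(real_15t, fake_15t))
```

The averages reported in the abstract would then be obtained by computing these two scores per image pair across the test set and taking the mean and standard deviation.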