School of Information Science and Technology, Northwest University, Xi'an, Shaanxi, China.
College of Chemistry and Chemical Engineering, Xi'an Shiyou University, Xi'an, Shaanxi, China.
Sci Rep. 2024 Oct 6;14(1):23251. doi: 10.1038/s41598-024-73515-4.
In the diagnosis of aortic dissection (AD), synthesizing contrast-enhanced CT (CE-CT) images from non-contrast CT (NC-CT) images has recently become an important topic. Existing methods have achieved some results but cannot synthesize a continuous, clear intimal flap from NC-CT images. In this paper, we propose a multi-stage cascade generative adversarial network (MCGAN) that explicitly captures the features of the intimal flap for better synthesis of aortic dissection images. Because the intimal flap has variable shapes and fine details, we extract features in two ways: dense residual attention blocks (DRAB) extract shallow features, and a UNet extracts deep features; the deep and shallow features are then cascaded and fused. For incomplete flaps or missing details, we use spatial attention and channel attention to extract key features and locations, and multi-scale fusion to preserve the continuity of the intimal flap. We evaluate the method on a set of 124 patients (62 with AD and 62 without AD). The evaluation results show that the synthesized images share the characteristics of the real images and achieve better results than popular methods.
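As a rough illustration only, the cascade-and-attend step described in the abstract (channel-wise fusion of DRAB shallow features with UNet deep features, followed by channel and spatial attention) might be sketched as below. The abstract does not specify the exact gating functions, so every function name, shape, and the sigmoid gating here is a hypothetical simplification, not the paper's implementation:

```python
import numpy as np

def channel_attention(feat):
    """Gate each channel by the sigmoid of its global average response
    (simplified stand-in for the paper's channel attention)."""
    pooled = feat.mean(axis=(1, 2))            # (C,) global average pool
    weights = 1.0 / (1.0 + np.exp(-pooled))    # sigmoid gate per channel
    return feat * weights[:, None, None]

def spatial_attention(feat):
    """Gate each location by the sigmoid of its cross-channel mean
    response (simplified stand-in for the paper's spatial attention)."""
    pooled = feat.mean(axis=0)                 # (H, W) cross-channel pool
    weights = 1.0 / (1.0 + np.exp(-pooled))    # sigmoid gate per location
    return feat * weights[None, :, :]

def cascade_fuse(shallow, deep):
    """Cascade shallow (DRAB-style) and deep (UNet-style) features along
    the channel axis, then apply channel and spatial attention to
    emphasize flap-related responses."""
    fused = np.concatenate([shallow, deep], axis=0)  # channel-wise cascade
    return spatial_attention(channel_attention(fused))

# Toy feature maps standing in for the two extraction branches.
shallow = np.random.rand(8, 32, 32)   # hypothetical DRAB output
deep = np.random.rand(8, 32, 32)      # hypothetical UNet output
out = cascade_fuse(shallow, deep)
print(out.shape)  # (16, 32, 32): channels doubled by the cascade
```

The sketch only shows the data flow: both attention gates are multiplicative, so the fused map keeps its spatial size while channel count doubles, matching the idea of combining two feature branches before attention.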