
Automatic segmentation of medial temporal lobe subregions in multi-scanner, multi-modality MRI of variable quality.

Author Information

Li Yue, Xie Long, Khandelwal Pulkit, Wisse Laura E M, Brown Christopher A, Prabhakaran Karthik, Tisdall M Dylan, Mechanic-Hamilton Dawn, Detre John A, Das Sandhitsu R, Wolk David A, Yushkevich Paul A

Affiliations

Penn Image Computing and Science Laboratory, University of Pennsylvania, Philadelphia, USA.

Department of Radiology, University of Pennsylvania, Philadelphia, USA.

Publication Information

bioRxiv. 2024 May 23:2024.05.21.595190. doi: 10.1101/2024.05.21.595190.

Abstract

BACKGROUND

Volumetry of medial temporal lobe (MTL) subregions computed from automatic segmentation of MRI can track neurodegeneration in Alzheimer's disease. However, MR image quality varies, and poor-quality images can lead to unreliable segmentation of MTL subregions. Because different MRI contrast mechanisms and field strengths (jointly referred to as "modalities" here) offer distinct advantages in imaging different parts of the MTL, we developed a multi-modality segmentation model that uses both 7 tesla (7T) and 3 tesla (3T) structural MRI to obtain robust segmentation in poor-quality images.

METHOD

MRI modalities including 3T T1-weighted, 3T T2-weighted, 7T T1-weighted, and 7T T2-weighted (7T-T2w) images of 197 participants were collected from a longitudinal aging study at the Penn Alzheimer's Disease Research Center. The 7T-T2w scan served as the primary modality, and all other modalities were rigidly registered to it. A model derived from nnU-Net took these registered modalities as input and output subregion segmentations in 7T-T2w space. To train the multi-modality model, 7T-T2w images from 25 selected training participants, most of which were of high quality, were manually segmented. Modality augmentation, which randomly replaced certain modalities with Gaussian noise, was applied during training to encourage the model to extract information from all modalities. To compare the proposed model with a baseline single-modality model on the full dataset of mixed high/poor image quality, we evaluated the ability of derived volume/thickness measures to discriminate amyloid-positive mild cognitive impairment (A+MCI) from amyloid-negative cognitively unimpaired (A-CU) groups, as well as the stability of these measurements in longitudinal data.
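The modality-augmentation step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the drop probability, the channel ordering, and the choice to never replace the primary 7T-T2w channel are assumptions made for the sketch.

```python
import numpy as np

def modality_augment(channels, drop_prob=0.5, rng=None):
    """Randomly replace whole modality channels with Gaussian noise.

    channels: array of shape (n_modalities, ...) holding the registered
    modality images stacked as input channels. Channel 0 is assumed to be
    the primary 7T-T2w image and is never replaced, so the segmentation
    target space is always present (an assumption for this sketch).
    """
    rng = np.random.default_rng() if rng is None else rng
    out = channels.astype(float).copy()
    for i in range(1, out.shape[0]):  # skip the assumed primary modality
        if rng.random() < drop_prob:
            # Replace the entire channel with unit Gaussian noise,
            # forcing the network not to rely on any single modality.
            out[i] = rng.normal(0.0, 1.0, size=out[i].shape)
    return out
```

Applied on the fly during training, this acts like channel-level dropout: the network sees inputs where any secondary modality may be missing, so it learns to exploit whichever modalities are informative in a given scan.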

RESULTS

The multi-modality model delivered good performance regardless of 7T-T2w quality, whereas the single-modality model under-segmented subregions in poor-quality images. The multi-modality model generally demonstrated stronger discrimination of A+MCI versus A-CU. Intra-class correlation and Bland-Altman plots showed that the multi-modality model had higher longitudinal segmentation consistency in all subregions, while the single-modality model had low consistency in poor-quality images.
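The intra-class correlation used to assess longitudinal consistency can be sketched as below. The specific ICC form is an assumption for illustration; this sketch computes ICC(2,1) (two-way random effects, absolute agreement) on a subjects-by-timepoints matrix of, e.g., subregion volumes.

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random-effects, absolute-agreement, single measure.

    ratings: array of shape (n_subjects, k_measurements), e.g. a subregion
    volume measured at k longitudinal timepoints for each subject.
    """
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    # Partition total sum of squares into subject, measurement, and error terms.
    ss_rows = k * ((ratings.mean(axis=1) - grand) ** 2).sum()
    ss_cols = n * ((ratings.mean(axis=0) - grand) ** 2).sum()
    ss_err = ((ratings - grand) ** 2).sum() - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)          # between-subject mean square
    ms_cols = ss_cols / (k - 1)          # between-measurement mean square
    ms_err = ss_err / ((n - 1) * (k - 1))  # residual mean square
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)
```

An ICC near 1 means repeated volume measurements of the same subject agree closely relative to between-subject variation, which is the sense in which the multi-modality model's segmentations are more longitudinally consistent.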

CONCLUSION

The multi-modality MRI segmentation model provides an improved biomarker for neurodegeneration in the MTL that is robust to image quality. It also provides a framework for other studies which may benefit from multimodal imaging.


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4522/11142184/afd83567adbc/nihpp-2024.05.21.595190v1-f0001.jpg
