
Towards an Informed Choice of Diffusion MRI Image Contrasts for Cerebellar Segmentation.

Author Information

Legarreta Jon Haitz, Lan Zhou, Chen Yuqian, Zhang Fan, Yeterian Edward H, Makris Nikos, Rushmore Richard J, Rathi Yogesh, O'Donnell Lauren J

Affiliations

Department of Radiology, Brigham and Women's Hospital, Mass General Brigham, Boston, Massachusetts, USA.

Harvard Medical School, Boston, Massachusetts, USA.

Publication Information

Hum Brain Mapp. 2025 Aug 1;46(11):e70317. doi: 10.1002/hbm.70317.

Abstract

The fine-grained segmentation of cerebellar structures is an essential step towards supplying increasingly accurate, anatomically informed analyses, including, for example, white matter diffusion magnetic resonance imaging (MRI) tractography. Cerebellar tissue segmentation is typically performed on structural MRI data, such as T1-weighted data, while connectivity between segmented regions is mapped using diffusion MRI tractography data. Small deviations in structural-to-diffusion MRI data co-registration may negatively impact connectivity analyses. Reliable segmentation of brain tissue performed directly on diffusion MRI data helps to circumvent such inaccuracies. Diffusion MRI enables the computation of many image contrasts, including a variety of tissue microstructure maps. While multiple methods have been proposed for the segmentation of cerebellar structures using diffusion MRI, little attention has been paid to the systematic evaluation of the performance of different available input image contrasts for the segmentation task. In this work, we evaluate and compare the segmentation performance of diffusion MRI-derived contrasts on the cerebellar segmentation task. Specifically, we include spherical mean (diffusion-weighted image average) and b0 (non-diffusion-weighted image average) contrasts, local signal parameterization contrasts (diffusion tensor and kurtosis fit maps), and the structural T1-weighted MRI contrast that is most commonly employed for the task. We train a popular deep-learning architecture using a publicly available dataset (HCP-YA) on a set of cerebellar white and gray matter region labels obtained from the atlas-based SUIT cerebellar segmentation pipeline, which employs T1-weighted data. By training and testing using many diffusion MRI-derived image inputs, we find that the spherical mean image computed from b = 1000 s/mm² shell data provides stable performance across different metrics and significantly outperforms the tissue microstructure contrasts that are traditionally used in machine learning segmentation methods for diffusion MRI.
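The spherical mean contrast highlighted in the abstract is simply the per-voxel average of the diffusion-weighted volumes acquired on a given shell, and the b0 contrast is the average of the non-diffusion-weighted volumes. The following minimal Python sketch (using nibabel, NumPy, and DIPY's bvals/bvecs reader) illustrates one way such images could be computed; the file names, the b = 1000 s/mm² shell tolerance, and the output names are assumptions for illustration only, not details taken from the paper.

```python
# Minimal sketch: compute a b=1000 s/mm^2 spherical mean image and a b0 average
# from a single-subject DWI series. File names and tolerances are illustrative.
import numpy as np
import nibabel as nib
from dipy.io.gradients import read_bvals_bvecs

dwi_img = nib.load("dwi.nii.gz")                     # 4D DWI data (x, y, z, volumes)
dwi = dwi_img.get_fdata()
bvals, _ = read_bvals_bvecs("dwi.bval", "dwi.bvec")  # one b-value per volume

# Volumes on the b=1000 shell (tolerance absorbs scanner rounding of b-values)
shell = np.abs(bvals - 1000) < 100
spherical_mean = dwi[..., shell].mean(axis=-1)       # average over gradient directions

# Non-diffusion-weighted (b ~ 0) volumes
b0 = bvals < 50
b0_mean = dwi[..., b0].mean(axis=-1)

nib.save(nib.Nifti1Image(spherical_mean.astype(np.float32), dwi_img.affine),
         "spherical_mean_b1000.nii.gz")
nib.save(nib.Nifti1Image(b0_mean.astype(np.float32), dwi_img.affine),
         "b0_mean.nii.gz")
```

Because the spherical mean averages over all gradient directions on the shell, the resulting contrast does not depend on local fiber orientation, which helps explain why it can serve as a stable input for a segmentation network.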

