
A neural network to create super-resolution MR from multiple 2D brain scans of pediatric patients.

Author information

Benitez-Aurioles Jose, Osorio Eliana M Vásquez, Aznar Marianne C, Van Herk Marcel, Pan Shermaine, Sitch Peter, France Anna, Smith Ed, Davey Angela

Affiliations

Division of Informatics, Imaging and Data Sciences, University of Manchester, Manchester, UK.

Radiotherapy-Related Research Group, Division of Cancer Sciences, School of Medical Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester, UK.

Publication information

Med Phys. 2025 Mar;52(3):1693-1705. doi: 10.1002/mp.17563. Epub 2024 Dec 10.

Abstract

BACKGROUND

High-resolution (HR) 3D MR images provide detailed soft-tissue information that is useful in assessing long-term side-effects after treatment in childhood cancer survivors, such as morphological changes in brain structures. However, these images require long acquisition times, so routinely acquired follow-up images after treatment often consist of 2D low-resolution (LR) images (with thick slices in multiple planes).

PURPOSE

In this work, we present a super-resolution convolutional neural network, built on previous single-image MRI super-resolution work, that reconstructs an HR image from 2D LR slices acquired in multiple planes, in order to facilitate the extraction of structural biomarkers from routine scans.

METHODS

A multilevel densely connected super-resolution convolutional neural network (mDCSRN) was adapted to take two perpendicular LR scans (e.g., coronal and axial) as tensors and reconstruct a 3D HR image. A training set of 90 HR T1 pediatric head scans from the Adolescent Brain Cognitive Development (ABCD) study was used, with 2D LR images simulated through a downsampling pipeline that introduces motion artifacts, blurring, and registration errors to make the simulated LR scans more representative of routinely acquired ones. The model outputs were compared against simple interpolation in two steps. First, the quality of the reconstructed HR images was assessed using the peak signal-to-noise ratio and structural similarity index relative to the baseline HR images. Second, the precision of structure segmentation (using the autocontouring software Limbus AI) in the reconstructed versus the baseline HR images was assessed using the mean distance-to-agreement (mDTA) and the 95% Hausdorff distance. Three datasets were used: 10 new ABCD images (dataset 1), 18 images from the Children's Brain Tumor Network (CBTN) study (dataset 2), and 6 "real-world" follow-up images of a pediatric head and neck cancer patient (dataset 3).
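To make the evaluation setup concrete, the core of such a pipeline can be sketched as follows. This is a minimal illustration, not the authors' code: thick slices are approximated by block-averaging consecutive HR slices (the paper's pipeline additionally injects motion artifacts, blurring, and registration errors, omitted here), the "simple interpolation" baseline is stood in for by nearest-neighbour slice repetition, and all function names are invented for this sketch.

```python
import numpy as np

def simulate_thick_slices(hr, axis=0, factor=4):
    """Simulate a thick-slice 2D LR scan by block-averaging groups of
    `factor` consecutive HR slices along `axis` (a crude slice-profile
    model). Trailing slices that do not fill a block are dropped."""
    hr = np.moveaxis(hr, axis, 0)
    n = (hr.shape[0] // factor) * factor
    lr = hr[:n].reshape(n // factor, factor, *hr.shape[1:]).mean(axis=1)
    return np.moveaxis(lr, 0, axis)

def psnr(reference, test):
    """Peak signal-to-noise ratio (dB) of `test` against `reference`."""
    data_range = reference.max() - reference.min()
    mse = np.mean((reference - test) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

# Degrade a synthetic HR volume along the slice axis, then restore it
# by slice repetition (a stand-in for simple interpolation) and score it.
rng = np.random.default_rng(0)
hr = rng.random((32, 16, 16))
lr = simulate_thick_slices(hr, axis=0, factor=4)   # shape (8, 16, 16)
restored = np.repeat(lr, 4, axis=0)                # shape (32, 16, 16)
score = psnr(hr, restored)
```

In the paper this interpolation-style baseline is what the mDCSRN is compared against; a learned model would replace the `np.repeat` step with a network mapping two perpendicular LR tensors to one HR volume.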

RESULTS

The proposed mDCSRN outperformed simple interpolation in visual quality. Similarly, structure segmentations were closer to those on the baseline images after 3D reconstruction. The mDTA improved from the interpolation performance of 6.5 (95% confidence interval 3.6-9.5) mm and 1.2 (1.0-1.3) mm to, on average, 0.7 (0.4-1.0) mm and 0.8 (0.7-0.9) mm for datasets 1 and 3, respectively.
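For orientation, the two segmentation-agreement metrics reported above can be sketched on point-sampled contours as follows. This is a minimal sketch, not Limbus AI's or the authors' implementation: it assumes contours are given as small 3D point clouds (real tools typically measure distances on mesh surfaces), and the function names are invented here.

```python
import numpy as np

def surface_distances(a, b):
    """For each point in `a` (N x 3), the Euclidean distance to the
    nearest point in `b` (M x 3). Brute force; fine for small contours."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min(axis=1)

def mdta(a, b):
    """Mean distance-to-agreement, symmetrised over both directions."""
    return 0.5 * (surface_distances(a, b).mean()
                  + surface_distances(b, a).mean())

def hd95(a, b):
    """95th-percentile Hausdorff distance, pooling both directions
    (more robust to single outlier points than the plain maximum)."""
    d = np.concatenate([surface_distances(a, b), surface_distances(b, a)])
    return np.percentile(d, 95)

# Two toy contours offset by 1 mm along z: both metrics equal 1 mm.
a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
b = a + np.array([0.0, 0.0, 1.0])
```

Lower values mean the contours drawn on the reconstructed image agree more closely with those drawn on the baseline HR image, which is how the improvement from 6.5 mm to 0.7 mm on dataset 1 should be read.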

CONCLUSIONS

We demonstrate that deep learning methods can successfully reconstruct 3D HR images from 2D LR ones, potentially unlocking datasets for retrospective study and advancing research in the long-term effects of pediatric cancer. Our model outperforms standard interpolation, both in perceptual quality and for autocontouring. Further work is needed to validate it for additional structural analysis tasks.

Figure: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d73d/11880662/68c8a8621b14/MP-52-1693-g004.jpg
