Robust Segmentation of Lung Proton and Hyperpolarized Gas MRI with Vision Transformers and CNNs: A Comparative Analysis of Performance Under Artificial Noise.

Authors

Babaeipour Ramtin, Fox Matthew S, Parraga Grace, Ouriadov Alexei

Affiliations

School of Biomedical Engineering, Faculty of Engineering, The University of Western Ontario, London, ON N6A 3K7, Canada.

Department of Physics and Astronomy, The University of Western Ontario, London, ON N6A 3K7, Canada.

Publication

Bioengineering (Basel). 2025 Jul 28;12(8):808. doi: 10.3390/bioengineering12080808.

Abstract

Accurate segmentation in medical imaging is essential for disease diagnosis and monitoring, particularly in lung imaging using proton and hyperpolarized gas MRI. However, image degradation due to noise and artifacts, especially in hyperpolarized gas MRI where scans are acquired during breath-holds, poses challenges for conventional segmentation algorithms. This study evaluates the robustness of deep learning segmentation models under varying Gaussian noise levels, comparing traditional convolutional neural networks (CNNs) with modern Vision Transformer (ViT)-based models. Using a dataset of proton and hyperpolarized gas MRI slices from 56 participants, we trained and tested Feature Pyramid Network (FPN) and U-Net architectures with both CNN (VGG16, VGG19, ResNet152) and ViT (MiT-B0, B3, B5) backbones. Results showed that ViT-based models, particularly those using the SegFormer backbone, consistently outperformed CNN-based counterparts across all metrics and noise levels. The performance gap was especially pronounced in high-noise conditions, where transformer models retained higher Dice scores and lower boundary errors. These findings highlight the potential of ViT-based architectures for deployment in clinically realistic, low-SNR environments such as hyperpolarized gas MRI, where segmentation reliability is critical.
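
The sketch below illustrates the kind of comparison the abstract describes: an FPN or U-Net decoder paired with either a CNN (e.g., ResNet152) or a Mix Transformer (MiT/SegFormer) encoder, evaluated on slices corrupted by additive Gaussian noise and scored with the Dice coefficient. It is not the authors' code; the use of segmentation_models_pytorch, the noise levels, the 3-channel replication of grayscale slices, and the omission of encoder pretraining are illustrative assumptions.

```python
# Minimal sketch of a noise-robustness comparison between CNN and ViT backbones.
# Assumptions: segmentation_models_pytorch model builders, hypothetical noise
# levels, random stand-in data instead of the study's MRI slices.
import torch
import segmentation_models_pytorch as smp


def build_model(decoder: str, encoder: str) -> torch.nn.Module:
    """Binary lung-segmentation model: 'fpn' or 'unet' decoder over the named encoder."""
    arch = smp.FPN if decoder == "fpn" else smp.Unet
    # encoder_weights is left as None; the study's pretraining choice is not assumed here.
    return arch(encoder_name=encoder, encoder_weights=None, in_channels=3, classes=1)


def add_gaussian_noise(img: torch.Tensor, sigma: float) -> torch.Tensor:
    """Corrupt an intensity-normalized slice with zero-mean Gaussian noise of std sigma."""
    return img + sigma * torch.randn_like(img)


def dice_score(logits: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> float:
    """Dice coefficient between the thresholded prediction and a binary ground-truth mask."""
    pred = (logits.sigmoid() > 0.5).float()
    inter = (pred * target).sum()
    return ((2.0 * inter + eps) / (pred.sum() + target.sum() + eps)).item()


if __name__ == "__main__":
    # Stand-ins for one normalized MRI slice and its lung mask (random data, illustration only).
    slice_1ch = torch.rand(1, 1, 256, 256)
    mask = (torch.rand(1, 1, 256, 256) > 0.5).float()
    slice_3ch = slice_1ch.repeat(1, 3, 1, 1)  # replicate the grayscale slice across 3 channels

    for encoder in ("resnet152", "mit_b0"):   # one CNN and one ViT backbone named in the paper
        model = build_model("fpn", encoder).eval()
        for sigma in (0.0, 0.1, 0.2, 0.4):    # hypothetical noise levels
            with torch.no_grad():
                logits = model(add_gaussian_noise(slice_3ch, sigma))
            print(f"{encoder:10s} sigma={sigma:.1f} Dice={dice_score(logits, mask):.3f}")
```

In the study itself the models would be trained on clean or noise-augmented slices before such an evaluation; the loop above only shows how the per-backbone, per-noise-level Dice comparison could be organized.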

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/288b/12383719/ec67524f8a3a/bioengineering-12-00808-g001.jpg
