
Vox-MMSD: Voxel-wise Multi-scale and Multi-modal Self-Distillation for Self-supervised Brain Tumor Segmentation.

Authors

Zhou Yubo, Wu Jianghao, Fu Jia, Yue Qiang, Liao Wenjun, Zhang Shichuan, Zhang Shaoting, Wang Guotai

Publication

IEEE J Biomed Health Inform. 2025 Jul 24;PP. doi: 10.1109/JBHI.2025.3592116.

Abstract

Many deep learning methods have been proposed for brain tumor segmentation from multi-modal Magnetic Resonance Imaging (MRI) scans, which is important for accurate diagnosis and treatment planning. However, supervised learning needs a large amount of labeled data to perform well, and the time-consuming, expensive annotation process or a small training set limits model performance. To deal with these problems, self-supervised pre-training is an appealing solution, as it learns features from a set of unlabeled images that transfer to small downstream datasets. However, existing methods often overlook the utilization of multi-modal information and multi-scale features. Therefore, we propose a novel Self-Supervised Learning (SSL) framework that fully leverages multi-modal MRI scans to extract modality-invariant features for brain tumor segmentation. First, we employ a Siamese Block-wise Modality Masking (SiaBloMM) strategy that creates more diverse model inputs for image restoration to simultaneously learn contextual and modality-invariant features. Meanwhile, we propose Overlapping Random Modality Sampling (ORMS) to sample voxel pairs with multi-scale features for self-distillation, enhancing voxel-wise representations, which are important for segmentation tasks. Experiments on the BraTS 2024 adult glioma segmentation dataset showed that with a small amount of labeled data for fine-tuning, our method improved the average Dice by 3.80 percentage points. In addition, when transferred to three other small downstream datasets with brain tumors from different patient groups, our method also improved the Dice by 3.47 percentage points on average and outperformed several existing SSL methods. The code is available at https://github.com/HiLab-git/Vox-MMSD.
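The abstract describes masking modalities block by block to build two corrupted views of the same scan for a Siamese restoration pretext task. The sketch below is a minimal, hypothetical illustration of that idea; the block size, masking ratio, zero-filling choice, and function names are assumptions, not details taken from the paper or its code.

```python
# Hypothetical sketch of block-wise modality masking for a Siamese restoration
# pretext task, loosely following the SiaBloMM idea described in the abstract.
# Block size, mask ratio, and zero-filling are assumptions, not paper details.
import numpy as np

def block_modality_mask(volume, block=(16, 16, 16), mask_ratio=0.3, rng=None):
    """volume: (M, D, H, W) multi-modal MRI volume; returns a masked copy."""
    rng = rng or np.random.default_rng()
    masked = volume.copy()
    m, d, h, w = volume.shape
    for z in range(0, d, block[0]):
        for y in range(0, h, block[1]):
            for x in range(0, w, block[2]):
                for mod in range(m):
                    if rng.random() < mask_ratio:
                        # Zero out this modality within the block; a variant could
                        # instead swap in the same block from another modality.
                        masked[mod, z:z+block[0], y:y+block[1], x:x+block[2]] = 0
    return masked

# Two independently masked views of the same scan could then be fed to a Siamese
# network trained to restore the original multi-modal volume from each view.
scan = np.random.rand(4, 128, 128, 128).astype(np.float32)  # e.g. T1, T1ce, T2, FLAIR
view_a, view_b = block_modality_mask(scan), block_modality_mask(scan)
```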

