Towards Foundation Models and Few-Shot Parameter-Efficient Fine-Tuning for Volumetric Organ Segmentation

Authors

Silva-Rodríguez Julio, Dolz Jose, Ben Ayed Ismail

Affiliations

ÉTS Montréal, Québec, Canada.

ÉTS Montréal, Québec, Canada; Centre de Recherche du Centre Hospitalier de l'Université de Montréal (CR-CHUM), Québec, Canada.

Publication

Med Image Anal. 2025 May 2;103:103596. doi: 10.1016/j.media.2025.103596.

Abstract

The recent popularity of foundation models and the pre-train-and-adapt paradigm, where a large-scale model is transferred to downstream tasks, is gaining attention for volumetric medical image segmentation. However, current transfer learning strategies devoted to full fine-tuning may require significant resources and yield sub-optimal results when labeled data for the target task is scarce. This makes their applicability in real clinical settings challenging, since these institutions are usually constrained in the data and computational resources needed to develop proprietary solutions. To address this challenge, we formalize Few-Shot Efficient Fine-Tuning (FSEFT), a novel and realistic scenario for adapting medical image segmentation foundation models. This setting considers the key role of both data- and parameter-efficiency during adaptation. Building on a foundation model pre-trained on open-access CT organ segmentation sources, we propose leveraging Parameter-Efficient Fine-Tuning and black-box Adapters to address such challenges. Furthermore, novel efficient adaptation methodologies are introduced in this work, which include Spatial black-box Adapters that are more appropriate for dense prediction tasks and constrained transductive inference, leveraging task-specific prior knowledge. Our comprehensive transfer learning experiments confirm the suitability of foundation models in medical image segmentation and unveil the limitations of popular fine-tuning strategies in few-shot scenarios. The project code is available at https://github.com/jusiro/fewshot-finetuning.
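To make the two key ideas concrete, the following is a minimal, hedged sketch (with synthetic data; all variable names are illustrative and not taken from the paper's code). A black-box adapter updates only a small linear head on top of frozen foundation-model features, and a simple organ-size prior can constrain transductive inference by fixing the predicted foreground proportion:

```python
import numpy as np

# Synthetic few-shot setting: per-voxel features from a frozen
# foundation model (black-box: its weights are never touched),
# plus labels for one target organ on a small support set.
rng = np.random.default_rng(0)
n, d = 200, 16
X = rng.normal(size=(n, d))            # frozen backbone features
w_true = rng.normal(size=d)
y = (X @ w_true > 0).astype(float)     # synthetic binary organ mask

# Linear (1x1-conv-like) adapter trained as a logistic regression;
# only these d + 1 parameters are updated (parameter-efficient).
w, b = np.zeros(d), 0.0
lr = 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid probabilities
    w -= lr * (X.T @ (p - y)) / n            # gradient step on weights
    b -= lr * (p - y).mean()                 # gradient step on bias

pred = ((X @ w + b) > 0).astype(float)
accuracy = (pred == y).mean()

# Constrained transductive inference sketch: if an approximate organ
# size is known a priori (fraction of foreground voxels), choose the
# decision threshold so predictions match that proportion.
prior = y.mean()                             # assume the proportion is known
scores = X @ w + b
thresh = np.quantile(scores, 1.0 - prior)
pred_constrained = (scores > thresh).astype(float)
```

The size-prior step is only one simple instance of task-specific prior knowledge; the paper's constrained inference is more general, but the mechanism (restricting predictions to agree with anatomical priors) follows the same logic.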
