
A new low-rank adaptation method for brain structure and metastasis segmentation via decoupled principal weight direction and magnitude.

Author information

Zhu Hancan, Yang Hongxia, Wang Yaqing, Hu Keli, He Guanghua, Zhou Jia, Li Zhong

Affiliations

School of Mathematics, Physics and Information, Shaoxing University, 900 ChengNan Rd, Shaoxing, 312000, Zhejiang, China.

The Affiliated Hospital of Shaoxing University, Shaoxing, 312000, Zhejiang, China.

Publication information

Sci Rep. 2025 Jul 28;15(1):27388. doi: 10.1038/s41598-025-11632-4.

Abstract

Deep learning techniques have become pivotal in medical image segmentation, but their success often relies on large, manually annotated datasets, which are expensive and labor-intensive to obtain. Additionally, different segmentation tasks frequently require retraining models from scratch, resulting in substantial computational costs. To address these limitations, we propose PDoRA, an innovative parameter-efficient fine-tuning method that leverages knowledge transfer from a pre-trained SwinUNETR model for a wide range of brain image segmentation tasks. PDoRA minimizes the reliance on extensive data annotation and computational resources by decomposing model weights into principal and residual weights. The principal weights are further divided into magnitude and direction, enabling independent fine-tuning to enhance the model's ability to capture task-specific features. The residual weights remain fixed and are later fused with the updated principal weights, ensuring model stability while enhancing performance. We evaluated PDoRA on three diverse medical image datasets for brain structure and metastasis segmentation. The results demonstrate that PDoRA consistently outperforms existing parameter-efficient fine-tuning methods, achieving superior segmentation accuracy and efficiency. Our code is available at https://github.com/Perfect199001/PDoRA/tree/main.
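The decomposition the abstract describes can be sketched in a few lines. The snippet below is an illustrative NumPy sketch, not the authors' implementation: it assumes the principal weights are obtained as a rank-k SVD truncation of the pre-trained weight matrix, with the principal part then split DoRA-style into per-column magnitude and unit-direction factors, while the remainder is kept frozen as the residual. The function names `pdora_decompose` and `pdora_merge` are hypothetical.

```python
import numpy as np

def pdora_decompose(W, k):
    """Split a pre-trained weight matrix W into a trainable principal part
    (decoupled into magnitude and direction) and a frozen residual part.
    Sketch only: assumes the principal weights are the rank-k SVD truncation."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    W_p = U[:, :k] @ np.diag(S[:k]) @ Vt[:k, :]     # principal weights (fine-tuned)
    W_r = W - W_p                                    # residual weights (kept fixed)
    m = np.linalg.norm(W_p, axis=0, keepdims=True)   # magnitude: per-column norm
    V = W_p / (m + 1e-8)                             # direction: unit-norm columns
    return m, V, W_r

def pdora_merge(m, V, W_r):
    """Fuse the (possibly updated) magnitude and direction back with the
    frozen residual to form the full weight matrix used at inference."""
    return m * V + W_r
```

With `m` and `V` left unchanged, `pdora_merge` reconstructs the original `W` up to numerical precision; fine-tuning then updates `m` and `V` independently while `W_r` stays fixed, which is what keeps the trainable parameter count low.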


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/246f/12304152/aa1a76fb7944/41598_2025_11632_Fig1_HTML.jpg
