Zhang Ruiyi, Somayajula Sai Ashish, Xie Pengtao
UC San Diego.
Transactions on Machine Learning Research, June 2025.
Large-scale general domain pretraining followed by downstream-specific finetuning has become a predominant paradigm in machine learning. However, discrepancies between the pretraining and target domains can still lead to performance degradation in certain cases, underscoring the need for task-adaptive continued pretraining (TAP). TAP methods typically involve continued pretraining on task-specific unlabeled datasets or introducing additional unsupervised learning objectives to enhance model capabilities. While many TAP methods perform continued pretraining with multiple pretraining objectives, they often determine the tradeoff parameters between objectives manually, resulting in suboptimal outcomes and higher computational costs. In this paper, we propose TapWeight, a task-adaptive pretraining framework which automatically determines the optimal importance of each pretraining objective based on downstream feedback. TapWeight reweights each pretraining objective by solving a multi-level optimization problem. We applied TapWeight to both molecular property prediction and natural language processing tasks, significantly surpassing baseline methods. Experimental results validate the effectiveness and generalizability of TapWeight. Our code is available at https://github.com/ruz048/TapWeight.
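The core idea of reweighting pretraining objectives by downstream feedback can be illustrated with a toy sketch. This is not the paper's algorithm: the quadratic "pretraining objectives", the single downstream validation loss, and the finite-difference outer update are all simplified stand-ins for the multi-level optimization described above.

```python
import numpy as np

# Toy sketch: learn tradeoff weights over two synthetic pretraining
# objectives so that the pretrained parameters minimize a downstream
# validation loss. All losses here are illustrative assumptions.

rng = np.random.default_rng(0)

# Two synthetic "pretraining objectives": each pulls theta toward an anchor.
anchors = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]

def pretrain_grads(theta):
    # Gradients of 0.5 * ||theta - anchor||^2 for each objective.
    return [theta - a for a in anchors]

# "Downstream" validation loss: the task prefers theta near [1, 0],
# i.e. it is aligned with the first pretraining objective.
target = np.array([1.0, 0.0])

def downstream_loss(theta):
    return 0.5 * np.sum((theta - target) ** 2)

def inner_pretrain(weights, theta, steps=50, lr=0.1):
    # Inner level: gradient descent on the weighted sum of objectives.
    for _ in range(steps):
        g = sum(w * gr for w, gr in zip(weights, pretrain_grads(theta)))
        theta = theta - lr * g
    return theta

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Outer level: adjust objective weights (via logits) to reduce the
# downstream loss, using finite differences as a cheap stand-in for
# the gradient-based multi-level optimization in the paper.
logits = np.zeros(2)
theta0 = rng.normal(size=2)
eps, outer_lr = 1e-3, 5.0
for _ in range(30):
    base = downstream_loss(inner_pretrain(softmax(logits), theta0))
    grad_logits = np.zeros_like(logits)
    for i in range(2):
        pert = logits.copy()
        pert[i] += eps
        loss_i = downstream_loss(inner_pretrain(softmax(pert), theta0))
        grad_logits[i] = (loss_i - base) / eps
    logits -= outer_lr * grad_logits

weights = softmax(logits)
# The weight on the objective aligned with the downstream task dominates.
```

In this sketch the outer loop discovers that the first objective is the one useful for the downstream task and shifts almost all weight onto it; the paper replaces the brute-force finite differences with an efficient multi-level optimization over many real pretraining objectives.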