Department of Electrical Engineering and Computer Science, Christopher S. Bond Life Sciences Center, University of Missouri, Columbia, Missouri 65211, USA.
Genome Res. 2024 Oct 11;34(9):1445-1454. doi: 10.1101/gr.279132.124.
Signal peptides (SPs) play a crucial role in protein translocation in cells. The development of large protein language models (PLMs) and prompt-based learning provides a new opportunity for SP prediction, especially for categories with limited annotated data. We present a parameter-efficient fine-tuning (PEFT) framework for SP prediction, PEFT-SP, to effectively utilize pretrained PLMs. We integrated low-rank adaptation (LoRA) into ESM-2 models to better leverage the protein sequence evolutionary knowledge encoded in PLMs. Experiments show that PEFT-SP using LoRA enhances state-of-the-art results, yielding a maximum Matthews correlation coefficient (MCC) gain of 87.3% for SPs with small training samples and an overall MCC gain of 6.1%. Furthermore, we also employed two other PEFT methods, prompt tuning and adapter tuning, in ESM-2 for SP prediction. More elaborate experiments show that PEFT-SP using adapter tuning can also improve on the state of the art, with up to a 28.1% MCC gain for SPs with small training samples and an overall MCC gain of 3.8%. LoRA requires fewer computing resources and less memory than adapter tuning during the training stage, making it possible to adapt larger and more powerful protein models for SP prediction.
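The core idea of LoRA — adding a trainable low-rank update to a frozen pretrained weight — can be illustrated with a minimal NumPy sketch. This is a hypothetical standalone layer with made-up dimensions, not the authors' implementation; in PEFT-SP the same decomposition is applied inside ESM-2's transformer weight matrices.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r = 64, 64, 8   # hypothetical dimensions; LoRA rank r << d
alpha = 16                   # LoRA scaling factor (assumed value)

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable, small Gaussian init
B = np.zeros((d_out, r))                    # trainable, zero init

def lora_forward(x):
    # y = W x + (alpha / r) * B (A x); only A and B are updated in training
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# Because B is zero-initialized, the adapted layer starts out exactly
# equal to the frozen base model's layer.
assert np.allclose(lora_forward(x), W @ x)

# Trainable parameters: r*(d_in + d_out) for LoRA vs. d_in*d_out for
# full fine-tuning of this layer.
print(r * (d_in + d_out), "vs", d_in * d_out)
```

The zero initialization of `B` is what makes LoRA safe to attach to a pretrained model: training begins from the unmodified PLM, and only the low-rank factors (here 1,024 parameters instead of 4,096) are optimized, which is why it needs less memory than adapter tuning.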