

Supervised fine-tuning of pre-trained antibody language models improves antigen specificity prediction.

Author Information

Wang Meng, Patsenker Jonathan, Li Henry, Kluger Yuval, Kleinstein Steven H

Affiliation Information

Program in Computational Biology and Bioinformatics, Yale University, New Haven, Connecticut, United States of America.

Program in Applied Mathematics, Yale University, New Haven, Connecticut, United States of America.

Publication Information

bioRxiv. 2024 May 13:2024.05.13.593807. doi: 10.1101/2024.05.13.593807.

Abstract

Antibodies play a crucial role in adaptive immune responses by determining B cell specificity to antigens and focusing immune function on target pathogens. Accurate prediction of antibody-antigen specificity directly from antibody sequencing data would be a great aid in understanding immune responses, guiding vaccine design, and developing antibody-based therapeutics. In this study, we present a method of supervised fine-tuning for antibody language models, which improves on previous results in binding specificity prediction to SARS-CoV-2 spike protein and influenza hemagglutinin. We perform supervised fine-tuning on four pre-trained antibody language models to predict specificity to these antigens and demonstrate that fine-tuned language model classifiers exhibit enhanced predictive accuracy compared to classifiers trained on pre-trained model embeddings. Changes in model attention activations after supervised fine-tuning suggest that this improvement was driven by an increased model focus on the complementarity determining regions (CDRs). Application of the supervised fine-tuned models to BCR repertoire data demonstrated that these models could recognize the specific responses elicited by influenza and SARS-CoV-2 vaccination. Overall, our study highlights the benefits of supervised fine-tuning on pre-trained antibody language models as a mechanism to improve antigen specificity prediction.
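To make the comparison in the abstract concrete, the sketch below contrasts the two strategies described: training a classifier on frozen pre-trained embeddings versus supervised fine-tuning of the full language model with a classification head. This is a minimal illustration only; the checkpoint name, sequences, labels, pooling choice, and hyperparameters are placeholders and do not correspond to the four antibody language models or training setup used in the study.

```python
# Minimal sketch of the two strategies compared in the abstract:
#  (1) a classifier trained on frozen pre-trained embeddings, and
#  (2) supervised fine-tuning of the full backbone with a classification head.
# Checkpoint name, data, and hyperparameters are illustrative placeholders.
import torch
from torch import nn
from transformers import AutoTokenizer, AutoModel

CHECKPOINT = "some-org/antibody-lm"  # hypothetical pre-trained antibody language model

tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
backbone = AutoModel.from_pretrained(CHECKPOINT)


class SpecificityClassifier(nn.Module):
    """Binary binder / non-binder head on a mean-pooled sequence embedding."""

    def __init__(self, backbone, hidden_size, freeze_backbone=False):
        super().__init__()
        self.backbone = backbone
        self.head = nn.Linear(hidden_size, 2)
        if freeze_backbone:  # strategy (1): use the pre-trained embeddings as-is
            for p in self.backbone.parameters():
                p.requires_grad = False

    def forward(self, input_ids, attention_mask):
        out = self.backbone(input_ids=input_ids, attention_mask=attention_mask)
        mask = attention_mask.unsqueeze(-1).float()
        pooled = (out.last_hidden_state * mask).sum(1) / mask.sum(1)  # masked mean pool
        return self.head(pooled)


# Toy heavy-chain fragments with placeholder antigen-specificity labels.
seqs = ["EVQLVESGGGLVQPGGSLRLSCAAS", "QVQLQQSGAELARPGASVKMSCKAS"]
labels = torch.tensor([1, 0])
batch = tokenizer(seqs, return_tensors="pt", padding=True)

# freeze_backbone=False gives strategy (2): supervised fine-tuning end to end.
model = SpecificityClassifier(backbone, backbone.config.hidden_size, freeze_backbone=False)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

logits = model(batch["input_ids"], batch["attention_mask"])
loss = nn.functional.cross_entropy(logits, labels)
loss.backward()
optimizer.step()
```

Setting freeze_backbone=True reduces the model to a linear classifier on pre-trained embeddings, which is the baseline the abstract reports being outperformed by full supervised fine-tuning.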


Graphical abstract: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f008/11118465/8519c1db31a0/nihpp-2024.05.13.593807v1-f0001.jpg
