Olson Daniel, Colligan Thomas, Demekas Daphne, Roddy Jack W, Youens-Clark Ken, Wheeler Travis J
Department of Computer Science, University of Montana, Missoula, MT 59812, United States.
College of Pharmacy, University of Arizona, Tucson, AZ 85721, United States.
Bioinformatics. 2025 Jul 1;41(Supplement_1):i449-i457. doi: 10.1093/bioinformatics/btaf198.
Protein language models (PLMs) have recently demonstrated potential to supplant classical protein database search methods based on sequence alignment, but they are slower than common alignment-based tools and appear to be prone to a high rate of false labeling. Here, we present Neural Embeddings for Amino acid Relationships (NEAR), a method based on neural representation learning that is designed to improve both the speed and accuracy of search for likely homologs in a large protein sequence database. NEAR's ResNet embedding model is trained using contrastive learning guided by trusted sequence alignments. It computes per-residue embeddings for target and query protein sequences, and identifies alignment candidates with a pipeline consisting of residue-level k-NN search and a simple neighbor aggregation scheme. Tests on a benchmark consisting of trusted remote homologs and randomly shuffled decoy sequences reveal that NEAR substantially improves accuracy relative to state-of-the-art PLMs, with lower memory requirements and faster embedding and search speed. While these results suggest that the NEAR model may be useful for standalone homology detection with greater sensitivity than standard alignment-based methods, in this manuscript we focus on a more straightforward analysis of the model's value as a high-speed pre-filter for sensitive annotation. In that context, NEAR is at least 5× faster than the pre-filter of the widely used profile hidden Markov model (pHMM) search tool HMMER3, and it also outperforms the pre-filter used in our fast pHMM tool, nail.
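To make the search pipeline described in the abstract concrete, the following is a minimal sketch, not NEAR's actual implementation: it stubs the trained ResNet encoder with random unit vectors, uses brute-force cosine similarity in place of a scalable nearest-neighbor index, and assumes hit counting per target sequence as the "simple neighbor aggregation scheme." All function names and parameters (embed, search, DIM, k) are hypothetical illustrations.

```python
# Sketch of residue-level k-NN search with neighbor aggregation.
# Assumptions are labeled; see the NEAR repository for the real code.
import numpy as np
from collections import defaultdict

RNG = np.random.default_rng(0)
DIM = 256  # hypothetical per-residue embedding dimension


def embed(seq: str) -> np.ndarray:
    """Stand-in for the trained ResNet encoder: one unit-norm vector per residue."""
    emb = RNG.standard_normal((len(seq), DIM)).astype(np.float32)
    return emb / np.linalg.norm(emb, axis=1, keepdims=True)


def search(query: str, targets: dict[str, str], k: int = 8) -> list[tuple[str, int]]:
    """Rank target sequences by how many residue-level k-NN hits they receive."""
    # Flatten all target residues into one matrix, remembering each
    # residue's source sequence.
    rows, owner = [], []
    for name, seq in targets.items():
        rows.append(embed(seq))
        owner.extend([name] * len(seq))
    bank = np.vstack(rows)                      # (total_residues, DIM)

    # For each query residue, find its k most similar target residues
    # (dot product of unit vectors = cosine similarity).
    sims = embed(query) @ bank.T                # (query_len, total_residues)
    topk = np.argpartition(-sims, k, axis=1)[:, :k]

    # Assumed aggregation rule: count k-NN hits per target sequence,
    # then rank candidates by hit count.
    counts: dict[str, int] = defaultdict(int)
    for idx in topk.ravel():
        counts[owner[idx]] += 1
    return sorted(counts.items(), key=lambda kv: -kv[1])


targets = {"t1": "MKTAYIAKQR", "t2": "GAVLIFWYPS", "t3": "MKTAYLSKQE"}
print(search("MKTAYIAKQW", targets, k=3))
```

At database scale, the brute-force similarity matrix above would be replaced by an approximate nearest-neighbor index over the target residue embeddings; the repository linked below documents the actual implementation and its data curation.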
NEAR is released under an open-source license. Code and data curation instructions can be found at https://github.com/TravisWheelerLab/NEAR.