Sgarbossa Damiano, Malbranke Cyril, Bitbol Anne-Florence
Institute of Bioengineering, School of Life Sciences, École Polytechnique Fédérale de Lausanne (EPFL), CH-1015 Lausanne, Switzerland.
SIB Swiss Institute of Bioinformatics, CH-1015 Lausanne, Switzerland.
Bioinformatics. 2025 Jun 2;41(6). doi: 10.1093/bioinformatics/btaf348.
Protein language models are enabling advances in elucidating the sequence-to-function mapping, and have important applications in protein design. Models based on multiple sequence alignments efficiently capture the evolutionary information in homologous protein sequences, but multiple sequence alignment construction is imperfect.
We present ProtMamba, a homology-aware but alignment-free protein language model based on the Mamba architecture. In contrast with attention-based models, ProtMamba efficiently handles very long contexts comprising hundreds of protein sequences, and it is computationally efficient. We train ProtMamba on a large dataset of concatenated homologous sequences, using two GPUs. We combine autoregressive modeling and masked language modeling through a fill-in-the-middle training objective, which makes the model well suited to various protein design applications. We demonstrate ProtMamba's usefulness for sequence generation, motif inpainting, fitness prediction, and modeling intrinsically disordered regions. For homolog-conditioned sequence generation, ProtMamba outperforms state-of-the-art models. ProtMamba's competitive performance, despite its relatively small size, sheds light on the importance of long-context conditioning.
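The fill-in-the-middle objective mentioned above combines autoregressive and masked modeling by moving a masked span to the end of the sequence, so a left-to-right model learns to generate it conditioned on context from both sides. The sketch below is a minimal, hypothetical illustration of this transformation on an amino-acid string; the sentinel token names and single-span setup are assumptions for illustration, not ProtMamba's actual tokenization.

```python
def fim_transform(seq: str, start: int, end: int,
                  mask_tok: str = "<mask1>", sep_tok: str = "<eos>") -> str:
    """Fill-in-the-middle reformatting (illustrative sketch).

    The span seq[start:end] is replaced by a mask sentinel in place,
    and appended after a separator, so an autoregressive model sees
    both prefix and suffix before predicting the masked middle.
    """
    prefix, middle, suffix = seq[:start], seq[start:end], seq[end:]
    return prefix + mask_tok + suffix + sep_tok + mask_tok + middle


# Example: mask residues 3..5 of a short protein fragment.
print(fim_transform("MKTAYIAKQR", 3, 6))
# → MKT<mask1>AKQR<eos><mask1>AYI
```

Training on such reordered sequences lets a single decoder-only model serve both generation (autoregressive) and inpainting (mask-conditioned) use cases.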
A Python implementation of ProtMamba is freely available in our GitHub repository: https://github.com/Bitbol-Lab/ProtMamba-ssm and archived at https://doi.org/10.5281/zenodo.15584634.