Microsoft Research New England, Cambridge, MA 02139, USA.
Cell Syst. 2024 Mar 20;15(3):286-294.e2. doi: 10.1016/j.cels.2024.01.008. Epub 2024 Feb 29.
Pretrained protein sequence language models have been shown to improve the performance of many prediction tasks and are now routinely integrated into bioinformatics tools. However, these models largely rely on the transformer architecture, which scales quadratically with sequence length in both run-time and memory. Therefore, state-of-the-art models have limitations on sequence length. To address this limitation, we investigated whether convolutional neural network (CNN) architectures, which scale linearly with sequence length, could be as effective as transformers in protein language models. With masked language model pretraining, CNNs are competitive with, and occasionally superior to, transformers across downstream applications while maintaining strong performance on sequences longer than those allowed in the current state-of-the-art transformer models. Our work suggests that computational efficiency can be improved without sacrificing performance, simply by using a CNN architecture instead of a transformer, and emphasizes the importance of disentangling pretraining task and model architecture. A record of this paper's transparent peer review process is included in the supplemental information.
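To make the scaling contrast concrete, below is a minimal, hypothetical PyTorch sketch of a masked-language-model pretraining step with a dilated 1-D CNN over amino-acid tokens. It illustrates the general approach described in the abstract only; the model, vocabulary, and hyperparameters here are illustrative assumptions, not the authors' implementation. Dilated convolutions let the receptive field grow exponentially with depth while per-layer cost stays linear in sequence length, in contrast to the quadratic cost of self-attention.

    import torch
    import torch.nn as nn

    # Toy vocabulary: 20 amino acids plus padding and mask tokens (assumed).
    AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
    PAD, MASK = 0, 1
    VOCAB = {aa: i + 2 for i, aa in enumerate(AMINO_ACIDS)}
    VOCAB_SIZE = len(AMINO_ACIDS) + 2

    class DilatedConvLM(nn.Module):
        """Masked language model built from dilated 1-D convolutions;
        run time and memory grow linearly with sequence length."""
        def __init__(self, d_model=128, n_layers=6, kernel_size=5):
            super().__init__()
            self.embed = nn.Embedding(VOCAB_SIZE, d_model, padding_idx=PAD)
            layers = []
            for i in range(n_layers):
                dilation = 2 ** i  # receptive field doubles per layer
                layers += [
                    nn.Conv1d(d_model, d_model, kernel_size,
                              padding=dilation * (kernel_size - 1) // 2,
                              dilation=dilation),
                    nn.GELU(),
                ]
            self.conv = nn.Sequential(*layers)
            self.head = nn.Linear(d_model, VOCAB_SIZE)

        def forward(self, tokens):
            x = self.embed(tokens).transpose(1, 2)  # (B, d_model, L)
            x = self.conv(x).transpose(1, 2)        # (B, L, d_model)
            return self.head(x)                     # per-position logits

    def mask_tokens(tokens, p=0.15):
        """BERT-style masking: hide ~15% of positions and score
        the model only on reconstructing those positions."""
        masked = tokens.clone()
        labels = torch.full_like(tokens, -100)  # -100 ignored by CE loss
        sel = (torch.rand_like(tokens, dtype=torch.float) < p) & (tokens != PAD)
        masked[sel] = MASK
        labels[sel] = tokens[sel]
        return masked, labels

    if __name__ == "__main__":
        seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"
        tokens = torch.tensor([[VOCAB[aa] for aa in seq]])
        model = DilatedConvLM()
        masked, labels = mask_tokens(tokens)
        logits = model(masked)
        loss = nn.functional.cross_entropy(
            logits.view(-1, VOCAB_SIZE), labels.view(-1))
        print(f"masked-LM loss: {loss.item():.3f}")

Because every layer is a convolution, this sketch accepts arbitrarily long inputs at O(L) cost, whereas a transformer encoder of the same depth would require O(L^2) attention computation and memory.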