Daoud Ahmed, Ben-Hur Asa
Department of Computer Science, Colorado State University, Fort Collins, Colorado, United States of America.
PLoS Comput Biol. 2025 Jan 10;21(1):e1012755. doi: 10.1371/journal.pcbi.1012755. eCollection 2025 Jan.
Complex deep learning models trained on very large datasets have become key enabling tools for current research in natural language processing and computer vision. By providing pre-trained models that can be fine-tuned for specific applications, they enable researchers to create accurate models with minimal effort and computational resources. Large-scale genomics deep learning models come in two flavors: the first are large language models of DNA sequences trained in a self-supervised fashion, similar to the corresponding natural language models; the second are supervised learning models that leverage large-scale genomics datasets from ENCODE and other sources. We argue that these models are the equivalent of foundation models in natural language processing in their utility: they encode within them chromatin state in its different aspects, providing useful representations that allow quick deployment of accurate models of gene regulation. We demonstrate this premise by leveraging the recently created Sei model to develop simple, interpretable models of intron retention, and show their advantage over models based on the DNA language model DNABERT-2. Our work also demonstrates the impact of chromatin state on the regulation of intron retention. Using representations learned by Sei, our model discovers the involvement of transcription factors and chromatin marks in regulating intron retention, achieving better accuracy than a recently published custom model developed for this purpose.
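To make the modeling strategy concrete, here is a minimal sketch (not the authors' code) of the approach the abstract describes: treat the outputs of a pre-trained genomics model as fixed features and fit a simple, interpretable classifier of intron retention on top of them. The function load_sei_features is a hypothetical placeholder; the real Sei framework has its own interface for computing chromatin-profile predictions, and the sequences and labels here are synthetic stand-ins.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

def load_sei_features(intron_sequences):
    """Placeholder: return one feature vector per intron, e.g. Sei's
    chromatin-profile predictions for the sequence window around it.
    Here we substitute random values so the sketch runs end to end."""
    rng = np.random.default_rng(0)
    return rng.normal(size=(len(intron_sequences), 21907))

# Hypothetical data: sequence windows around introns, with labels
# marking retained (1) vs. spliced-out (0) introns.
introns = ["ACGT..."] * 200
labels = np.array([0, 1] * 100)

X = load_sei_features(introns)
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)

# An L1-penalized linear model keeps the classifier simple and
# interpretable: the nonzero coefficients point at the chromatin
# profiles (transcription factors, histone marks) most strongly
# associated with intron retention.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
clf.fit(X_tr, y_tr)

print("test AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
top = np.argsort(-np.abs(clf.coef_[0]))[:10]
print("most informative Sei profile indices:", top)

The same scaffold accepts embeddings from a DNA language model such as DNABERT-2 in place of the Sei features, which is one way to set up the comparison between the two kinds of pre-trained models.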