
Pretraining Improves Prediction of Genomic Datasets Across Species.

Authors

Huang Fangrui, Wang Yitong, Song Janet, Cutkosky Ashok

Affiliations

Stanford University.

Boston University.

Publication

bioRxiv. 2025 Aug 24:2025.08.20.671362. doi: 10.1101/2025.08.20.671362.

Abstract

Recent studies suggest that deep neural network models trained on thousands of human genomic datasets can accurately predict genomic features, including gene expression and chromatin accessibility. However, training these models is computation- and time-intensive, and datasets of comparable size do not exist for most other organisms. Here, we identify modifications to an existing state-of-the-art model that improve model accuracy while reducing training time and computational cost. Using this streamlined model architecture, we investigate the ability of models pretrained on human genomic datasets to transfer performance to a variety of different tasks. Models pretrained on human data but fine-tuned on genomic datasets from diverse tissues and species achieved significantly higher prediction accuracy while significantly reducing training time compared to models trained from scratch, with Pearson correlation coefficients between experimental results and predictions as high as 0.8. Further, we found that including excessive training tasks decreased model performance and that this compromised performance could be partially but not completely rescued by fine-tuning. Thus, simplifying model architecture, applying pretrained models, and carefully considering the number of training tasks may be effective and economical techniques for building new models across data types, tissues, and species.
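The abstract reports transfer performance as the Pearson correlation coefficient between experimental measurements and model predictions. A minimal sketch of that metric on synthetic data (the function name and all values below are illustrative, not from the paper; a noise scale of 0.75 gives a theoretical correlation of about 0.8, the ceiling the abstract reports):

```python
import numpy as np

def pearson_r(y_true, y_pred):
    """Pearson correlation coefficient between two equal-length vectors."""
    yt = np.asarray(y_true, dtype=float)
    yp = np.asarray(y_pred, dtype=float)
    yt = yt - yt.mean()
    yp = yp - yp.mean()
    return float((yt @ yp) / (np.linalg.norm(yt) * np.linalg.norm(yp)))

# Synthetic stand-in: "predictions" that track an "experimental" signal
# with additive Gaussian noise (scale chosen so r is near 0.8).
rng = np.random.default_rng(0)
experimental = rng.normal(size=1000)
predicted = experimental + rng.normal(scale=0.75, size=1000)
r = pearson_r(experimental, predicted)
print(f"Pearson r = {r:.2f}")
```

In practice the same quantity is available as `numpy.corrcoef(y_true, y_pred)[0, 1]` or `scipy.stats.pearsonr`; the explicit form above just makes the normalization visible.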


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/48f0/12393552/836d022a2643/nihpp-2025.08.20.671362v1-f0001.jpg
