
Be aware of overfitting by hyperparameter optimization!

Authors

Tetko Igor V, van Deursen Ruud, Godin Guillaume

Affiliations

Institute of Structural Biology, Molecular Targets and Therapeutics Center, Helmholtz Munich - Deutsches Forschungszentrum für Gesundheit und Umwelt (GmbH), 85764 Neuherberg, Germany.

BIGCHEM GmbH, Valerystr. 49, 85716, Unterschleißheim, Germany.

Publication

J Cheminform. 2024 Dec 9;16(1):139. doi: 10.1186/s13321-024-00934-w.

Abstract

Hyperparameter optimization is frequently employed in machine learning. However, optimizing over a large parameter space can result in overfitted models. In recent studies on solubility prediction, the authors collected seven thermodynamic and kinetic solubility datasets from different data sources. They used state-of-the-art graph-based methods and compared models developed for each dataset using different data-cleaning protocols and hyperparameter optimization. In our study we show that, when models are compared with identical statistical measures, hyperparameter optimization did not always result in better models, possibly due to overfitting. Similar results could be obtained with pre-set hyperparameters, reducing the computational effort roughly 10,000-fold. We also extended the previous analysis by adding a representation-learning method based on Natural Language Processing of SMILES, called Transformer CNN. Across all analyzed sets, using exactly the same protocol, Transformer CNN provided better results than the graph-based methods in 26 out of 28 pairwise comparisons while requiring only a tiny fraction of the time needed by the other methods. Last but not least, we stress the importance of comparing calculation results using exactly the same statistical measures.

Scientific Contribution

We showed that models with pre-optimized hyperparameters can suffer from overfitting and that using pre-set hyperparameters yields similar performance while being four orders of magnitude faster. Transformer CNN provided significantly higher accuracy compared to the other investigated methods.
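To make the abstract's central comparison concrete, below is a minimal sketch (not the authors' code; the dataset, model, and parameter grid are hypothetical stand-ins) of the two workflows being contrasted: a model fit once with pre-set hyperparameters versus a model tuned by cross-validated grid search, with both scored by exactly the same statistical measure (RMSE) on the same held-out test set.

```python
# A minimal sketch of "pre-set vs. optimized hyperparameters, same metric".
# Synthetic data and a random-forest regressor are illustrative assumptions;
# the paper itself uses solubility datasets and graph-based / SMILES models.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = make_regression(n_samples=500, n_features=30, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Pre-set hyperparameters: a single cheap fit.
preset = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_train, y_train)

# Hyperparameter optimization: cross-validated grid search over the training
# data only, costing |grid| x n_folds extra fits (hence the large speed gap).
grid = GridSearchCV(
    RandomForestRegressor(random_state=0),
    param_grid={"n_estimators": [100, 300],
                "max_depth": [None, 10, 30],
                "min_samples_leaf": [1, 3, 5]},
    cv=5,
    scoring="neg_root_mean_squared_error",
).fit(X_train, y_train)

# Identical statistical measure for both models on the same test set:
# this is the like-for-like comparison the abstract argues for.
def rmse(model):
    return np.sqrt(mean_squared_error(y_test, model.predict(X_test)))

print(f"pre-set RMSE: {rmse(preset):.2f}  tuned RMSE: {rmse(grid.best_estimator_):.2f}")
```

If the tuned model's test RMSE is no better than the pre-set one, the extra search budget bought nothing except, potentially, overfitting to the cross-validation folds, which is the effect the study reports.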

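The abstract describes Transformer CNN as an NLP-style representation-learning method operating on SMILES strings. As a rough illustration of what "treating SMILES as text" means, here is a hypothetical character-level tokenizer (this is not the authors' implementation, and the toy vocabulary covers only a few common organic-chemistry tokens):

```python
# A minimal, hypothetical sketch of SMILES tokenization: the kind of
# NLP-style input that SMILES-based models such as Transformer CNN consume.
def tokenize_smiles(smiles: str) -> list[int]:
    # Two-character element symbols must be matched before single characters.
    vocab = ["Cl", "Br", "C", "N", "O", "S", "F", "c", "n", "o", "s",
             "(", ")", "=", "#", "1", "2", "3"]
    tokens, i = [], 0
    while i < len(smiles):
        for tok in vocab:
            if smiles.startswith(tok, i):
                tokens.append(vocab.index(tok))
                i += len(tok)
                break
        else:
            raise ValueError(f"unknown token at position {i}: {smiles[i]!r}")
    return tokens

print(tokenize_smiles("CCOc1ccccc1"))  # ethoxybenzene -> integer token ids
```

The resulting integer sequence is what an embedding layer plus Transformer/CNN stack would consume, in contrast to the molecular graphs used by the graph-based methods the paper compares against.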

Fig. 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4def/11629497/88434e19fc8f/13321_2024_934_Fig1_HTML.jpg
