
Optimization of non-smooth functions via differentiable surrogates.

Author Information

Chen Shikun, Huang Zebin, Zheng Wenlong

Affiliations

College of Finance and Information, Ningbo University of Finance & Economics, Ningbo, China.

School of Management, Xi'an University of Finance & Economics, Xi'an, China.

Publication Information

PLoS One. 2025 May 30;20(5):e0321862. doi: 10.1371/journal.pone.0321862. eCollection 2025.

Abstract

Mathematical optimization is fundamental across many scientific and engineering applications. While data-driven models like gradient boosting and random forests excel at prediction tasks, they often lack mathematical regularity, being non-differentiable or even discontinuous. These models are commonly used to predict outputs based on a combination of fixed parameters and adjustable variables. A key transition in optimization involves moving beyond simple prediction to determine optimal variable values. Specifically, the challenge lies in identifying values of adjustable variables that maximize the output quality according to the model's predictions, given a set of fixed parameters. To address this challenge, we propose a method that combines XGBoost's superior prediction accuracy with neural networks' differentiability as optimization surrogates. The approach leverages gradient information from neural networks to guide SLSQP optimization while maintaining XGBoost's prediction precision. Through extensive testing on classical optimization benchmarks including Rosenbrock, Levy, and Rastrigin functions with varying dimensions and constraint conditions, we demonstrate that our method achieves solutions up to 40% better than traditional methods while reducing computation time by orders of magnitude. The framework consistently maintains near-zero constraint violations across all test cases, even as problem complexity increases. This approach bridges the gap between model accuracy and optimization efficiency, offering a practical solution for optimizing non-differentiable machine learning models that can be extended to other tree-based ensemble algorithms. The method has been successfully applied to real-world steel alloy optimization, where it achieved superior performance while maintaining all metallurgical composition constraints.
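The core idea in the abstract, replacing a non-differentiable tree-ensemble response surface with a smooth surrogate whose gradients can drive SLSQP, can be illustrated with a minimal sketch. Here a quantized quadratic stands in for a tree model's step-function output, and a least-squares polynomial fit stands in for the paper's neural-network surrogate; `tree_like_model` and the polynomial surrogate are illustrative stand-ins, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

# Non-smooth "model": a quantized quadratic whose piecewise-constant output
# mimics a tree ensemble's step-function response surface (hypothetical
# stand-in for an XGBoost model; its gradient is zero almost everywhere).
def tree_like_model(x):
    return np.round(4 * (x - 1.5) ** 2, 1)  # minimum near x = 1.5

# Differentiable surrogate: a quadratic least-squares fit plays the role
# of the neural-network surrogate described in the abstract.
xs = np.linspace(0.0, 3.0, 200)
coeffs = np.polyfit(xs, tree_like_model(xs), deg=2)
surrogate = np.poly1d(coeffs)
surrogate_grad = surrogate.deriv()

# SLSQP follows the surrogate's gradient; the non-smooth model then
# scores the solution that the smooth optimization produced.
res = minimize(lambda x: surrogate(x[0]),
               x0=np.array([0.0]),
               jac=lambda x: np.array([surrogate_grad(x[0])]),
               method="SLSQP",
               bounds=[(0.0, 3.0)])
x_opt = res.x[0]
print(x_opt, tree_like_model(x_opt))
```

In the paper's setting, the surrogate is trained to mimic the XGBoost model's predictions, so the gradient-guided search stays consistent with the more accurate non-differentiable predictor.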


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d54f/12124525/ec3c8fa65057/pone.0321862.g001.jpg
