Chen Shikun, Zheng Wenlong
College of Finance and Information, Ningbo University of Finance & Economics, Ningbo, China.
PLoS One. 2025 Mar 17;20(3):e0319515. doi: 10.1371/journal.pone.0319515. eCollection 2025.
Ensemble regression methods are widely used to improve prediction accuracy by combining multiple regression models, especially when dealing with continuous numerical targets. However, most ensemble voting regressors assign equal weights to each base model's predictions, which can limit their effectiveness, particularly when no domain knowledge is available to guide the weighting. This uniform weighting approach does not account for the fact that some models may perform better than others on different datasets, leaving room for improvement in optimizing ensemble performance. To overcome this limitation, we propose the RRMSE (Relative Root Mean Square Error) Voting Regressor, a new ensemble regression technique that assigns weights to each base model based on its relative error rate. By using an RRMSE-based weighting function, our method gives more importance to models that demonstrate higher accuracy, thereby enhancing overall prediction quality. We tested the RRMSE Voting Regressor on six popular regression datasets and compared its performance with several state-of-the-art ensemble regression algorithms. The results show that the RRMSE Voting Regressor consistently achieves lower prediction errors than existing methods across all tested datasets. This improvement highlights the effectiveness of using relative error metrics for weighting in ensemble models. Our approach not only fills a gap in current ensemble regression techniques but also provides a reliable and adaptable method for boosting prediction performance in various machine learning tasks. By leveraging the strengths of individual models through error-aware weighting, the RRMSE Voting Regressor offers a significant advancement in the field of ensemble learning.
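The abstract does not give the exact weighting formula, but the idea it describes can be sketched with scikit-learn's `VotingRegressor`. The sketch below is an illustrative assumption, not the authors' implementation: each base model's relative RMSE (its validation RMSE divided by the sum of all base models' RMSEs) is inverted and normalized so that more accurate models receive larger weights. The dataset, base learners, and validation split are all placeholders.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import VotingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor
from sklearn.tree import DecisionTreeRegressor

# Synthetic stand-in for the paper's benchmark datasets
X, y = make_regression(n_samples=500, n_features=8, noise=10.0, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

# Arbitrary example base models; the paper's choice may differ
base_models = [
    ("lr", LinearRegression()),
    ("dt", DecisionTreeRegressor(random_state=0)),
    ("knn", KNeighborsRegressor()),
]

# Relative RMSE of each model: its validation RMSE over the sum of all RMSEs
rmses = []
for name, model in base_models:
    model.fit(X_train, y_train)
    rmses.append(np.sqrt(mean_squared_error(y_val, model.predict(X_val))))
rrmse = np.array(rmses) / np.sum(rmses)

# Invert and renormalize: lower relative error -> larger voting weight
weights = (1.0 / rrmse) / np.sum(1.0 / rrmse)

ensemble = VotingRegressor(estimators=base_models, weights=weights.tolist())
ensemble.fit(X_train, y_train)
preds = ensemble.predict(X_val)
```

With this weighting, the most accurate base model on the validation set always receives the largest weight, while uniform weights (the default `weights=None`) would treat all three models identically.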