A combination of ridge and Liu regressions for extreme learning machine.

Author information

Yıldırım Hasan, Özkale M Revan

Affiliations

Department of Mathematics, Karamanoğlu Mehmetbey University, 70100 Karaman, Turkey.

Department of Statistics, Çukurova University, 01330 Adana, Turkey.

Publication information

Soft comput. 2023;27(5):2493-2508. doi: 10.1007/s00500-022-07745-x. Epub 2022 Dec 22.

Abstract

The extreme learning machine (ELM), a type of feedforward neural network, has been widely used to obtain useful insights across various disciplines and real-world applications. Despite advantages such as speed and high adaptability, ELM becomes unstable in the presence of multicollinearity, and additional improvements are needed to overcome this. Regularization is one of the best choices for addressing this drawback. Although ridge and Liu regressions have been considered effective regularization methods for the ELM algorithm, each has its own characteristics, such as the form of the tuning parameter, the level of shrinkage, and the norm of the coefficients. Instead of focusing on only one of these regularization methods, we propose a combination of ridge and Liu regressions in a unified form for ELM as a remedy for the aforementioned drawbacks. To investigate the performance of the proposed algorithm, comprehensive comparisons were carried out on various real-world data sets. The results show that the proposed algorithm is more effective, in terms of generalization capability, than ELM and its ridge- and Liu-based variants, RR-ELM and Liu-ELM. The generalization gain over RR-ELM and Liu-ELM is notable and increases as the number of hidden nodes grows. The proposed algorithm also outperforms ELM on all data sets and at all node numbers in that it yields a smaller coefficient norm and a smaller standard deviation of the norm. Finally, the proposed algorithm can be applied to both regression and classification problems.
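
The abstract does not state the explicit formula of the combined estimator. The minimal sketch below assumes a two-parameter (ridge-Liu) form for the ELM output weights, beta(k, d) = (H'H + kI)^(-1)(H'T + k*d*beta_OLS), which reduces to RR-ELM when d = 0, to Liu-ELM when k = 1, and to plain ELM as k -> 0; the paper's actual combined form may differ. The hidden-layer construction, function names, and toy data are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_hidden_output(X, W, b):
    # Hidden-layer output matrix H: random input weights W, biases b,
    # sigmoid activation (standard ELM random-feature construction).
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

def combined_ridge_liu_weights(H, T, k, d):
    # Unregularized (plain ELM) least-squares solution, used by the Liu part.
    beta_ols = np.linalg.lstsq(H, T, rcond=None)[0]
    G = H.T @ H + k * np.eye(H.shape[1])
    # Assumed combined estimator: (H'H + kI)^(-1) (H'T + k*d*beta_ols).
    # d = 0 gives the ridge (RR-ELM) solution; k = 1 gives the Liu (Liu-ELM) solution.
    return np.linalg.solve(G, H.T @ T + k * d * beta_ols)

# Tiny regression example with L hidden nodes (illustrative data).
X = rng.normal(size=(200, 5))
y = (np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)).reshape(-1, 1)
L = 50
W = rng.normal(size=(5, L))
b = rng.normal(size=(1, L))
H = elm_hidden_output(X, W, b)
beta = combined_ridge_liu_weights(H, y, k=0.5, d=0.3)
y_hat = H @ beta
print("training RMSE:", np.sqrt(np.mean((y - y_hat) ** 2)))
```

In practice the tuning parameters k and d would be chosen by cross-validation or an analytical rule, as is usual for ridge- and Liu-type estimators.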

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2057/9774081/2bc20b782499/500_2022_7745_Fig1_HTML.jpg
