Zhou Yong Quan, Niu Yan Biao, Luo Qi Fang, Jiang Ming
College of Artificial Intelligence, Guangxi University for Nationalities, Nanning 530006, China.
Key Laboratory of Guangxi High Schools Complex System and Computational Intelligence, Nanning 530006, China.
Math Biosci Eng. 2020 Sep 10;17(5):5987-6025. doi: 10.3934/mbe.2020319.
This paper presents an improved teaching-learning-based whale optimization algorithm (TSWOA) that incorporates the simplex method. First, combining the whale optimization algorithm (WOA) with the teaching-learning-based optimization algorithm not only achieves a better balance between exploration and exploitation in WOA, but also gives the whales a self-learning ability grounded in their biological background, greatly enriching the theory of the original WOA. Second, the simplex method is added to refine the current worst agent, preventing agents from searching at the boundary and increasing the convergence accuracy and speed of the algorithm. To evaluate the performance of the improved algorithm, TSWOA is employed to train multi-layer perceptron (MLP) neural networks, a task for which it is difficult to devise a satisfactory and effective optimization algorithm. Fifteen data sets were selected from the UCI machine learning repository, and the statistical results were compared with GOA, GSO, SSO, FPA, GA, and WOA, respectively. The statistical results show that TSWOA performs better than WOA and several well-established algorithms for training multi-layer perceptron neural networks.
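The simplex-method refinement of the worst agent that the abstract describes can be illustrated with a minimal sketch. The function below applies one Nelder-Mead-style reflection/expansion/contraction step to the worst member of a population; it is a generic illustration of the technique (function name, parameters `alpha`, `gamma`, `beta`, and the minimization convention are assumptions for this sketch, not the paper's exact TSWOA formulation).

```python
import numpy as np

def simplex_refine_worst(population, fitness, objective,
                         alpha=1.0, gamma=2.0, beta=0.5):
    """Apply one simplex-style update (reflect/expand/contract) to the
    worst agent of a population, assuming a minimization problem.

    Illustrative sketch only; parameter values and update rules follow the
    classic Nelder-Mead step, not necessarily the paper's TSWOA variant.
    """
    worst = int(np.argmax(fitness))
    best = int(np.argmin(fitness))
    # Centroid of all agents except the worst one.
    centroid = (population.sum(axis=0) - population[worst]) / (len(population) - 1)
    # Reflect the worst agent through the centroid.
    reflected = centroid + alpha * (centroid - population[worst])
    f_ref = objective(reflected)
    if f_ref < fitness[best]:
        # Reflection beat the best agent: try expanding further.
        expanded = centroid + gamma * (centroid - population[worst])
        f_exp = objective(expanded)
        candidate, f_cand = (expanded, f_exp) if f_exp < f_ref else (reflected, f_ref)
    elif f_ref < fitness[worst]:
        candidate, f_cand = reflected, f_ref
    else:
        # Reflection failed: contract toward the centroid instead.
        contracted = centroid - beta * (centroid - population[worst])
        candidate, f_cand = contracted, objective(contracted)
    # Replace the worst agent only if the candidate improves on it,
    # so the population's worst fitness never gets worse.
    if f_cand < fitness[worst]:
        population[worst] = candidate
        fitness[worst] = f_cand
    return population, fitness
```

In a metaheuristic trainer for an MLP, each agent would be the network's weights flattened into one vector and `objective` would return the training mean-squared error, so this step steers the worst candidate network away from the search-space boundary toward the population's centroid.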