Tejani Ghanshyam G, Sharma Sunil Kumar, Mishra Shailendra
Department of Research Analytics, Saveetha Dental College and Hospitals, Saveetha Institute of Medical and Technical Sciences, Saveetha University, Chennai, 600077, India.
Department of Industrial Engineering and Management, Yuan Ze University, Taoyuan, 320315, Taiwan.
Sci Rep. 2025 Aug 29;15(1):31867. doi: 10.1038/s41598-025-10596-9.
Meta-heuristic optimization algorithms require a delicate balance between exploration and exploitation to search effectively for global optima without premature convergence. Parallel Sub-Class Modified Teaching-Learning-Based Optimization (PSC-MTLBO) is an improved version of TLBO proposed in this study to enhance search efficiency and solution accuracy. The proposed approach integrates three existing modifications (adaptive teaching factors, tutorial-based learning, and self-motivated learning) while introducing two novel enhancements: a sub-class division strategy and a challenger learners' model, which together improve diversity and convergence speed. The method was evaluated on three benchmark function sets (23 classical functions, 25 CEC2005 functions, and 30 CEC2014 functions) and two real-world truss topology optimization problems. Experimental results confirm that PSC-MTLBO outperforms standard TLBO, MTLBO, and other meta-heuristics such as PSO, DE, and GWO. For instance, PSC-MTLBO achieved the top overall rank on 80% of the test functions, reducing function errors by as much as 95% relative to traditional TLBO. In truss topology optimization, PSC-MTLBO designed lighter and more cost-effective structures, achieving a 7.2% weight reduction over the best previously reported solutions. The challenger learners' model enhanced adaptability, whereas the sub-class strategy improved convergence and the stability of results. In conclusion, PSC-MTLBO offers an efficient and scalable optimization framework that advances notably over existing algorithms and is well suited to solving complex optimization problems.
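For readers unfamiliar with the underlying algorithm, the following is a minimal, illustrative sketch of classical TLBO (teacher phase plus learner phase) with a simple rank-based sub-class split, in which each sub-class is taught by its own best member. This is a generic sketch only: the specific sub-class division rule, the challenger learners' model, and the other PSC-MTLBO modifications are not specified in the abstract, so every detail beyond standard TLBO here (the `n_sub` partitioning, the round-robin rank split, the bounds) is an assumption for demonstration.

```python
import numpy as np

def tlbo_subclass(f, dim=5, pop=30, n_sub=3, iters=200, lb=-5.0, ub=5.0, seed=0):
    """Illustrative TLBO with a rank-based sub-class split.

    NOTE: this is a generic TLBO sketch, not the paper's PSC-MTLBO;
    the sub-class partitioning scheme is a hypothetical stand-in.
    """
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (pop, dim))
    fit = np.apply_along_axis(f, 1, X)

    for _ in range(iters):
        order = np.argsort(fit)          # rank learners so sub-classes stay balanced
        for g in range(n_sub):
            idx = order[g::n_sub]        # members of this sub-class (round-robin by rank)
            teacher = X[idx[0]]          # best member acts as the local teacher
            mean = X[idx].mean(axis=0)

            # --- teacher phase: move toward the teacher, away from the class mean ---
            TF = rng.integers(1, 3)      # teaching factor, randomly 1 or 2
            for i in idx:
                cand = np.clip(X[i] + rng.random(dim) * (teacher - TF * mean), lb, ub)
                fc = f(cand)
                if fc < fit[i]:          # greedy acceptance
                    X[i], fit[i] = cand, fc

            # --- learner phase: pairwise interaction within the sub-class ---
            for i in idx:
                j = rng.choice(idx)
                if j == i:
                    continue
                step = X[j] - X[i] if fit[j] < fit[i] else X[i] - X[j]
                cand = np.clip(X[i] + rng.random(dim) * step, lb, ub)
                fc = f(cand)
                if fc < fit[i]:
                    X[i], fit[i] = cand, fc

    best = int(np.argmin(fit))
    return X[best], float(fit[best])

# Usage: minimize the sphere function, a standard unimodal benchmark.
sphere = lambda x: float(np.sum(x * x))
x_best, f_best = tlbo_subclass(sphere)
```

A design note on the sketch: splitting by rank rather than at random keeps each sub-class's fitness spread similar, so every local teacher has learners to pull forward; the paper's actual division strategy and its parallel evaluation are likely more elaborate.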