Hussain Wajahat, Mushtaq Muhammad Faheem, Shahroz Mobeen, Akram Urooj, Ghith Ehab Seif, Tlija Mehdi, Kim Tai-Hoon, Ashraf Imran
Department of Computer Science, The Islamia University of Bahawalpur, Bahawalpur, Punjab, Pakistan.
Department of Artificial Intelligence, The Islamia University of Bahawalpur, Bahawalpur, Punjab, 63100, Pakistan.
Sci Rep. 2025 Jan 6;15(1):1003. doi: 10.1038/s41598-024-76178-3.
Model optimization is a problem of great concern and challenge for developing an image classification model. In image classification, selecting the appropriate hyperparameters can substantially boost the model's ability to learn intricate patterns and features from complex image data. Hyperparameter optimization helps to prevent overfitting by finding the right balance between a model's complexity and its generalization. An ensemble genetic algorithm and convolutional neural network (EGACNN) model is proposed to enhance image classification by fine-tuning hyperparameters. The convolutional neural network (CNN) model is combined with a genetic algorithm (GA) using stacking on the Modified National Institute of Standards and Technology (MNIST) dataset to enhance efficiency and prediction rate on image classification. The GA optimizes the number of layers, kernel size, learning rate, dropout rate, and batch size of the CNN model to improve its accuracy and performance. The objective of this research is to improve the CNN-based image classification system by utilizing the advantages of ensemble learning and the GA. The highest accuracy, 99.91%, is obtained using the proposed EGACNN model, while the ensemble CNN and spiking neural network (CSNN) model shows an accuracy of 99.68%. Ensemble approaches like EGACNN and CSNN tend to be more effective than CNN, recurrent neural network (RNN), AlexNet, ResNet, and VGG models. Hyperparameter optimization of deep learning classification models reduces human effort and produces better prediction results. Performance comparison with existing approaches also shows the superior performance of the proposed model.
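The abstract describes a GA searching over CNN hyperparameters (number of layers, kernel size, learning rate, dropout rate, batch size). The paper's actual implementation is not given here; the following is a minimal sketch of that kind of search using only the standard library, with a toy fitness function standing in for the validation accuracy of a trained CNN. The search space values and all function names are illustrative assumptions, not the authors' settings.

```python
import random

# Hypothetical search space mirroring the hyperparameters the abstract
# lists; the actual ranges used in the paper are not stated here.
SEARCH_SPACE = {
    "num_layers": [2, 3, 4, 5],
    "kernel_size": [3, 5, 7],
    "learning_rate": [1e-2, 1e-3, 1e-4],
    "dropout_rate": [0.2, 0.3, 0.5],
    "batch_size": [32, 64, 128],
}

def random_individual(rng):
    # One candidate configuration: a random choice per hyperparameter.
    return {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}

def crossover(a, b, rng):
    # Uniform crossover: each gene is inherited from either parent.
    return {k: rng.choice([a[k], b[k]]) for k in SEARCH_SPACE}

def mutate(ind, rng, rate=0.2):
    # With probability `rate`, resample a gene from the search space.
    return {k: (rng.choice(SEARCH_SPACE[k]) if rng.random() < rate else v)
            for k, v in ind.items()}

def evolve(fitness, generations=10, pop_size=12, seed=0):
    rng = random.Random(seed)
    population = [random_individual(rng) for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=fitness, reverse=True)
        parents = scored[: pop_size // 2]  # truncation selection
        children = [
            mutate(crossover(rng.choice(parents), rng.choice(parents), rng), rng)
            for _ in range(pop_size - len(parents))
        ]
        population = parents + children
    return max(population, key=fitness)

def toy_fitness(ind):
    # Stand-in for validation accuracy of a CNN trained with `ind`;
    # an arbitrary smooth score so the example runs instantly.
    return (ind["num_layers"] * 0.1
            - abs(ind["learning_rate"] - 1e-3) * 100
            + (1 - ind["dropout_rate"]))

best = evolve(toy_fitness)
```

In practice the fitness evaluation would train (or partially train) a CNN with each candidate configuration and return its validation accuracy, which is what makes GA-based tuning expensive but effective at replacing manual search.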