Heuristic optimization in classification atoms in molecules using GCN via uniform simulated annealing.

Author Information

Polowczyk Agnieszka, Polowczyk Alicja, Woźniak Marcin

Affiliations

Faculty of Applied Mathematics, Silesian University of Technology, Gliwice, 44-100, Poland.

Publication Information

Sci Rep. 2025 May 20;15(1):17519. doi: 10.1038/s41598-025-00340-8.

Abstract

Graph neural networks are becoming increasingly popular in deep learning because they can process data with irregular, graph-structured layouts, preserving the additional spatial dependencies encoded in the arrangement of nodes. Such networks achieve better accuracy in classification problems where information from neighboring nodes is also crucial. However, training such a model is complex, time-consuming, and challenging. To date, metaheuristic algorithms have been used primarily to optimize convolutional networks and to find suitable hyperparameters; these include the genetic algorithm, particle swarm optimization, differential evolution, and the covariance matrix adaptation evolution strategy. In this paper, we propose a Simulated Annealing metaheuristic with a Uniform distribution for optimizing the weights of GCNs, used as a hybrid in combination with gradient optimizers. The performance of our technique was tested on the QM7 dataset, which was split into two datasets: imbalanced and balanced. Experimental results confirm that the proposed optimization method outperformed other standalone SOTA optimization models, including gradient and heuristic methods, achieving in each case lower loss function values, higher accuracy on the balanced dataset, and higher AUC (macro) on the imbalanced dataset.
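
The abstract describes a hybrid scheme that alternates gradient-based weight updates with uniform-distribution simulated annealing steps for a graph convolutional network. The sketch below illustrates one possible reading of that idea in plain PyTorch; the TinyGCN architecture, the hybrid_step routine, the U(-step, +step) perturbation, the Metropolis acceptance rule, the cooling schedule, and all hyperparameters are illustrative assumptions rather than the authors' implementation, and the random toy graph merely stands in for the QM7 data.

```python
# Minimal sketch (assumptions, not the paper's code): a two-layer GCN whose
# weights are updated by a gradient step followed by a uniform simulated
# annealing proposal with Metropolis acceptance.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyGCN(nn.Module):
    """Two-layer GCN: H1 = ReLU(A_hat X W0), logits = A_hat H1 W1."""

    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.w0 = nn.Linear(in_dim, hid_dim, bias=False)
        self.w1 = nn.Linear(hid_dim, n_classes, bias=False)

    def forward(self, x, a_hat):
        h = F.relu(a_hat @ self.w0(x))
        return a_hat @ self.w1(h)


def hybrid_step(model, opt, x, a_hat, y, temperature, step_size=1e-2):
    """One gradient update followed by one uniform-SA proposal on the weights."""
    # --- gradient phase ---
    opt.zero_grad()
    loss = F.cross_entropy(model(x, a_hat), y)
    loss.backward()
    opt.step()

    # --- simulated-annealing phase with a uniform perturbation ---
    with torch.no_grad():
        current = F.cross_entropy(model(x, a_hat), y).item()
        backup = [p.detach().clone() for p in model.parameters()]
        for p in model.parameters():
            p.add_((torch.rand_like(p) * 2 - 1) * step_size)  # ~ U(-step, +step)
        proposed = F.cross_entropy(model(x, a_hat), y).item()
        # Metropolis rule: keep improvements, sometimes keep worse solutions
        # with probability exp(-delta / T).
        accept = proposed < current or torch.rand(1).item() < math.exp(
            -(proposed - current) / max(temperature, 1e-8)
        )
        if not accept:  # reject: restore the pre-perturbation weights
            for p, b in zip(model.parameters(), backup):
                p.copy_(b)
    return proposed if accept else current


# Toy usage on random data: 6 nodes, 4 features, 3 atom classes.
n, d, c = 6, 4, 3
x = torch.randn(n, d)
adj = (torch.rand(n, n) > 0.5).float()
adj = ((adj + adj.t() + torch.eye(n)) > 0).float()           # symmetric + self-loops
deg_inv_sqrt = adj.sum(dim=1).pow(-0.5)
a_hat = deg_inv_sqrt[:, None] * adj * deg_inv_sqrt[None, :]  # D^-1/2 A D^-1/2
y = torch.randint(0, c, (n,))

model = TinyGCN(d, 16, c)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for epoch in range(50):
    t = 1.0 * (0.95 ** epoch)  # geometric cooling schedule (illustrative)
    loss = hybrid_step(model, opt, x, a_hat, y, temperature=t)
```

The annealing phase lets the optimizer occasionally accept a slightly worse weight configuration early on (high temperature) to escape poor local minima, then behaves almost greedily as the temperature decays.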

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/02bf/12092647/f1cdcfd70dc1/41598_2025_340_Fig1_HTML.jpg
