

Evaluating single multiplicative neuron models in physics-informed neural networks for differential equations.

Author information

Agraz Melih

Affiliation

Department of Statistics, Giresun University, Giresun, 28200, Turkey.

Publication information

Sci Rep. 2024 Aug 17;14(1):19073. doi: 10.1038/s41598-024-67483-y.

Abstract

Machine learning is a prominent and highly effective field of study, known for producing strong results in estimation and classification tasks. Within this domain, artificial neural networks (ANNs) have emerged as one of the most powerful methodologies. Physics-informed neural networks (PINNs) have proven particularly adept at solving physics problems formulated as differential equations, incorporating boundary and initial conditions into the ANN's loss function. However, a critical challenge for ANNs lies in determining the optimal architecture, i.e., selecting the appropriate number of neurons and layers. Traditionally, the Single Multiplicative Neuron Model (SMNM) has been explored as a solution to this issue, using a single neuron with a multiplicative aggregation function in the hidden layer to improve computational efficiency. This study initially aimed to apply the SMNM within the PINNs framework, targeting a differential equation with boundary conditions. Upon implementation, however, it was discovered that although the conventional SMNM approach was theorized to offer significant advantages, its multiplicative aggregation function led to a failure of convergence. Consequently, we introduced a "mimic single multiplicative neuron model (mimic-SMNM)" employing a single-neuron architecture, designed to retain the SMNM's conceptual advantages while ensuring convergence and computational efficiency. Comparative analysis revealed that the real PINNs solved the equation accurately, the true SMNM failed to converge, and the mimic model stood out for its architectural simplicity and computational feasibility, implying that it is faster and more efficient than real PINNs for solving simple differential equations. Furthermore, our findings demonstrated that the proposed mimic-SMNM achieves a fivefold increase in computational speed over real PINNs after 30,000 epochs.
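The abstract contrasts a standard additive neuron with the SMNM's multiplicative aggregation. A minimal NumPy sketch of the two aggregation rules, assuming the classic SMNM formulation (hidden output = f of the product of the affine terms w_i*x_i + b_i); all weights and inputs here are illustrative values, not from the paper:

```python
import numpy as np

def additive_neuron(x, w, b):
    """Standard single neuron: weighted-sum aggregation, tanh activation."""
    return np.tanh(np.dot(w, x) + b)

def smnm_neuron(x, w, b):
    """Single Multiplicative Neuron Model (SMNM): the lone hidden unit
    aggregates its inputs by multiplying the affine terms (w_i*x_i + b_i)
    rather than summing them."""
    return np.tanh(np.prod(w * x + b))

x = np.array([0.5, -0.2, 0.1])   # illustrative inputs
w = np.array([1.0, 2.0, -1.0])   # illustrative weights
b = np.array([0.1, 0.3, 0.2])    # illustrative per-input biases

# Affine terms: [0.6, -0.1, 0.1]; their sum-based and product-based
# aggregations behave very differently.
print(additive_neuron(x, w, b[0]))  # tanh(0.0 + 0.1)
print(smnm_neuron(x, w, b))         # tanh(0.6 * -0.1 * 0.1) = tanh(-0.006)
```

The product of many affine terms can collapse toward zero or blow up as inputs vary, and its gradient couples every factor, which gives some intuition for the convergence failure the paper reports when the true SMNM is trained inside a PINN loss.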


Fig. 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/65c5/11330505/4d3210ddf100/41598_2024_67483_Fig1_HTML.jpg
