Universidade Estadual de Campinas (UNICAMP), Campinas, Brazil.
Universidade Estadual de Campinas (UNICAMP), Campinas, Brazil; Instituto de Pesquisa Eldorado, Campinas, Brazil.
Neural Netw. 2024 Dec;180:106632. doi: 10.1016/j.neunet.2024.106632. Epub 2024 Aug 13.
The universal approximation theorem states that a neural network with one hidden layer can approximate continuous functions on compact sets with any desired precision. This theorem supports the use of neural networks in a variety of applications, including regression and classification tasks. Furthermore, it holds for real-valued neural networks and for some hypercomplex-valued neural networks, such as complex-, quaternion-, tessarine-, and Clifford-valued networks. Hypercomplex-valued neural networks, however, form only a particular class of vector-valued neural networks, namely those defined on algebras endowed with additional algebraic or geometric properties. This paper extends the universal approximation theorem to a wide range of vector-valued neural networks, which include the hypercomplex-valued models as particular instances. Precisely, we introduce the concept of non-degenerate algebra and state the universal approximation theorem for neural networks defined on such algebras.
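As a concrete illustration of the kind of model covered by the theorem, the sketch below builds a single-hidden-layer quaternion-valued network in NumPy: the Hamilton product plays the role of the algebra multiplication, and the activation is applied component-wise (a "split" activation), as is common in hypercomplex-valued models. This is a minimal sketch under these assumptions, not the paper's implementation; names such as `qmul` and `QuaternionMLP` are illustrative.

```python
import numpy as np

def qmul(a, b):
    """Hamilton product of quaternions stored as (..., 4) arrays (broadcasts)."""
    a1, a2, a3, a4 = np.moveaxis(a, -1, 0)
    b1, b2, b3, b4 = np.moveaxis(b, -1, 0)
    return np.stack([
        a1*b1 - a2*b2 - a3*b3 - a4*b4,
        a1*b2 + a2*b1 + a3*b4 - a4*b3,
        a1*b3 - a2*b4 + a3*b1 + a4*b2,
        a1*b4 + a2*b3 - a3*b2 + a4*b1,
    ], axis=-1)

class QuaternionMLP:
    """Single hidden layer: y = W2 * sigma(W1 * x + b1) + b2, with
    quaternion-valued weights and a split (component-wise) tanh activation."""
    def __init__(self, n_in, n_hidden, n_out, rng=None):
        rng = np.random.default_rng(rng)
        self.W1 = rng.normal(scale=0.1, size=(n_hidden, n_in, 4))
        self.b1 = np.zeros((n_hidden, 4))
        self.W2 = rng.normal(scale=0.1, size=(n_out, n_hidden, 4))
        self.b2 = np.zeros((n_out, 4))

    def forward(self, x):
        # x: (n_in, 4) array of quaternion inputs.
        h = np.tanh(qmul(self.W1, x[None, :, :]).sum(axis=1) + self.b1)
        return qmul(self.W2, h[None, :, :]).sum(axis=1) + self.b2

# Example: map 3 quaternion inputs to 2 quaternion outputs.
net = QuaternionMLP(n_in=3, n_hidden=16, n_out=2, rng=0)
y = net.forward(np.random.default_rng(1).normal(size=(3, 4)))
print(y.shape)  # (2, 4)
```

Replacing the Hamilton product in `qmul` by the multiplication of another (non-degenerate) algebra yields the corresponding vector-valued network to which the extended theorem applies.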