Kaixuan Zhang, Qinglong Wang, C. Lee Giles
Information Sciences and Technology, Pennsylvania State University, University Park, PA 16802, USA.
Alibaba Group, Building A2, Lane 55 Chuan He Road Zhangjiang, Pudong New District, Shanghai 200135, China.
Entropy (Basel). 2021 Jan 19;23(1):127. doi: 10.3390/e23010127.
Recently, there has been a resurgence of formal language theory in deep learning research. However, most research has focused on the more practical problem of representing symbolic knowledge with machine learning, while comparatively little work has explored the fundamental connection between the two. To better understand the internal structure of regular grammars and their corresponding complexity, we categorize regular grammars using both theoretical analysis and empirical evidence. Specifically, motivated by the concentric ring representation, we relax the original order information and introduce an entropy metric for describing the complexity of different regular grammars. Based on this entropy metric, we categorize regular grammars into three disjoint subclasses: the polynomial, exponential, and proportional classes. In addition, we provide several classification theorems for different representations of regular grammars. Our analysis is validated by examining the process of learning grammars with multiple recurrent neural networks. Our results show that, as expected, more complex grammars are generally more difficult to learn.
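The abstract does not define the entropy metric itself, but one natural proxy for the growth-based complexity it describes is the rate at which the number of accepted strings grows with string length. The sketch below is illustrative only: it counts accepted strings of a DFA by dynamic programming and reports an empirical per-symbol entropy rate; the function names and the example automaton are assumptions, not the paper's definitions.

```python
from math import log2

def count_accepted(delta, start, accepting, alphabet, n):
    """Dynamic program: number of length-n strings accepted by the DFA.

    delta maps (state, symbol) -> state; accepting is a set of states.
    """
    counts = {start: 1}
    for _ in range(n):
        nxt = {}
        for state, c in counts.items():
            for sym in alphabet:
                s2 = delta[(state, sym)]
                nxt[s2] = nxt.get(s2, 0) + c
        counts = nxt
    return sum(c for s, c in counts.items() if s in accepting)

def entropy_rate(delta, start, accepting, alphabet, n=20):
    """log2(#accepted strings of length n) / n, an empirical entropy proxy."""
    m = count_accepted(delta, start, accepting, alphabet, n)
    return log2(m) / n if m > 0 else 0.0

# Example: binary strings containing at least one '1'
# (2^n - 1 accepted strings of length n, so the rate approaches 1 bit/symbol).
alphabet = "01"
delta = {("q0", "0"): "q0", ("q0", "1"): "q1",
         ("q1", "0"): "q1", ("q1", "1"): "q1"}
print(entropy_rate(delta, "q0", {"q1"}, alphabet))
```

Under this proxy, a grammar whose accepted-string count grows polynomially yields a rate tending to zero, while exponential growth yields a positive rate, loosely mirroring the polynomial/exponential distinction in the abstract.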