Chen Hongtian, Liu Zhigang, Alippi Cesare, Huang Biao, Liu Derong
IEEE Trans Neural Netw Learn Syst. 2024 May;35(5):6166-6179. doi: 10.1109/TNNLS.2022.3201511. Epub 2024 May 2.
The increasing complexity and intelligence of automation systems require the development of intelligent fault diagnosis (IFD) methodologies. Relying on the concept of a suspected space, this study develops explainable data-driven IFD approaches for nonlinear dynamic systems. More specifically, we parameterize nonlinear systems through a generalized kernel representation for system modeling and the associated fault diagnosis. An important result is a unified form of kernel representations applicable to both unsupervised and supervised learning. More importantly, a rigorous theoretical analysis reveals the existence of a bridge (i.e., a bijective mapping) between certain supervised and unsupervised learning-based entities. Notably, the designed IFD approaches achieve the same performance when this bridge is used. To better understand these results, both unsupervised and supervised neural networks are chosen as the learning tools to identify the generalized kernel representations and design the IFD schemes; an invertible neural network is then employed to build the bridge between them. This is a perspective article; its contribution lies in proposing and formalizing fundamental concepts for explainable intelligent learning methods, contributing to system modeling and data-driven IFD design for nonlinear dynamic systems.
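The "bridge" the abstract refers to is a bijective mapping realized by an invertible neural network. As a minimal illustrative sketch (not the paper's actual construction), a RealNVP-style affine coupling layer shows how such a network is invertible by design: the weights, dimensions, and untrained scale/shift nets below are assumptions for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)

class AffineCoupling:
    """One invertible coupling layer: split x into (x1, x2) and
    transform x2 conditioned on x1. The map has an exact inverse
    regardless of what the scale/shift networks compute."""

    def __init__(self, dim):
        self.d = dim // 2
        # Tiny random linear "networks" for scale and shift (hypothetical, untrained).
        self.Ws = rng.normal(scale=0.1, size=(self.d, dim - self.d))
        self.Wt = rng.normal(scale=0.1, size=(self.d, dim - self.d))

    def forward(self, x):
        x1, x2 = x[: self.d], x[self.d:]
        s = np.tanh(x1 @ self.Ws)      # bounded log-scale for numerical stability
        t = x1 @ self.Wt               # shift
        y2 = x2 * np.exp(s) + t        # elementwise affine map of x2
        return np.concatenate([x1, y2])

    def inverse(self, y):
        y1, y2 = y[: self.d], y[self.d:]
        s = np.tanh(y1 @ self.Ws)      # recompute from the untouched half
        t = y1 @ self.Wt
        x2 = (y2 - t) * np.exp(-s)     # exact algebraic inversion
        return np.concatenate([y1, x2])

x = rng.normal(size=6)
layer = AffineCoupling(6)
y = layer.forward(x)
x_rec = layer.inverse(y)
print(np.max(np.abs(x - x_rec)))       # reconstruction error at machine precision
```

Because each coupling layer is bijective, a stack of them is also bijective, which is the structural property that lets two representations (here, hypothetically, a supervised and an unsupervised one) be mapped onto each other without loss of information.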