Papadokonstantakis Stavros, Lygeros Argyrios, Jacobsson Sven P
School of Chemical Engineering, National Technical University of Athens, Athens GR-15780, Greece.
Neural Netw. 2006 May;19(4):500-13. doi: 10.1016/j.neunet.2005.09.002. Epub 2005 Dec 13.
Neural networks (NNs) are 'black box' models and therefore suffer from interpretation difficulties. This paper compares four recent methods for inferring variable influence in NNs, which assist the interpretation task during different phases of the modeling procedure. They belong, respectively, to information theory (ITSS), the Bayesian framework (ARD), the analysis of the network's weights (GIM), and the sequential omission of variables (SZW). The comparison is based on artificial and real data sets of differing size, complexity, and noise level; the influence of the neural network's size is also considered. The results provide useful information about the agreement between the methods under different conditions. In general, SZW and GIM differ from ARD regarding variable influence, even when applied to NNs of similar modeling accuracy and when larger data set sizes are used. ITSS produces results similar to those of SZW and GIM, although it suffers more from the 'curse of dimensionality'.
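The weight-analysis and sequential-omission families of methods mentioned in the abstract can be illustrated on a toy single-hidden-layer network. The sketch below is a minimal illustration, not the paper's implementation: `garson_importance` uses a Garson-style formula (a common weight-analysis measure in the GIM family), and `szw_influence` zeroes the weights of one input at a time and records the increase in mean squared error (the sequential-zeroing idea behind SZW). All weights and data here are random placeholders.

```python
import numpy as np

def forward(X, W_ih, b_h, W_ho, b_o):
    """One-hidden-layer network: tanh hidden units, linear output."""
    h = np.tanh(X @ W_ih + b_h)
    return h @ W_ho + b_o

def garson_importance(W_ih, W_ho):
    """Garson-style weight-based importance (GIM family).

    Contribution of input i through hidden unit j is
    |W_ih[i, j]| * |W_ho[j]|, normalized per hidden unit,
    then summed over hidden units and renormalized to sum to 1.
    """
    c = np.abs(W_ih) * np.abs(W_ho).reshape(1, -1)   # (n_in, n_hid)
    c = c / c.sum(axis=0, keepdims=True)             # share per hidden unit
    imp = c.sum(axis=1)                              # total share per input
    return imp / imp.sum()

def szw_influence(X, y, W_ih, b_h, W_ho, b_o):
    """Sequential zeroing of weights (SZW idea).

    For each input, zero its input-to-hidden weights and record the
    resulting increase in mean squared error over the baseline.
    """
    base = np.mean((forward(X, W_ih, b_h, W_ho, b_o) - y) ** 2)
    influence = []
    for i in range(W_ih.shape[0]):
        W = W_ih.copy()
        W[i, :] = 0.0                                # remove input i's paths
        err = np.mean((forward(X, W, b_h, W_ho, b_o) - y) ** 2)
        influence.append(err - base)
    return np.array(influence)

# Toy demo with random (untrained) weights and synthetic data.
rng = np.random.default_rng(0)
n_in, n_hid = 5, 8
W_ih = rng.normal(size=(n_in, n_hid))
b_h = rng.normal(size=n_hid)
W_ho = rng.normal(size=n_hid)
b_o = 0.0
X = rng.normal(size=(200, n_in))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

imp = garson_importance(W_ih, W_ho)
infl = szw_influence(X, y, W_ih, b_h, W_ho, b_o)
print("GIM-style importance:", np.round(imp, 3))
print("SZW-style influence :", np.round(infl, 3))
```

On a trained network, inputs whose zeroed-out error increase is large (SZW) would typically also carry large weight-based importance (GIM), which is the kind of agreement between methods that the paper examines empirically.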